Deep learning is the fastest-growing field in machine learning and is used to solve many big data problems such as computer vision, speech recognition, and natural language processing. It's applied across industries to overcome real-world problems: preventing disease, building smart cities, revolutionising analytics, and much more.

Whilst deep learning is being adopted by businesses of all shapes and sizes, it can be overwhelming. Until very recently, its implementation has only been realistic for large companies with access to huge amounts of data, such as images, signals, and text, for the machine to learn from. Smaller businesses and researchers with limited historical data have previously been missing out; however, there are ways of overcoming this barrier and implementing deep learning methods.

NVIDIA is the top maker of GPUs used in computing systems for machine learning, and delivers GPU acceleration to data centres, desktops, laptops, and supercomputers. This means that businesses that haven't previously been able to implement deep learning are being given the opportunity through its platform. As a leading provider in this area, NVIDIA first attended the Deep Learning Summit in London in 2015, and has presented at and partnered with RE•WORK events every year since. They most recently joined us in London and Montreal, and will be with us this month in San Francisco as partners, as we continue the Global Deep Learning Summit Series.

At the Deep Learning Summit in San Francisco this January 25 & 26, we will hear from Clement Farabet, VP of AI Infrastructure at NVIDIA, who will be sharing his current work in AI. Save 20% on all RE•WORK summits when you register using the code NVIDIA20.

Having previously spoken in Montreal, Clement explained that whilst the first industry they plan to tap into is automotive, ‘soon doctors, radiologists, the healthcare industry and many more will be transformed by providing them with tools to rapidly create their own predictors, and use them as assistants, in their daily lives’. Following this, AI will branch progressively into other industries that need perception capabilities, e.g. video surveillance (smart cities), robotics, etc.

When discussing his current projects, Clement told us how the AI applications he's currently working on are mostly based on deep learning (DL), and unpacked the term for us, explaining that it's ‘a family of techniques that is surprisingly effective at extracting high-level patterns from data, and is being used to solve challenging perceptual problems such as speech recognition or image and video recognition.’ Traditionally, algorithms solving these problems are rule-based, whereas DL systems learn from examples and build generalised, complex, non-linear rules from them.
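To make that contrast concrete, here is a minimal sketch (our illustration, not NVIDIA's code) in plain NumPy: a hand-written rule solves XOR directly, while a tiny two-layer network learns an equivalent non-linear rule purely from the four example input/output pairs.

```python
# Minimal illustration: a hand-written rule vs. a tiny neural network
# that learns the same non-linear mapping from examples alone.
import numpy as np

# Toy problem: XOR -- no single linear rule separates these labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Rule-based approach: a human encodes the logic explicitly.
def rule_based(a, b):
    return int(a != b)

# Learning-based approach: a 2-4-1 network discovers a non-linear
# rule from the examples via gradient descent.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)          # forward pass
    p = sigmoid(h @ W2 + b2)
    dp = (p - y) * p * (1 - p)        # backprop of squared error
    dh = (dp @ W2.T) * h * (1 - h)
    W2 -= h.T @ dp;  b2 -= dp.sum(axis=0)
    W1 -= X.T @ dh;  b1 -= dh.sum(axis=0)

print([rule_based(a, b) for a, b in X])  # [0, 1, 1, 0] -- hand-coded
print(p.round(2).ravel())  # approaches [0, 1, 1, 0] -- learned from data
```

The point of the toy example: nobody told the network the XOR rule; it generalised a non-linear decision boundary from examples, which is exactly the property that scales to speech and vision.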

When asked what his team are trying to achieve with their work, Clement explained that they're trying to ‘help standardize, and simplify the process of producing such AIs, for a whole range of applications, such as perception in autonomous cars, or complex perceptual tasks in medical imaging applications. This involves managing the lifecycle of large-scale datasets, the training and testing of deep neural networks on these datasets, all the way to deployment to the edge.’ Although the team has a heavy focus on autonomous vehicles, their work ranges from distributed systems and cloud technology all the way to R&D in DL and computer vision.
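As a rough sketch of that dataset-to-edge lifecycle, the snippet below trains a small stand-in perception model and exports it for edge deployment. It assumes PyTorch and its ONNX exporter; NVIDIA's internal platform is not public, so the dataset, model, and file names here are purely illustrative.

```python
# Hypothetical sketch of the train -> test -> edge-export lifecycle,
# using PyTorch and ONNX as stand-ins for a production platform.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: 1000 random "images" with binary labels.
data = TensorDataset(torch.randn(1000, 3, 32, 32),
                     torch.randint(0, 2, (1000,)))
train_loader = DataLoader(data, batch_size=64, shuffle=True)

# Small convolutional classifier standing in for a perception model.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training loop over the managed dataset.
for epoch in range(2):
    for x, t in train_loader:
        opt.zero_grad()
        loss = loss_fn(model(x), t)
        loss.backward()
        opt.step()

# Deployment step: export to ONNX, an interchange format that edge
# runtimes (e.g. TensorRT) can consume and optimise.
model.eval()
torch.onnx.export(model, torch.randn(1, 3, 32, 32), "perception.onnx")
```

In production, the exported graph would typically be optimised by an edge runtime such as TensorRT before being deployed to a vehicle or device.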

Having heard from Clement in London and Montreal, I was fortunate enough to discuss some of his current work and ask some questions ahead of his next presentation in San Francisco:
How did you begin your work in AI, and more specifically deep learning?

I started playing with neural networks back in 2006, while still at school on an exchange program in Australia, working on unmanned helicopters (UAVs). I was designing custom circuit boards and computer architectures (running on FPGAs) to run small neural nets for things like control, navigation, and tracking. I quickly became obsessed with the idea that neural networks could learn to do anything, and that we should never need to hard-code anything, just teach them.

I then met Prof. Yann LeCun, and started working for him as a research scientist at NYU. I fell in love with his vision, and his passion for getting neural networks to do anything one could think of. I scaled up my work to support more complex neural network architectures, and co-patented a computer architecture called NeuFlow, which could run large-scale neural networks, in particular convolutional neural networks, at a very high-efficiency power/compute ratio. We started using NeuFlow to run (then) very advanced video processing pipelines, such as full real-time semantic image segmentation. This was very exciting early work that showed the way towards applications like autonomous driving. In fact, I then collaborated with Urs Muller and his team, now at NVIDIA as well, on early versions of his autonomous driving technology, which was then running on small robots.

At the time, back in 2009/10, it was still unbelievably painful and slow to train such neural networks to produce satisfying results. The tools and frameworks we used then were still very primitive. Building the right tools and platforms to accelerate our work on deep learning quickly became one of my passions. I partnered up with Koray Kavukcuoglu and Ronan Collobert to co-develop a library called Torch7, which helped us scale up our work and enabled lots of new research in Deep Learning. This passion for tools has never really left me, and in my role today, more than ever, I believe that building the right platform and software abstractions is what will take Deep Learning to the next level, in particular by facilitating its access to new industries.

With NVIDIA being the platform for all new AI-powered applications, which industries do you see benefiting from, or being disrupted by AI the most?

My belief is that every single industry will be (or is already being) disrupted by AI. As we commoditize AI by transforming it into a new layer in our software abstraction stack, we're going to see every aspect of our lives getting infused with AI. Whether in ways that are subtle, like business analytics becoming more and more infused with predictive machine learning, or in ways that are radical, like the automotive industry being entirely rebuilt from the ground up around automation.

AI, today, has the potential to supersede most of the mundane tasks we, as humans, have to perform: from searching for a pattern in a collection of photos ("I'm looking for this photo of my dog from two years ago") to driving a car in complex and unknown environments, or translating from one language to another.

I do believe that AI is going to be an assistive technology, nothing scary; it's simply going to extend us and give us superpowers.

Much like Google already lets us answer so many questions at the tap of a finger, AI is going to augment us, give us more eyes, give us access to more languages in real-time, free us from mundane tasks.

As NVIDIA works across industries from gaming to healthcare, are there any untapped sectors you think will begin to benefit from your offering?

Our platform is going to enable the automotive industry first, by letting them build the perception modules they need to assist in driving. Then doctors, radiologists, and the healthcare industry, by providing them with tools to rapidly create their own predictors and use them as assistants in their daily lives. Then, progressively, other industries that need perception capabilities, e.g. video surveillance (smart cities), robotics, etc.

Keen to hear more from NVIDIA? Join us in San Francisco for the Deep Learning Summit, and save 20% on all RE•WORK summits when you register using the code NVIDIA20.