NVIDIA has been a pioneer in accelerating deep learning, and has been developing deep learning software, libraries and tools for a number of years. Many of today's deep learning solutions rely on NVIDIA GPU-accelerated computing to train and speed up challenging applications such as image, handwriting and voice recognition. At the RE•WORK Machine Intelligence Summit this week, Axel Koehler, Principal Solution Architect at NVIDIA, will provide an overview of the latest hardware and software developments for deep learning, focusing in particular on NVIDIA® DGX-1™, the world's first purpose-built system for deep learning. In his role Axel supports researchers, scientists, engineers and hardware and software partners in the implementation of GPU-based machine learning and HPC solutions. I spoke to him ahead of the summit to learn more about his role at NVIDIA, infinite compute power, deep learning applications and more.

What motivated you to begin your work in deep learning and machine intelligence?

Machine learning is one of the most important computing developments of our time. Advanced machine learning techniques are powering an explosion in artificial intelligence, enabling new waves of smart applications and services. Deep learning is a fundamentally new software model in which billions of software neurons and trillions of connections are trained in parallel. Running DNN algorithms and learning from examples, the computer is essentially writing its own software. This radically different software model needs a new computer platform to run efficiently. That's where NVIDIA's work comes in.
NVIDIA invented the graphics processing unit (GPU) in 1999. To some, it seems counterintuitive that a chip originally designed to play 3D games has become the engine of today’s AI revolution. But in fact the problem of computer graphics has features in common with many other applications, from computational fluid dynamics and medical imaging to computer vision and natural language processing. At a high level, the unifying factor is that these problems can be parallelised. Our chip might be called a ‘graphics’ processor, but in fact it’s an incredibly versatile parallel processing engine which is playing a pivotal role in democratising AI.
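To make that parallelism concrete, here is the classic CUDA "SAXPY" kernel, a standard teaching example rather than code from the interview: the vector update y = a*x + y is split so that each element is handled by its own GPU thread, and all of them can execute at once.

```cuda
#include <cstdio>

// SAXPY: y = a*x + y. Each GPU thread updates one element,
// so all n updates can run in parallel across the GPU's cores.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // one million elements
    float *x, *y;
    // Unified memory keeps the example short (requires CUDA 6+).
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The same pattern of mapping independent data elements to thousands of threads is what makes graphics, fluid dynamics and neural network training all fit the GPU so well.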
Tell us about infinite compute power and how you see this impacting the future of deep learning.

As Greg Diamos of Baidu recently observed, the availability of huge amounts of data, combined with advances in the training of deep artificial neural networks, has turbo-charged deep learning and accelerated progress on some of the world's most complex computing problems.
Deep learning is a new software model and as such it needs a new computing platform to run it — an architecture that can efficiently execute programmer-coded commands as well as the massively parallel training of deep neural networks. Several years ago, NVIDIA anticipated deep learning’s potential and invested heavily in ensuring that our compute platform includes features specifically designed for this application. At the time, a lot of people thought we were crazy! But we see this decision to pivot towards computing’s future, rather than focusing only on its present, as crucial.
By collaborating closely with AI developers, we are continuing to improve our GPU designs, system architecture, compilers and algorithms. We've been successful in speeding up the training of deep neural networks by 50x in just three years. Faster training and iteration ultimately mean faster innovation and a faster time to a solution or market. Recently we've responded to demand for a 'plug and play' deep learning solution by introducing the NVIDIA DGX-1. It's the world's first purpose-built server for deep learning, with fully integrated hardware and software that can be deployed quickly and easily.
Another important factor in the rapid adoption of GPUs for deep learning is the NVIDIA SDK. It’s a suite of powerful tools and libraries that give data scientists and researchers the building blocks for training and deploying deep neural nets. Based on our experience in developing CUDA, our parallel computing platform, as well as feedback from the developer community, we knew a strong deep learning SDK would be vital in helping data scientists and developers make the most of the vast opportunities in deep learning.
The SDK includes DIGITS, NVIDIA's Deep Learning GPU Training System. This lets data scientists and researchers quickly design the best deep neural network for their data using real-time network behaviour visualisation. It also includes cuDNN, the NVIDIA CUDA Deep Neural Network library, whose optimised routines allow developers to focus on designing and training neural network models rather than on low-level performance tuning. The SDK provides other libraries and tools as well, such as cuBLAS, cuSPARSE, NCCL and the CUDA toolkit, all optimised for machine learning workloads.
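As a minimal sketch of what these libraries offer, the example below calls cuBLAS directly; its SGEMM matrix-multiply routine is the kind of primitive that underpins the dense-layer maths in many deep learning frameworks. The matrix values here are illustrative, not taken from the interview.

```cuda
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 2;  // 2x2 matrices; cuBLAS uses column-major storage
    float *A, *B, *C;
    cudaMallocManaged(&A, n * n * sizeof(float));
    cudaMallocManaged(&B, n * n * sizeof(float));
    cudaMallocManaged(&C, n * n * sizeof(float));
    // A = identity, B = [[1,2],[3,4]] laid out column by column.
    float a[] = {1, 0, 0, 1}, b[] = {1, 3, 2, 4};
    for (int i = 0; i < n * n; ++i) { A[i] = a[i]; B[i] = b[i]; C[i] = 0; }

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C, executed on the GPU.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();

    printf("C = [%.0f %.0f; %.0f %.0f]\n", C[0], C[2], C[1], C[3]);
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

In practice most data scientists never write code at this level; frameworks sitting on cuDNN and cuBLAS make these calls for them, which is precisely the point of shipping the routines pre-optimised in the SDK.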
What do you feel are the most valuable applications of deep learning?

I'm really excited about self-driving cars. This application of deep learning will revolutionise personal mobility services and has the potential to do amazing social good. More than a million people are killed in road traffic accidents every year, the vast majority caused by human error. Projects like the Volvo Drive Me initiative are already taking steps towards a future where autonomous cars make accidents a thing of the past. Over the next two years, Volvo will put self-driving cars equipped with NVIDIA's end-to-end deep learning platform, DRIVE PX, on real roads with normal consumers behind the wheel.
The impact of deep learning in healthcare and life sciences will also touch us all. Today, genomics researchers are applying GPU-based deep learning to understand how genetic variations can lead to disease. In the future, companies like French startup DreamQuark will help doctors and insurance professionals combine the vast amounts of data available via medical records with deep learning to create better prevention, diagnosis and care systems.
Deep learning will underpin so many advances that it’s difficult to pick just a few. From real-time speech translation to autonomous robots and machines that can read human emotions through facial analysis, deep learning and artificial intelligence are already having a transformative impact on every industry and research field.
Which industries do you think will be disrupted by deep learning in the future, and how?

All industries want intelligence. That's why AI will be part of every industry. It's helping future-factory specialists like Akeoplus create industrial robots that can learn new processes, rather than requiring production lines to undergo costly updates or replacement. It has enabled online retail giant Zalando to streamline its warehouses by using deep learning to optimise picking routes. And it's helping researchers monitor coral reefs to understand how these fragile ecosystems can be saved. I believe that over the next few years there will be no industry that AI does not touch.
Startups and established companies are now ramping up an AI arms race to create new products and services or improve their operations. In just two years, the number of companies NVIDIA collaborates with on deep learning has jumped nearly 35x, to over 3,400. Big data is an important factor in this trend. Industries such as healthcare, life sciences, energy, financial services, automotive, manufacturing and entertainment gather massive amounts of information every day. These volumes of data are too massive for manual processing, so they remained an untapped resource, a 'black box', until deep learning offered a means of automatically extracting meaning from them. Many problems that were previously assumed to be unsolvable are now within reach, thanks to the combination of big data and deep learning.
What are you looking forward to most at the Machine Intelligence Summit?

I'm really excited about the Machine Intelligence Summit! It brings together industry experts, scientists, researchers and innovators to focus on how this technology mega-trend is unfolding.
I'm particularly interested in discovering start-ups which are deploying deep learning and artificial intelligence. Over the course of NVIDIA's history, some of the most creative and important applications of our technology have been pioneered by start-ups. In recognition of this, we've just launched the NVIDIA Inception Program, which supports new companies working in the field of deep learning.
In addition, we’ll be hosting our Emerging Companies Summit in Europe for the first time later this year. Part of the GPU Technology Conference Europe, ECS gives start-ups using GPU technology a platform to connect with potential investors, customers and employees. It’s taking place in Amsterdam on September 28th 2016 and I hope some of the entrepreneurs I meet at the Machine Intelligence Summit will be inspired to attend.
The next Machine Intelligence Summit will take place in New York on 2-3 November 2016. Discounted passes are now available; for more information and to register, please visit the website here. Previous events have sold out, so please book early to avoid disappointment.
We are holding summits focused on AI, Deep Learning and Machine Intelligence in London, Amsterdam, Boston, San Francisco, New York, Hong Kong and Singapore. See the full events list here.