How Can Computers Learn, See & Simulate Our World?
Data scientists and artificial intelligence (AI) researchers require accuracy, simplicity, and speed for deep learning success, as faster training and iteration ultimately mean faster innovation and a shorter time to market. NVIDIA recently unveiled the DGX-1 Deep Learning System, a purpose-built supercomputer for deep learning that aims to significantly accelerate training time. It's the first system built with Pascal, the architecture created to be the engine of computers that learn, see, and simulate the world. Jack Watts is Industry Business Development Manager for Deep Learning at NVIDIA, joining the company in 2014 to work with the increasing number of industry startups and commercial companies leveraging NVIDIA technology in their AI research and applications. At the Deep Learning Summit in London on 22-23 September, Jack will talk through some of today's industry use cases for deep learning and explore how AI won't be an industry; it will be part of every industry. I spoke to him to learn more about challenges and advancements in deep learning, and what we can expect to see in the future.
Tell us a bit more about your work at NVIDIA.
I work with companies leveraging NVIDIA's deep learning platform in the artificial intelligence arena. From startups to enterprise and across a wide range of industries, what's striking is the huge number of organisations using AI and deep learning to transform their business. AI is enabling them to make better use of their data, which in turn helps them deliver benefits like increased customer loyalty and retention, as well as greater productivity for their employees and data scientists.
I also look after our NVIDIA Inception Program. Inception has been designed for start-up companies in the field of AI who are interested in having a relationship with NVIDIA that allows them to leverage our technical and go-to-market expertise. They also receive early access to software and hardware releases to help them better develop their products and platforms.
What are the key factors that have enabled recent advancements in deep learning?
Competition in the research and data science space has been very important in driving progress. In the computer vision space alone, quite apart from formal challenges like ImageNet, there is a whole host of organisations, large corporations and data science teams competing with each other to achieve the best image classification accuracy. If you then include DARPA, Kaggle and others, there are a huge number of competitions being run every year. There's a huge community out there, all focused on improving the speed and accuracy of their neural networks.
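To make that concrete, the models at the heart of these image classification contests are convolutional neural networks. The following is a minimal sketch using the Keras API; the layer sizes, input shape and random placeholder data are purely illustrative, not any particular competition entry:

```python
# Minimal convolutional image classifier sketch (Keras API).
# Layer sizes, input shape and the random data are illustrative placeholders,
# not a competition-grade ImageNet model.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 10
model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),            # small RGB images
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on random placeholder data just to show the loop;
# a real entry would use a labelled dataset such as ImageNet.
x = np.random.rand(256, 64, 64, 3).astype("float32")
y = np.random.randint(0, num_classes, size=(256,))
model.fit(x, y, epochs=1, batch_size=32)
```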
If you then combine the competitions with the advancements being made at the software framework level by corporations like Google with TensorFlow and Microsoft with their CNTK framework, you'll see there is a gold rush to be 'The AI Company.' Google itself has thousands of applications now running and benefiting from AI, a huge increase over the past few years.
One of the latest stories from Google's DeepMind was their success in improving the power usage effectiveness (PUE) of one of their datacentres, reducing its power and cooling requirements by leveraging deep learning.
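PUE is simply the ratio of total facility energy to the energy drawn by the IT equipment alone, so a value closer to 1.0 means less overhead spent on cooling and power distribution. A small illustration of the calculation; the figures below are hypothetical, not DeepMind's:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical figures, purely for illustration (not DeepMind's numbers):
before = pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000)  # 1.50
after = pue(total_facility_kwh=1_350_000, it_equipment_kwh=1_000_000)   # 1.35
print(f"PUE before: {before:.2f}, after: {after:.2f}")
```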
Another important factor is the advancements in the hardware sector. As Baidu’s Andrew Ng recently said, in this sector access to cutting-edge infrastructure is critical. “If you’re a machine learning researcher, having access to a machine that is 2x as fast means that you are 2x as productive as a researcher.”
For example, a year ago our GPUs each packed 8 TFLOPS of compute power. Our latest Tesla P100 SXM GPU, housed within our deep learning supercomputer, the NVIDIA DGX-1, was announced in April 2016. Each of those GPUs can achieve up to 21.2 TFLOPS of compute, with the entire supercomputer delivering an enormous 170 TFLOPS of performance. By any measure, that's a significant advancement in compute capability within a remarkably short time.
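The arithmetic behind those headline figures is straightforward: the DGX-1 houses eight Tesla P100s, so 8 × 21.2 TFLOPS comes to roughly the 170 TFLOPS quoted for the whole system, and each GPU alone offers well over twice the throughput of the previous generation. A quick sanity check using the numbers quoted above:

```python
# Sanity-check the figures quoted above (values as stated in the interview).
prev_gen_tflops_per_gpu = 8.0       # roughly a year earlier
p100_tflops_per_gpu = 21.2          # Tesla P100 SXM
gpus_per_dgx1 = 8                   # the DGX-1 houses eight P100s

system_tflops = gpus_per_dgx1 * p100_tflops_per_gpu
per_gpu_speedup = p100_tflops_per_gpu / prev_gen_tflops_per_gpu

print(f"DGX-1 aggregate: {system_tflops:.1f} TFLOPS")   # ~169.6, quoted as 170
print(f"Per-GPU jump:    {per_gpu_speedup:.2f}x")        # ~2.65x in one generation
```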
Lastly, it's fantastic events like RE•WORK's summits that are creating a really productive melting pot of ideas and collaboration. Facilitating contact between industry corporations and startup companies provides a perfect environment to spark big ideas. In this vein, our GPU Technology Conference (GTC) is coming to Europe this September for the first time. One of its core themes is artificial intelligence and we'll have CxOs, developers and executives from organisations of all sizes coming to share their work with the world. There will also be a wealth of speaker sessions, exhibitor booths and hands-on labs that will give anyone working with AI and deep learning plenty to get their neural networks firing!
What do you feel are the most valuable applications of deep learning?
I believe the health sector is just starting to see the great value of deep learning. Personalised healthcare, faster diagnosis, surgical robotics and genome mapping are just around the corner, and it won't be long before these start to be approved for day-to-day use in our National Health Service. Collaboration with startup companies and big industry, along with the availability of data, will be key to medical breakthroughs and discoveries that will make a tangible difference to our everyday lives.
What developments can we expect to see in deep learning in the next 5 years?
I expect we will see more companies and corporations offering AI-as-a-Service, enabling smaller companies and individuals without data science departments or backgrounds to leverage AI for their own needs. Think of IBM Watson-type offerings that enable wider fields of research and technology to be explored by organisations, no matter their size.
As NVIDIA develops our DGX-1 supercomputing platform using the latest Pascal GPU architecture, we're seeing increased demand for larger and larger datasets to be processed within server nodes and racks. That's why we'll be focusing not only on increasing the raw horsepower of our GPUs and developing our software offering, but also on how the GPUs interact with each other at speed. Technologies like NVLink and GPUDirect RDMA between nodes (or even datacentres!) will be very important to realising the potential of the major deep learning frameworks.
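That inter-GPU bandwidth matters most in data-parallel training, where gradients are synchronised across devices on every step. Below is a minimal sketch of the pattern using PyTorch's DistributedDataParallel with the NCCL backend (which exploits NVLink and GPUDirect where available); the model, data and hyperparameters are placeholders, not NVIDIA code:

```python
# Minimal data-parallel training sketch: one process per GPU, gradients
# synchronised with NCCL (which uses NVLink/GPUDirect where available).
# Model, data and hyperparameters are placeholders, not NVIDIA's code.
# Launch with e.g.:  torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")       # NCCL handles GPU-to-GPU comms
    local_rank = int(os.environ["LOCAL_RANK"])    # set by torchrun
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    model = torch.nn.Linear(1024, 10).to(device)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(100):                       # placeholder training loop
        x = torch.randn(64, 1024, device=device)  # stand-in batch
        y = torch.randint(0, 10, (64,), device=device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                           # gradient all-reduce happens here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```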
Where do the main challenges lie in advancing the sector?
We’ve made big strides in compute horsepower – the next frontier is the availability of data. A neural network is only as good as the data on which it’s trained. There are already startups out there turning this challenge into an opportunity, building datasets and offering companies the option to buy in data to serve their research needs. This kind of ‘Data-as-a-Service’ model could be an extremely interesting offshoot of the AI revolution.
What advancements excite you most in the deep learning field?
I find the sheer variety of start-ups looking to create new platforms using deep learning really inspirational. Take AI Build, for example – a team of architects and researchers who've spun out of UCL's Bartlett School of Architecture and the well-known firm Zaha Hadid Architects. They're not only leveraging deep learning for path planning of large-scale 3D-printed structures, they're also creating a fantastic in-home appliance that can bridge the gap for all of your 'smart home' needs. We've seen a lot of early DL startups focused on image classification and natural language processing, so AI Build really stands out as something very different and exciting!
What are you looking forward to at the Deep Learning Summit in September?
I thoroughly enjoyed being part of the Deep Learning Summit last year, and the other summits hosted by RE•WORK around the globe. I find them a great place to meet data scientists, researchers and business decision makers who are just discovering AI, or are on the brink of doing something really quite special with the technology that will impact our everyday lives.
The quality of speakers at the events is fantastic, covering a broad range of topics and addressing a diverse audience, from laypeople to hard-core developers. It's a tremendous platform for startup companies to gain press coverage and exposure, and to feel part of this ever-growing community.
Jack Watts will be speaking at the Deep Learning Summit in London on 22-23 September; this year's event will also feature breakout sessions on Chatbots and FinTech. Previous events have sold out, so book early to avoid disappointment. For more information and to register, please visit the website here.
We are holding events focused on AI, Deep Learning and Machine Intelligence in London, Amsterdam, Singapore, Hong Kong, New York, San Francisco and Boston - see all upcoming summits here.
View a presentation by Axel Koehler, Principal Solution Architect at NVIDIA, from the Machine Intelligence Summit 2016: