2016 saw impressive advances in AI technology, such as AlphaGo's victory over Go grandmaster Lee Sedol. Image recognition has made similar strides: we can expect that one day computers will read X-ray, MRI and CT scans more efficiently than radiologists, enabling faster diagnosis of cancer. This is just one example of how rapidly deep learning is advancing and impacting the world we live in, from the way we shop to how we forecast energy to how we design transport.
We asked some of our influential speakers, who will be presenting at our deep learning summits this year, for their predictions for deep learning in 2017. Here are their forecasts:
Durk Kingma, Research Scientist, OpenAI
In 2017, we will probably see further rapid exploration of applications of current deep learning techniques, as well as further theoretical advances that improve robustness and sample efficiency. We will also see various fun new applications of deep learning to image and voice resynthesis. Within three years, developments in special-purpose AI hardware may give us orders-of-magnitude faster compute. That would enable the application of unsupervised learning to video data, which, fused with reinforcement and supervised learning, would bring us closer to general AI. It will still be based mainly on deep neural networks, backpropagation and SGD.
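Durk's closing remark, that progress will still rest on deep neural networks, backpropagation and SGD, is worth unpacking with a concrete toy. Below is a minimal sketch in plain NumPy; the network sizes, synthetic data and learning rate are illustrative choices of ours, not anything from his work.

```python
import numpy as np

# Minimal sketch: a one-hidden-layer network trained with
# backpropagation and stochastic gradient descent (SGD) on a
# toy regression task. All sizes and data are illustrative.
rng = np.random.RandomState(0)
X = rng.randn(256, 4)                       # toy inputs
y = np.sin(X.sum(axis=1, keepdims=True))    # toy targets

W1 = rng.randn(4, 16) * 0.1; b1 = np.zeros(16)
W2 = rng.randn(16, 1) * 0.1; b2 = np.zeros(1)
lr = 0.05

for step in range(2000):
    i = rng.randint(0, 256, size=32)        # sample a minibatch (the "S" in SGD)
    xb, yb = X[i], y[i]

    # forward pass
    h = np.tanh(xb @ W1 + b1)
    pred = h @ W2 + b2
    loss = np.mean((pred - yb) ** 2)

    # backward pass (backpropagation: chain rule, layer by layer)
    g_pred = 2 * (pred - yb) / len(xb)
    g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1 - h ** 2)                # tanh'(z) = 1 - tanh(z)^2
    g_W1 = xb.T @ g_z; g_b1 = g_z.sum(axis=0)

    # SGD update
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

    if step % 500 == 0:
        print(f'step {step}: loss {loss:.4f}')
```

Modern frameworks automate the backward pass above, but the loop itself is all that SGD amounts to.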
Durk will be sharing his latest work on Improving Variational Autoencoders with Inverse Autoregressive Flow at the Deep Learning Summit, 26-27 January in San Francisco. Tickets are now limited; confirm your place here.
Neil Lawrence, Professor of ML & Computational Biology, University of Sheffield
I think things are progressing much as we might expect. Deep learning methods are being intelligently deployed on very large data sets, and the term deep learning itself is becoming synonymous with neural networks. For smaller data sets, I think we'll see some interesting directions on model repurposing, i.e. the reuse of pre-trained deep learning models; there are some interesting open questions around how best to do this. A big change over 2016 was how widely adopted TensorFlow has become. With frameworks like this, as well as Torch and MXNet, deep learning seems to be moving from the domain of "science" to more of an engineering practice. Generative Adversarial Networks garnered a lot of attention in 2016; an open question is how they can be deployed when there is a lot of missing data. Health is an application area where missing data is the norm, yet in many of the domains where neural networks have been successful there is relatively little missing data. For health, attention will also need to turn to questions of interpretability and accountability in the learning. As we ask deeper questions about the reasoning behind our machines, I think we'll realise we also need to ask deeper questions about the manner in which our human experts explain their conclusions. We will need to get much closer to understanding the complex interactions between expertise, intuition and explanation.

Neil will be discussing the opportunities and challenges of machine learning in health at the Deep Learning in Healthcare Summit, 28 February - 01 March in London. Early Bird passes end this Friday, 6 January.
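To make Neil's point about model repurposing concrete: the common recipe is to take a network pre-trained on a large dataset, freeze its weights, and train only a small task-specific head on the smaller dataset. Here is a minimal sketch using Keras (which runs on the TensorFlow backend he mentions); the class count and training data are placeholders, and this is one standard recipe rather than a method of his.

```python
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

num_classes = 10  # placeholder: the number of classes in the small dataset

# Reuse a network pre-trained on ImageNet as a fixed feature extractor,
# then train only a small classifier head on the new, smaller dataset.
base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False              # freeze the pre-trained weights

x = GlobalAveragePooling2D()(base.output)
x = Dense(128, activation='relu')(x)
out = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit(x_train, y_train, ...)       # the small labelled dataset goes here
```

Because only the head is trained, the model can often reach useful accuracy from a few thousand labelled examples rather than millions.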
Roland Memisevic, Chief Scientist, Twenty Billion Neurons
Deep Learning has made incredible progress since 2012, most notably in image and speech recognition. I expect 2017 to be the year in which our industry starts to fully embrace video and, consequently, replace image-based visual representations with a deeper, more fine-grained understanding of the world. Unlike images, videos can teach neural networks that the world is three-dimensional; that it contains more or less independent objects; that there are physical concepts such as gravity, material types or object permanence.
Over the next three years, a better understanding of how the world works will start to infect other domains, such as language processing, where it will lead to better natural language and dialog systems through proper grounding of linguistic concepts. This may start a feedback loop, by which better language capabilities make it easier to provide supervision signals for learning better systems themselves. But all of this can and will start with videos which, starting in 2017, will allow networks to learn much more about the world than they currently know.
At the Deep Learning Summit in San Francisco, Roland will show how neural networks can learn from data to make fine-grained predictions about actions and situations.
Chelsea Finn, PhD Student, UC Berkeley
In the next few years, we should expect to see deep learning systems that are capable of learning from fewer examples and less experience, and systems that learn online as more data and experience become available. I also expect to see much better generative models of the future: prediction is central to human cognition and planning, and is immensely useful for artificial agents. Video prediction is an active area of research now, but significant improvement is still needed, both in the length of video and the quality of the frames that can be predicted, before these methods become generally useful.
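For readers unfamiliar with video prediction, the basic setup is a model trained to map each sequence of frames to the same sequence shifted one step into the future. The sketch below follows the standard ConvLSTM recipe in Keras; the frame size, filter counts and data are illustrative placeholders, not a description of Chelsea's own models.

```python
from keras.models import Sequential
from keras.layers import ConvLSTM2D, Conv3D

# Minimal next-frame video prediction sketch: given a sequence of
# frames, predict the frames one step ahead. Assumes pixel values
# scaled to [0, 1]; all shapes here are placeholders.
model = Sequential([
    ConvLSTM2D(32, kernel_size=(3, 3), padding='same',
               return_sequences=True, input_shape=(None, 64, 64, 1)),
    ConvLSTM2D(32, kernel_size=(3, 3), padding='same',
               return_sequences=True),
    Conv3D(1, kernel_size=(3, 3, 3), padding='same',
           activation='sigmoid'),        # per-pixel prediction of the next frame
])
model.compile(optimizer='adam', loss='binary_crossentropy')
# Targets are the inputs shifted by one frame:
# model.fit(frames[:, :-1], frames[:, 1:], ...)
```

The two weaknesses Chelsea names map directly onto this setup: predictions blur as the horizon grows (frame quality) and errors compound when the model is run on its own outputs (video length).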
Chelsea will be presenting at the Deep Learning Summit, 26-27 January in San Francisco, where she will share her work on how robots can learn mental models of the visual world and imagine the outcomes of their actions, as well as her vision for the future of deep robotic learning.
Polina Mamoshina, Research Scientist, Insilico Medicine
I think that the growing community of engaged researchers and developers will bring a lot of interesting architectural and training solutions. But two trends strike me as the most exciting and promising: generative models and transfer learning. While the basic concepts of both are not new, only now are we seeing such methods applied to real-world problems, such as the creation of new molecules with desired properties or 3-D photo reconstruction. I think we could significantly accelerate the drug development process just by changing the lead generation process. Generative models with deep architectures have the potential to generate new targeted molecules and to replace the blind screening of lead compounds. Transfer learning could increase the translation rate from model organisms to the clinic.
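One published route to the generative molecule design Polina describes (not necessarily Insilico Medicine's own method) is a character-level recurrent model over SMILES strings: train it on known molecules, then sample new character sequences as candidate structures. A minimal sketch in Keras; the alphabet and architecture are illustrative placeholders.

```python
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

# Illustrative sketch: a character-level generative model over SMILES
# strings, trained on known molecules and then sampled character by
# character to propose new candidate structures for screening.
vocab = sorted(set('CcNnOoSs()=#123[]+-'))   # toy SMILES alphabet (placeholder)

model = Sequential([
    Embedding(input_dim=len(vocab), output_dim=32),
    LSTM(128, return_sequences=True),         # models P(next char | prefix)
    Dense(len(vocab), activation='softmax'),  # next-character distribution
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# x: integer-encoded SMILES strings; y: the same sequences shifted by one
# model.fit(x, y, ...)
```

Sampled strings still have to be checked for chemical validity and filtered for the desired properties, which is where such a model would slot into a lead generation pipeline.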
At the Deep Learning in Healthcare Summit, 28 February - 01 March in London, Polina will be discussing the Application of Deep Neural Networks to Biomarker Development. Early Bird passes end this Friday, 6 January.
Adam Coates, Director, Baidu Silicon Valley AI Lab
2017 will be another banner year for deep learning and AI. We will see speech become popular for interacting with machines, made possible by deep learning technologies that keep boosting accuracy with more data and computing power. We’ll also see big changes in hardware specifically to support AI and deep learning. Chipmakers are already designing and integrating AI-specific features into their products and this will enable us to train and ship bigger neural networks than ever before. That will help make speech, vision, language and other AI technologies even better this year and make it possible to wire them into homes, cars, and mobile applications.
In the next 3 years we'll see AI's impact rippling through many industries. There will be advances everywhere: logistics, medicine, finance and more. Enterprises will have access to cutting-edge AI technology built by tech giants like Baidu through cloud platforms and APIs. Hiring of AI and machine learning talent will keep growing as the impact of AI expands. Machine learning skills are among the most valuable in Silicon Valley today, and that will remain true for years to come. Demand for AI education and the number of engineers with AI skills will grow dramatically. This will fuel the next wave of AI innovations in products and businesses.
Adam directs the Silicon Valley AI Lab at Baidu and has published various deep learning research papers, which you can discover here.
To hear more from Durk, Roland, Chelsea and Adam, as well as Shivon Zilis from Bloomberg and Andrew Tulloch from Facebook, register now for the Deep Learning Summit, San Francisco. Apply the discount code NEWYEAR by 6 January to get 20% off all summit tickets. View the full agenda here. Places are now very limited!
Other deep learning predictions:
- Predictions for Deep Learning in 2017
- 2017 Guide for Deep Learning Business Applications
- 10 Deep Learning Trends and Predictions for 2017
The Deep Learning Summit will also be running alongside the Virtual Assistant Summit, 26-27 January in San Francisco.
Upcoming Deep Learning Events Include:
- Deep Learning Summit, 26-27 January in San Francisco
- Deep Learning in Healthcare Summit, 28 February - 01 March in London
- Deep Learning Summit, 27-28 April in Singapore
- Deep Learning in Finance Summit, 27-28 April in Singapore
- Deep Learning Summit, 25-26 May in Boston
- Deep Learning in Healthcare Summit, 25-26 May in Boston