We sat down with Ilya Sutskever, Chief Scientist and Co-Founder of OpenAI, to discuss the current challenges the industry faces in deep learning and some of the advancements on the horizon. We are delighted that Ilya will join us again in San Francisco at the start of next year, where he will discuss whether any of the predictions below proved correct.
But first, for those not yet acquainted with Ilya or OpenAI, a brief introduction. OpenAI, a non-profit research company, was founded in 2015 by Ilya alongside Elon Musk, Sam Altman, and Greg Brockman, with the goal of not only conducting groundbreaking research but also developing what was dubbed 'friendly AI': AI that benefits humanity as a whole and reduces wider society's fear of AI development. Just this year, industry giant Microsoft pledged $1 billion in investment for future development.
Having completed his PhD in the Machine Learning Group at the University of Toronto under the supervision of Geoff Hinton, Ilya went on to co-found DNNresearch (later acquired by Google) with Hinton and fellow graduate Alex Krizhevsky, and completed postdoctoral work at Stanford University in Andrew Ng's group. Until his appointment at OpenAI, he was a research scientist on the Google Brain team. In his talk at the Deep Learning Summit in San Francisco in January, Ilya will discuss the latest technological advancements and application methods from OpenAI's research, and he will also join Lex Fridman for a fireside chat on the Deep Learning stage.
What are the key factors that have enabled recent advancements in deep learning?
- Sufficiently fast computers
- The availability of sufficiently large, high-quality labelled datasets
- Algorithms, techniques, and skills for training large deep nets
What are the main types of problems now being addressed in the deep learning space?
At present, large and deep neural networks are applied to a very large variety of problems. For example, there have been nearly 50 product launches within Google, each applying deep learning to a different problem.
What are the practical applications of your work and what sectors are most likely to be affected?
The practical applications are vast, mainly because deep learning algorithms are largely domain-agnostic. Perception has already been affected. In the near future, I think that robotics, finance, medicine, and human-computer interaction are very likely to be affected. I don't think that this list is exhaustive, however.
What developments can we expect to see in deep learning in the next 5 years?
We should expect to see much deeper models, models that can learn from many fewer training cases compared to today's models, and substantial advances in unsupervised learning. We should expect to see even more accurate and useful speech and visual recognition systems.
What advancements excite you most in the field?
I am very excited by the recently introduced attention models, due to their simplicity and due to the fact that they work so well. Although these models are new, I have no doubt that they are here to stay, and that they will play a very important role in the future of deep learning.
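For readers unfamiliar with the attention models Ilya mentions, the core idea is simple enough to sketch in a few lines: each output is a weighted average of a set of values, where the weights come from comparing a query against a set of keys. The following is a generic scaled dot-product formulation for illustration; the variable names, shapes, and scaling choice are my own and are not drawn from the interview or any specific OpenAI work.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    """Scaled dot-product attention.

    Each output row is a convex combination of the value vectors,
    weighted by how similar each key is to the query.
    """
    d_k = keys.shape[-1]
    scores = query @ keys.T / np.sqrt(d_k)   # (n_queries, n_keys)
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ values, weights

# Toy example: one query attending over three key/value pairs.
rng = np.random.default_rng(0)
q = rng.normal(size=(1, 4))   # a single query vector
K = rng.normal(size=(3, 4))   # three keys
V = rng.normal(size=(3, 4))   # three values
out, w = attention(q, K, V)   # out has shape (1, 4); w has shape (1, 3)
```

The simplicity Ilya points to is visible here: the whole mechanism is a similarity score, a softmax, and a weighted sum, which is why it composes so easily into larger networks.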
Watch a recent fireside chat with Ilya Sutskever and Lex Fridman here on our extensive AI Video Library.