Oriol Vinyals is a Senior Research Scientist at Google Brain; he completed his PhD in the Electrical Engineering and Computer Science department at UC Berkeley. His name has appeared in the news many times in recent years for his work in deep learning, including the development of software to aid the blind, a chatbot that learns how to converse with you, and technology that can "translate" images into words. At the RE•WORK Deep Learning Summit on 28-29 January, Oriol will discuss a new paradigm that is changing the way machine learning treats sequences such as language. His presentation will review two established tasks, image captioning and conversational models (the latter now powering the Gmail feature SmartReply), as well as a more speculative approach to solving problems with neural networks. We had a quick Q&A with Oriol ahead of the summit to learn more about his work.
**Give us an overview of your work at Google.** At Google I have been working on deep learning and sequence models, i.e., recurrent neural networks. One of the key advances in 2014 was our Sequence to Sequence Learning paper, which we presented at NIPS 2014. Since then, I have led several research projects:
- Image Captioning, which was presented at CVPR 2015 and featured in the press (New York Times, BBC (video), MIT Technology Review)
- Conversational Agents, which started the SmartReply project, was also presented as an ICML workshop paper, and was covered by the press (Wall Street Journal, Wired, Bloomberg)
- Learning to approximate NP-hard problems with a new neural architecture, to be presented at NIPS 2015 as a spotlight
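The sequence-to-sequence idea behind these projects can be sketched very simply: an encoder RNN compresses an input sequence into a fixed vector, and a decoder RNN emits an output sequence one token at a time from that vector. The toy below is an illustrative sketch with random, untrained weights; the dimensions, vocabulary size, and function names are assumptions for demonstration, not the paper's actual model.

```python
import numpy as np

# Toy sequence-to-sequence sketch (untrained, random weights).
# All sizes here are illustrative assumptions.
rng = np.random.default_rng(0)
vocab_size, hidden = 10, 16

E = rng.normal(scale=0.1, size=(vocab_size, hidden))    # token embeddings
W_enc = rng.normal(scale=0.1, size=(hidden, hidden))    # encoder recurrence
W_dec = rng.normal(scale=0.1, size=(hidden, hidden))    # decoder recurrence
W_out = rng.normal(scale=0.1, size=(hidden, vocab_size))  # output projection

def encode(tokens):
    """Read the input left to right, returning a fixed 'thought vector'."""
    h = np.zeros(hidden)
    for t in tokens:
        h = np.tanh(E[t] + W_enc @ h)
    return h

def decode(h, max_len=5):
    """Greedily emit tokens one at a time, starting from token 0 as <start>."""
    out, t = [], 0
    for _ in range(max_len):
        h = np.tanh(E[t] + W_dec @ h)
        t = int(np.argmax(h @ W_out))  # pick the highest-scoring next token
        out.append(t)
    return out

print(decode(encode([1, 2, 3])))  # a list of 5 token ids
```

In a trained model the weights are learned end to end by maximizing the likelihood of correct output sequences; here they are random, so the output is meaningless but the data flow is the same.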
**What are the key factors that have enabled recent advancements in deep learning?** Scalability, and a much larger community of researchers and enthusiasts who, through their creativity, discover new ways both to apply the research to new areas and to advance the technology by learning how to optimize, architect, and regularize deep learning models.
**What further developments are essential for future progress in the field?** Curriculum learning that actually works, and unsupervised learning that helps other supervised tasks more than marginally.
**What are the main types of problems now being addressed in the deep learning space?** Sequences have become first-class citizens, joining images and the other modalities for which we had many successes in the last decade. This has enabled a rich set of models that can handle text and other interesting data structures, and it has been my main focus of research.
**What advancements excite you most in the field?** The space of learning programs is very exciting, as it lets us take all sorts of problems that would normally require writing a computer program and instead formulate them from a machine learning perspective.
Oriol Vinyals will be speaking at the RE•WORK Deep Learning Summit in San Francisco, on 28-29 January 2016.
Other speakers include Andrew Ng, Baidu; Clement Farabet, Twitter; Naveen Rao, Nervana Systems; Pieter Abbeel, UC Berkeley; and Andrej Karpathy, Stanford University.