Previously named one of MIT Technology Review's 'Innovators Under 35', Pieter Abbeel is Co-Founder of Gradescope and Associate Professor at UC Berkeley, as well as a consultant for machine learning and robotics companies. At UC Berkeley, Pieter's current research focuses primarily on deep learning for robotics, where learning can come from demonstrations (apprenticeship learning) or through the robot's own trial and error (reinforcement learning), with target application domains including autonomous manipulation, flight, locomotion and driving. At the Deep Learning Summit in San Francisco next month, Pieter will present 'Deep Reinforcement Learning for Robotics', discussing the major challenges of, as well as some promising preliminary results towards, making deep reinforcement learning applicable to real robotic problems. I caught up with him ahead of the event to hear more.

What started your work in deep reinforcement learning?

I have been working at the junction of robotics and machine learning for many years. A lot of this work has been in apprenticeship learning, where a robot learns to perform a task by observing human demonstrations. This work enabled autonomous helicopter aerobatics at a level only exceptional human pilots can perform, four-legged locomotion across challenging terrains, knot-tying, (scaled-up) surgical suturing, and folding laundry. In reinforcement learning, a robot has to learn to perform tasks through its own trial and error, while being scored ("rewarded") for its performance. A couple of years ago, seeing the rapid advances in supervised learning (where an input-output mapping has to be learned from examples) through the use of deep neural nets, I started to think that reinforcement learning could undergo the same transformative leap forward, if we could figure out how to make deep reinforcement learning work. So that's when we embarked on our deep reinforcement learning research efforts.
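The trial-and-error loop described above can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms: the agent acts, receives a reward, and updates its value estimates. This is only an illustrative toy, not the deep RL methods discussed in the interview; the corridor environment, reward, and parameter values are hypothetical choices made for the sketch.

```python
import random

# Illustrative tabular Q-learning on a tiny 1-D corridor (states 0..4).
# The agent starts at state 0 and earns reward +1 only for reaching
# state 4; every other move yields 0. All names and parameter values
# here are hypothetical, chosen just for this sketch.

N_STATES = 5
ACTIONS = [-1, +1]                   # move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Environment dynamics: clamp to the corridor, reward at the far end."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, seed=0):
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action_index]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy selection: mostly exploit, occasionally explore.
            if random.random() < EPSILON:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
            next_state, reward, done = step(state, ACTIONS[a])
            # Q-learning update: nudge the estimate toward the observed
            # reward plus the discounted best value of the next state.
            best_next = max(Q[next_state])
            Q[state][a] += ALPHA * (reward + GAMMA * best_next - Q[state][a])
            state = next_state
    return Q

Q = train()
# After training, the greedy policy should prefer "move right" in every
# non-terminal state, since reward lies at the right end of the corridor.
policy = ["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

The update rule is the whole algorithm: no model of the environment is needed, only sampled transitions, which is what makes this "learning from the robot's own trial and error".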
What are the key factors that have enabled recent advancements in this area?

Major factors contributing to advances in deep supervised learning have been more labeled data, more compute power, better optimization algorithms, and better neural net models and architectures. Deep reinforcement learning is still in its very early stages, but I foresee it benefiting just as much (and likely even more) from ever-increasing compute power. Reinforcement learning is a different machine learning problem than supervised learning, so I anticipate that new advances in optimization (for RL problems) and new models and architectures will have to emerge. Naturally, as reinforcement learning is such a hard problem, for many application domains it can be expected to be bootstrapped from demonstrations. Ever-increasing amounts of how-to videos could play a role in this.

What are the main types of problems now being addressed in the deep learning space?

Deep supervised learning has resulted in significant advances, but still requires lots and lots of labeled data. As more data thus far tends to consistently win in deep supervised learning, there are many efforts to cleverly and cheaply obtain large amounts of labeled data, including training data augmentation schemes. This is especially true for verticals of potential commercial interest, such as speech recognition, image and video recognition and annotation, predictions based on medical records, etc. However, aside from exploiting advances in deep supervised learning for various application domains, there are many active research directions, such as deep unsupervised learning (i.e., learning from unlabeled data) and deep reinforcement learning (i.e., learning to act).

What are the practical applications of your work and what sectors are most likely to be affected?
Example application domains are robotics, AI for video games, and advertising/marketing; more generally, any domain where a system is expected to make decisions that in turn affect the situation that system finds itself in.

What developments can we expect to see in deep learning in the next 5 years?

Lots of verticals based on current deep supervised learning technology, as well as scaling to video, figuring out how to make deep learning outperform current approaches to natural language processing, and significant advances in deep unsupervised learning and deep reinforcement learning.

What advancements excite you most in the field?

Personally, I am most intrigued by the potential of getting robots to become smart enough to do meaningful things in our everyday environments, such as our homes, offices, and hospitals.

Pieter Abbeel will be speaking at the RE•WORK Deep Learning Summit in San Francisco, on 28-29 January 2016. The agenda also features Andrew Ng, Baidu; Clement Farabet, Twitter; Naveen Rao, Nervana Systems; and Andrej Karpathy, Stanford University.


The Deep Learning Summit is taking place alongside the Virtual Assistant Summit. Previous events have sold out and places are now limited, so book your space now to avoid disappointment by visiting the event site.