Deep learning has enabled remarkable successes across a range of applications, including computer vision, speech recognition, and machine translation. While many such systems are already deployed, the field of robotics has yet to reap the benefits that other areas have experienced: in general, standard deep learning methods are not directly applicable to robotic learning.

At the Deep Learning Summit in San Francisco this past January, Chelsea Finn, a PhD student at UC Berkeley, spoke about how deep unsupervised and supervised learning techniques can enable robots to learn manipulation skills from raw pixel inputs.

Chelsea explained that the goal of her work is to ‘enable robots to autonomously acquire skills, and in particular skills that require sensorimotor learning’. Tasks that seem simple for humans are challenging for robots because robots haven’t had the opportunity to learn the way we have: over time and through exposure to examples. Robots can always be programmed to carry out such tasks, but hand-coded behaviour is brittle: if any element of the scene changes in a way the robot doesn’t understand, it cannot adapt its actions accordingly. To be successful, the robot needs to understand its environment. As Chelsea put it, ‘Deep learning allows us to use a very flexible representation to represent these complex functions and we can learn this representation from experience’.

In her presentation, Chelsea explained how robots can learn mental models of the visual world and imagine the outcomes of their actions, and how unsupervised learning can allow robots to build internal representations of moving objects. At the Deep Learning for Robotics Summit in San Francisco next June 28 & 29, Chelsea will join us to share her latest research progress.
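To make the idea of ‘imagining the outcomes of actions’ a little more concrete, here is a minimal sketch of action-conditioned video prediction in PyTorch. Everything in it is an assumption for illustration: the 64×64 frames, the 4-dimensional action, and the tiny encoder-decoder architecture are placeholders, not the model from Chelsea’s work. The key point is that the training signal is unsupervised: the target is simply the next frame the camera observed.

```python
# Minimal sketch of action-conditioned video prediction (illustrative only:
# frame size, action dimension, and architecture are assumptions).
import torch
import torch.nn as nn

class ActionConditionedPredictor(nn.Module):
    def __init__(self, action_dim=4):
        super().__init__()
        # Encode the current frame into a spatial feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        # Project the action so it can be broadcast over the feature map.
        self.action_proj = nn.Linear(action_dim, 64)
        # Decode features (plus action) back into a predicted next frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, frame, action):
        feat = self.encoder(frame)                        # (B, 64, 16, 16)
        act = self.action_proj(action)[:, :, None, None]  # (B, 64, 1, 1)
        return self.decoder(feat + act)                   # predicted next frame

model = ActionConditionedPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Unsupervised training: the "label" is just the next frame the camera saw.
frames = torch.rand(8, 3, 64, 64)       # current frames (dummy data)
actions = torch.rand(8, 4)              # commanded actions (dummy data)
next_frames = torch.rand(8, 3, 64, 64)  # observed next frames (dummy data)

pred = model(frames, actions)
loss = nn.functional.mse_loss(pred, next_frames)
loss.backward()
opt.step()
```

Given such a model, a robot can roll predictions forward for several candidate action sequences and pick the one whose imagined outcome best matches the goal.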

Watch Chelsea’s presentation from this year’s summit here.

Also joining us in San Francisco next June is Sergey Levine, Assistant Professor at UC Berkeley, who will be discussing generalisation and efficiency in deep robotic learning. As Chelsea mentioned, deep learning has huge potential to enable machines to understand complex and unstructured environments, but the lack of available data can be a hindrance when applying traditional deep learning methods. Sergey will discuss the design of algorithms for robotic deep learning that aim to overcome these challenges. One of the key ingredients is self-supervision: robots that generate their own training data through autonomous exploration can alleviate much of the burden of meticulous human labeling, and he will discuss how self-supervision can be employed for tasks ranging from robotic grasping to obstacle avoidance. He will also explore recent advances in model-free reinforcement learning that make reinforcement learning more practical for real-world robotic learning.
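As a rough illustration of what self-supervision means in practice, here is a sketch of a grasping data-collection loop. The robot interface (sample_random_grasp, execute_and_check, the camera capture) is entirely hypothetical; the point is that each grasp attempt labels itself through the robot’s own sensors, so no human annotation is needed.

```python
# Sketch of a self-supervised data-collection loop for grasping.
# All robot-facing functions below are hypothetical placeholders.
import random

def sample_random_grasp():
    """Hypothetical: sample a random grasp pose over the workspace."""
    return {"x": random.uniform(-0.3, 0.3),
            "y": random.uniform(-0.3, 0.3),
            "angle": random.uniform(0.0, 3.14159)}

def execute_and_check(grasp):
    """Hypothetical: execute the grasp, then read the gripper's force
    sensor to decide whether an object was actually lifted."""
    return random.random() < 0.2  # stand-in for the real success signal

dataset = []
for episode in range(1000):
    image = None  # hypothetical: capture a camera image here
    grasp = sample_random_grasp()
    success = execute_and_check(grasp)
    # Each attempt labels itself: (observation, action, outcome).
    dataset.append((image, grasp, success))

# The resulting dataset can train a grasp-success predictor with
# ordinary supervised learning, yet no human labeled anything.
```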

Sergey also joined us last January, when he discussed how end-to-end training of robots can simplify perception and control problems by allowing the perception and control mechanisms to adapt to one another and to the task; a rough sketch of that idea appears below. You can watch Sergey’s presentation here, and register for the event on June 28 & 29 to learn from Sergey, Chelsea, and other experts working in Deep Learning for Robotics.
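For a sense of what end-to-end visuomotor training looks like, here is a minimal sketch in PyTorch. The input size, the 7-dimensional action, and the behaviour-cloning loss are assumptions for illustration, not Sergey’s actual setup; what matters is that a single loss trains the convolutional (perception) layers and the output (control) layers together.

```python
# Minimal sketch of an end-to-end visuomotor policy: one network maps
# raw pixels directly to motor commands, so perception and control are
# trained jointly by a single loss. Sizes and the 7-DoF action are assumed.
import torch
import torch.nn as nn

policy = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),   # 64x64 -> 30x30
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),  # 30x30 -> 13x13
    nn.Flatten(),
    nn.Linear(32 * 13 * 13, 128), nn.ReLU(),
    nn.Linear(128, 7),                          # e.g. 7 joint torques
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

images = torch.rand(8, 3, 64, 64)  # dummy camera frames
target = torch.rand(8, 7)          # e.g. demonstrated torques (dummy data)

loss = nn.functional.mse_loss(policy(images), target)
loss.backward()  # gradients from the control loss reach the conv layers too,
opt.step()       # so the perception features adapt to the control task
```

Because the gradient from the control objective flows back through the vision layers, the features the network learns are the ones useful for the task at hand, rather than generic image features.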