With AI teaching itself about the world ‘just like a child’, setting its sights on taking on humans at StarCraft, and creating AI with ‘imagination’, you’d be hard-pressed to have missed DeepMind in the news in recent years.

DeepMind are currently leveraging unsupervised learning to teach AI to understand the goings-on of the real world. Previously, machines have been programmed to learn from a sequence of instructions, looking inwards to understand how to interpret environments, categorise images, or grasp movements. This time, however, DeepMind have trained the model to look out into the real world and learn as a human child does.

DeepMind have been able to teach a model to learn, by itself, to recognise visual and audio concepts. How? By watching videos. The programme DeepMind have created learns skills one by one, remembers how it solved past problems, and applies those strategies to tackle new ones.

This, however, isn’t the only way they are leveraging unsupervised learning. Speaking from DeepMind, Jörg Bornschein explained that whilst there has certainly been a vast amount of progress in unsupervised learning in recent times, ‘we have yet to see its full potential’. He went on to explain that ‘adversarial training, autoregressive models and progress in variational inference methods all allowed us to use high-capacity neural networks as components in our generative models. It is impossible to attribute the progress to a specific technique -- but it is probably fair to say that progress has been driven primarily by new ideas and insight rather than bigger datasets and faster computers.’

At the Deep Learning Summit in London September 21 & 22, Jörg will be presenting his current research with DeepMind and will focus on his work in unsupervised and semisupervised learning using deep architectures.

In short, unsupervised learning is the branch of machine learning that draws inferences from datasets consisting of input data without labelled responses. Unlabelled datasets pose a huge problem in deep learning, as the machine has no predefined categories to learn from. Although unsupervised learning is beginning to overcome this, obstacles remain along the way. Jörg explained that in his current work there are several challenges to progressing with unsupervised and semisupervised learning using deep architectures.
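To make the idea concrete, here is a minimal sketch of one classic unsupervised algorithm, k-means clustering, which groups unlabelled data points into clusters without ever seeing a labelled response. This is purely an illustrative example of the paradigm, not the deep architectures DeepMind work with:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group unlabelled points into k clusters."""
    rng = np.random.default_rng(seed)
    # Start from k randomly chosen data points as initial centroids.
    centroids = points[rng.choice(len(points), k, replace=False)].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid (no labels needed).
        dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated blobs of unlabelled 2-D points.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.5, (50, 2)),
                  rng.normal(5, 0.5, (50, 2))])
labels, centroids = kmeans(data, k=2)
```

The algorithm discovers the two groups purely from the structure of the data itself, which is the essence of drawing inferences without labelled responses.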

When building generative models that produce samples for human consumption, we face different challenges than when dealing with a semisupervised problem or when using unsupervised techniques for reinforcement learning. Understanding the trade-offs between these applications and finding architectures that combine unsupervised and reinforcement learning seems like a worthwhile research direction, and one that already receives a lot of attention.

Ahead of the Summit, we caught up with Jörg to ask a couple of quick questions about his work.

What started your work in deep learning?

I always had a keen interest in anything related to information processing. On one side there were practical aspects like open source software, distributed systems and reconfigurable computing architectures (e.g. FPGAs). On the other were more theoretical concepts like information processing in dynamical systems (e.g. spiking neural networks) and the more general question of how to map the rather abstract concept of information processing onto our physical reality. From this perspective it was very natural to study artificial neural networks and to investigate how they represent and process information.

What developments can we expect to see in deep learning in the next 5 years?

I hope we will see a lot of progress related to rapid acquisition of knowledge: few-shot learning, for example, where a generative or discriminative model has to generalise from only a few positive examples, or architectures that retrieve additional information which is not stored in the model's connection weights. I imagine these will have a big impact on our theoretical understanding of what is possible, as well as a profound impact on many practical applications.

As these models continue to become more intelligent and machines continue to learn for themselves, the push towards artificial general intelligence (AGI) continues. However, in order to achieve AGI, there is still a long way to go in perfecting unsupervised learning and creating a machine that can perform any intellectual task that humans can. DeepMind are leading the race to create the first AGI ‘brain’, and we will be watching closely to see how their exciting research progresses and evolves.

To learn more from DeepMind, join us in London on September 21 & 22 for the Deep Learning Summit and AI Assistants Summit. Marta Garnelo, Research Scientist at DeepMind, will also be joining Jörg to discuss 'Representations for Deep Learning'.

Do you know a startup working in AI, deep learning, or machine learning? RE•WORK are currently offering a complimentary pass for the London Summits when you recommend a startup disrupting the space. Get in touch here.