Google DeepMind, the ‘world leader in artificial intelligence research and its application for positive impact’, have created an AI that has learned to walk without any prior guidance. The figure moves through virtual environments with the goal of one day helping robots navigate complex and unfamiliar spaces.

But what does this mean?

DeepMind are constantly pushing the boundaries of AI, and this new avatar has learned to overcome a series of obstacles through incentives alone - the model received no prior information or instructions. Using reinforcement learning, this unsupervised model devised its own method of travelling through the designated course, aided by a new neural network component called the Symbol-Concept Association Network (SCAN), which learns to mimic human vision and understand visual concept hierarchies. Previously, AI has been unable to learn in the same way as humans: it can retain and build on information, but can’t make the mental leap of combining familiar concepts into something entirely new.
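Learning from incentives alone can be sketched in a few lines. The example below is a toy illustration of our own (a tabular Q-learner on a one-dimensional track, nothing like DeepMind's simulated bodies): the agent is given only a reward for reaching the goal, and everything else - including the idea of stepping rightwards - is discovered through trial and error.

```python
import random

# Toy illustration (our own sketch, not DeepMind's actual setup): an agent
# on a six-cell track learns to reach the rightmost cell purely from a
# reward signal -- it is never told *how* to get there.
N_CELLS = 6
ACTIONS = [-1, +1]            # step left or step right
random.seed(0)

q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s < N_CELLS - 1:
        if random.random() < 0.1:                      # occasional exploration
            a = random.choice(ACTIONS)
        else:                                          # otherwise act greedily
            a = max(ACTIONS, key=lambda x: (q[(s, x)], random.random()))
        s2 = min(max(s + a, 0), N_CELLS - 1)
        r = 1.0 if s2 == N_CELLS - 1 else 0.0          # the only incentive given
        target = r + 0.9 * max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += 0.5 * (target - q[(s, a)])        # Q-learning update
        s = s2

# The learned greedy policy steps right from every cell.
policy = [max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(N_CELLS - 1)]
print(policy)
```

After training, the agent always moves towards the goal, even though nothing in the code ever said ‘go right’ - the behaviour emerges from the reward alone.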

Watch the avatar complete the obstacle course.

DeepMind have been looking at how their AI learns and how it can replicate the processes of the human brain. Think about the way a child learns in its first few months of life - unable to focus on anything further than an arm’s length away, unable to make sense of language, spending its time observing and learning. Like a baby in a cot, DeepMind’s avatar learns by observing ‘one of three possible objects presented to it against various coloured backgrounds - a hat, a suitcase or an ice lolly.’ The model is then able to learn the basic structures of the visual world and how to represent objects ‘in terms of interpretable visual “primitives”. For example, when looking at an apple, the model will learn to represent it in terms of its colour, shape, size, position or lighting.’
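As a rough sketch (SCAN itself is a trained neural model; the dictionary and `recombine` function below are purely illustrative), a factored set of primitives makes it easy to see how familiar parts could recombine into a concept the model has never seen as a whole:

```python
# Illustrative only: SCAN is a neural network, but the idea of representing
# an object through interpretable "primitives" can be sketched as a
# factored description whose parts recombine freely.
apple = {"colour": "red", "shape": "round", "size": "small", "position": "centre"}

def recombine(base, **overrides):
    """Form a new concept by swapping individual primitives of a familiar one."""
    concept = dict(base)
    concept.update(overrides)
    return concept

# A concept never observed as a whole, assembled from familiar parts:
novel = recombine(apple, colour="blue", position="left")
print(novel)
```

This compositionality - reusing known primitives in new combinations - is the ‘mental leap’ that earlier systems struggled to make.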

Every movement the figure takes is self-taught and is its own solution for getting from A to B - the only things the DeepMind engineers have given the avatar are a visual proximity sensor and the incentive to move forwards. It was then up to the computer to decide the best way to go about this.

Interested in learning more about research breakthroughs at DeepMind?

Hear from Jörg Bornschein, Research Scientist at DeepMind, who will be presenting his recent research at the Deep Learning Summit in London, September 21 & 22. Early Bird discounted passes are on sale until July 28 - register now to guarantee your place at the summit.

Other confirmed speakers include: Eli David, CTO, Deep Instinct; Ed Newton-Rex, Founder & CEO, Jukedeck; Fabrizio Silvestri, Software Engineer, Facebook; Ankur Handa, Research Scientist, OpenAI; and more.

View other confirmed speakers here.