Jörg Bornschein is a Global Scholar with the Canadian Institute for Advanced Research (CIFAR) and a postdoctoral researcher in Yoshua Bengio's machine learning lab at the University of Montreal. His research focuses on the principles behind the early stages of sensory information processing in both biological and artificial systems, and he is currently concentrating on unsupervised and semi-supervised learning using deep architectures.

At the Deep Learning Summit in London this month, Jörg will present a new method for training deep models for unsupervised and semi-supervised learning. The models consist of two neural networks with multiple layers of stochastic latent units. The first network supports fast approximate inference given some observed data; the other is trained to approximately model the observed data in terms of higher-level concepts and causes. The learning method is based on a new bound for the log-likelihood, and the trained models are automatically regularized to balance making the task of each of the two networks as easy as possible.
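To make that setup a little more concrete: the interview does not spell out the new bound itself, but the two-network arrangement resembles the classic Helmholtz-machine pattern, in which a recognition network q(h|x) is paired with a generative network p(x, h) and trained through a sampled bound on log p(x). Below is a minimal, hypothetical numpy sketch of one such bound, the standard importance-weighted estimate, for a single layer of binary latent units. All names, sizes and the specific choice of bound are illustrative assumptions for this sketch, not Jörg's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensions: binary observations x and one layer of binary latents h.
# (Illustrative sizes; the models described above use multiple stochastic layers.)
D, H, K = 16, 8, 10   # observed dim, latent dim, number of importance samples

# Inference (recognition) network parameters: q(h | x)
W_q = rng.normal(0, 0.1, size=(H, D)); b_q = np.zeros(H)
# Generative network parameters: p(h) and p(x | h)
b_p = np.zeros(H)                                     # prior logits for p(h)
W_g = rng.normal(0, 0.1, size=(D, H)); b_g = np.zeros(D)

def log_bernoulli(bits, probs):
    """Log-probability of binary vectors under factorized Bernoulli(probs)."""
    eps = 1e-7
    probs = np.clip(probs, eps, 1 - eps)
    return np.sum(bits * np.log(probs) + (1 - bits) * np.log(1 - probs), axis=-1)

def iw_bound(x, K=K):
    """K-sample importance-weighted lower bound on log p(x).

    h_k ~ q(h | x);  w_k = p(x, h_k) / q(h_k | x);
    bound = log mean_k w_k, which approaches log p(x) as K grows.
    """
    q_probs = sigmoid(W_q @ x + b_q)                  # q(h = 1 | x)
    h = (rng.random((K, H)) < q_probs).astype(float)  # K samples from q
    log_q = log_bernoulli(h, q_probs)                 # log q(h | x)
    log_prior = log_bernoulli(h, sigmoid(b_p))        # log p(h)
    x_probs = sigmoid(h @ W_g.T + b_g)                # p(x = 1 | h)
    log_lik = log_bernoulli(np.broadcast_to(x, (K, D)), x_probs)
    log_w = log_prior + log_lik - log_q               # log importance weights
    return np.logaddexp.reduce(log_w) - np.log(K)     # stable log-mean-exp

x = (rng.random(D) < 0.5).astype(float)               # a random toy observation
print("importance-weighted bound on log p(x):", iw_bound(x))
```

In training, gradients of a bound like this one pull on both networks at once: the generative network must explain the data, and the inference network must propose latent states the generative network finds plausible, which is one way the "make the job easy for both models" balance mentioned above can arise.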
We spoke with Jörg ahead of his presentation at the summit on 24-25 September to hear more about his work and what we can expect to see in the future of deep learning.

What are the key factors that have enabled recent advancements in deep learning?

A lot has been said about the increasing amount of data that is available, about large manually labelled datasets, and about the computational resources at our disposal nowadays. I think these were crucial factors, because the techniques and methods used in deep learning typically scale quite well. But we should not neglect the human aspect: machine learning, and especially deep learning, has attracted many motivated and smart people with diverse backgrounds who came to the field to "make a dent". The most basic ideas have been in place since the 1980s, but the community has learned a lot about how to make these methods shine on today's tasks and how to set up, regularize and train large models. There has also been tremendous progress on the theoretical side. For instance, we now understand much better how to relate concepts from deep learning to probabilistic modelling, and what obstacles to expect during gradient-based training.

What are the main types of problems now being addressed in the deep learning space?

Natural language processing is certainly an area of high interest, and we have seen a lot of progress in recent years. It seems that attention mechanisms and techniques for dealing with very deep and recurrent neural networks play an important role here. I would not be surprised if these approaches matured and became part of every deep learning practitioner's toolbox.

What developments can we expect to see in deep learning in the next 5 years?

Predicting the future is always hard. I expect that unsupervised, semi-supervised and reinforcement-learning approaches will play much more prominent roles than they do today. When we consider machine learning as a component in larger systems, e.g. in robotic control systems, or as a part that steers and focuses the computational resources of a larger system, it seems obvious that purely supervised approaches are conceptually too limited to solve these problems appropriately.

What advancements excite you most in the field?

Theoretical and practical advances that let unsupervised learning catch up with the success of supervised learning, and seeing systems that generate complex output and somehow "act" in the real world.

The Deep Learning Summit is taking place in London on 24-25 September, alongside the Future Technology Summit. For more information and to register, please visit the event website here.