The cost of large-scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach to circumventing this cost is to train models on synthetic data, where annotations are provided automatically. However, despite their appeal, models trained on synthetic images often fail to generalize to real images, and domain adaptation algorithms are needed before such models can be successfully applied. Dilip Krishnan, Research Scientist at Google, is working on two approaches to the problem of unsupervised visual domain adaptation, both of which outperform current state-of-the-art methods. He will share this work, alongside other insights, at the Deep Learning Summit in Boston. I spoke to him ahead of the summit on 25-26 May to learn more about his work and what we can expect from the deep learning field in the next few years.

Tell us more about your work, and give us a short teaser for your session.

I am a Research Scientist in Google's office in Cambridge, MA, where I work on supervised and unsupervised deep learning for computer vision. In my talk, I will focus on my work in the area of domain adaptation, where networks trained for a task in one domain (e.g. computer graphics imagery) can generalize to other domains (e.g. real-world images). This allows us to leverage large amounts of synthetic data with ground-truth labels. The work has applications in robotics and other areas where labeled training data is expensive to collect.

What started your work in deep learning?

I studied for my PhD at New York University under the supervision of Rob Fergus, in the same lab as Yann LeCun, one of the pioneers of deep learning. I was a co-author on the first paper on deconvolutional networks, which are useful as visualization and synthesis tools for deep convolutional networks.

What are the key factors that have enabled recent advancements in deep learning?

Clearly, large amounts of data and compute power are the biggest factors; they allow us to build larger models that can ingest larger amounts of training data. Better optimization methods (Adam, AdaDelta) and better tools (e.g. TensorFlow, distributed/asynchronous model training) have also played a role by making the engineering more efficient.

Which industries do you think deep learning will benefit the most, and why?

Initially, it will be industries and applications with large amounts of fairly clean labeled data; examples are internet companies such as Google and Facebook. Medical imaging applications can also benefit. We are seeing huge traction with intelligent voice-based assistants such as Amazon's Alexa and Google Home. In the medium term, self-driving cars powered by deep learning systems will arrive. Longer term, better generative models could impact fields such as art and music.

What advancements in deep learning would you hope to see in the next 3 years?

Better models for unsupervised learning and generative models. Also, more robust models for supervised learning that are less susceptible to adversarial examples. Finally, better theory to explain why these models work.

Dilip Krishnan will be speaking at the Deep Learning Summit in Boston on 25-26 May, taking place alongside the Deep Learning in Healthcare Summit.
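For readers curious what an unsupervised domain adaptation training loop can look like in code, below is a minimal sketch of one well-known approach: domain-adversarial training with a gradient reversal layer (Ganin & Lempitsky, 2015). This is not the specific method from Krishnan's talk, and the architecture, layer sizes, and hyperparameters are placeholder assumptions for illustration only.

```python
# A minimal, illustrative sketch of one well-known unsupervised domain
# adaptation idea: domain-adversarial training with a gradient reversal
# layer (Ganin & Lempitsky, 2015). NOT the specific method from the talk;
# all architecture and hyperparameter choices below are assumptions.
import tensorflow as tf


@tf.custom_gradient
def gradient_reversal(x):
    """Identity in the forward pass; negated gradient in the backward pass,
    so the feature extractor learns to confuse the domain classifier."""
    def grad(dy):
        return -dy
    return tf.identity(x), grad


# Shared feature extractor, a task head (trained on labeled synthetic data
# only), and a domain head (synthetic vs. real). Sizes are arbitrary.
features = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])
task_head = tf.keras.layers.Dense(10)    # e.g. 10 object classes
domain_head = tf.keras.layers.Dense(1)   # logit: synthetic (0) vs. real (1)

optimizer = tf.keras.optimizers.Adam(1e-4)
task_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
domain_loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)


def train_step(synth_images, synth_labels, real_images):
    with tf.GradientTape() as tape:
        f_synth = features(synth_images)   # labeled synthetic batch
        f_real = features(real_images)     # unlabeled real batch
        # Supervised task loss uses synthetic ground truth only.
        task_loss = task_loss_fn(synth_labels, task_head(f_synth))
        # Domain loss sees both domains through the reversal layer.
        f_all = gradient_reversal(tf.concat([f_synth, f_real], axis=0))
        domain_labels = tf.concat(
            [tf.zeros([tf.shape(synth_images)[0], 1]),
             tf.ones([tf.shape(real_images)[0], 1])], axis=0)
        domain_loss = domain_loss_fn(domain_labels, domain_head(f_all))
        loss = task_loss + domain_loss
    variables = (features.trainable_variables +
                 task_head.trainable_variables +
                 domain_head.trainable_variables)
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```

The design point is that a single extra adversarial loss term pushes the shared features toward domain invariance, so a task head trained only on synthetic labels has a better chance of transferring to real images.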
Confirmed speakers include Carl Vondrick, PhD Student, MIT; Sanja Fidler, Assistant Professor, University of Toronto; Charlie Tang, Research Scientist, Apple; Andrew Tulloch, Research Engineer, Facebook; and Jie Feng, Founder, EyeStyle. View more speakers here.

Early Bird tickets for the Boston summits are available until Friday 31 March. Register your place here.

[Image: Visual groupings applied to image patches, frames of a video, and a large scene dataset. Work by Dilip Krishnan, Daniel Zoran, Phillip Isola & Edward Adelson, more here.]