Are GANs the next step in Deep Learning?

Generative Adversarial Networks were once described by Yann LeCun as "the most interesting idea in the last 10 years in ML". The technique pits two neural networks against each other to generate new, synthetic instances of data that can pass for real data, opening many doors in the world of AI. With that in mind, we wanted to explore some of the applications of GANs currently in use through the five must-watch presentations below, from DeepMind, NASA, MIT, Insitro and Université de Montréal.

Applications of GANs - Video Presentations

A Deep Generative Model Approach To The Genetic Analysis Of Medical Images

Francesco Paolo Casale, Data Scientist at Insitro

In this presentation, Francesco introduces a new deep generative model for the genetic analysis of medical imaging, combining convolutional neural networks with structured linear mixed models to extract latent imaging features in the context of genetic association studies. The linked presentation includes an application of the method to brain MRI images from the Alzheimer's Disease Neuroimaging Initiative dataset, where it reveals both novel and known risk genes for neurological and psychiatric disorders. Francesco covers genetic association studies and how they are evaluated before looking at both the phenotypes and genetic variants of participants. See more on the effect of deep generative models on medical image analysis here.

Do Deep Generative Models Know What They Don't Know?

Balaji Lakshminarayanan, Staff Research Scientist at DeepMind

A neural network deployed in the wild may be asked to make predictions on inputs drawn from a different distribution than the training data. Generative models are widely viewed as a solution for detecting out-of-distribution (OOD) inputs and distributional skew, since they model the density of the input features, p(x). Balaji and DeepMind challenge this assumption with several counter-examples. In this presentation, Balaji explains his findings, including that deep generative models such as flow-based models, VAEs and PixelCNN, when trained on one dataset (e.g. CIFAR-10), can assign higher likelihood to OOD inputs from another dataset (e.g. SVHN). The presentation also investigates several of these failure modes in detail, helping us better understand this surprising phenomenon and, potentially, fix it. You can see Balaji's full presentation here.
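The failure mode can be illustrated with a toy density model standing in for a deep one (the 1-D Gaussian here is our simplifying assumption, not Balaji's actual setup): a density fit to broad in-distribution data can assign *higher* average likelihood to narrowly concentrated out-of-distribution samples, so "high likelihood" does not imply "in distribution".

```python
import math
import random

def gauss_logpdf(x, mu, sigma):
    """Log-density of N(mu, sigma^2) evaluated at x."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

random.seed(0)
mu, sigma = 0.0, 2.0                                        # our "trained" density model
in_dist = [random.gauss(mu, sigma) for _ in range(1000)]    # samples from the training distribution
ood     = [random.gauss(0.0, 0.1) for _ in range(1000)]     # tightly concentrated OOD samples

avg_in  = sum(gauss_logpdf(x, mu, sigma) for x in in_dist) / len(in_dist)
avg_ood = sum(gauss_logpdf(x, mu, sigma) for x in ood) / len(ood)
# avg_ood exceeds avg_in: the OOD set scores higher likelihood under the model.
```

This mirrors the CIFAR-10 vs. SVHN observation in spirit: the OOD data happens to sit in a high-density region of the learned model.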

Visualizing and Understanding Generative Adversarial Networks

David Bau, PhD Student at MIT CSAIL

The remarkable success of Generative Adversarial Networks in generating nearly photorealistic images leads to the question: how do they work? Are GANs just memorization machines, or do they learn semantic structure? What do these networks learn? In this talk, David introduces the method of Network Dissection to test the semantics captured by neurons in the middle layers of a network, and shows how recent state-of-the-art GANs learn a remarkable amount of structure. Even without any labels in the training data, neurons in a GAN trained to draw scenes will separately code for objects such as trees, furniture, and other meaningful objects. The causal effects of such neurons are strong enough that we can add and remove objects, and paint pictures directly, by manipulating the neurons of a GAN. These methods provide insights into a GAN's errors as well as the contextual relationships it learns. By cracking open the black box, we can see how deep networks learn meaningful structure, and we can gain understandable insights about a network's inner workings. See the full presentation from David here.
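Network Dissection's core measurement can be sketched in a few lines (a toy version under our own simplifying assumptions; the real method upsamples unit activations and compares them against human-annotated segmentation masks over many images): threshold a unit's activation map and score its overlap with a concept mask via intersection-over-union. A unit with consistently high IoU for a concept behaves as a detector for it.

```python
def iou(unit_mask, concept_mask):
    """Intersection-over-union between two flat boolean masks."""
    inter = sum(u and c for u, c in zip(unit_mask, concept_mask))
    union = sum(u or c for u, c in zip(unit_mask, concept_mask))
    return inter / union if union else 0.0

def threshold(activations, tau):
    """Binarize a unit's activation map at threshold tau."""
    return [a > tau for a in activations]

# Toy flattened activation map for one unit, plus a "tree" segmentation
# mask for the same spatial positions (both hypothetical).
unit_acts = [0.1, 0.9, 0.8, 0.2, 0.1, 0.7, 0.0, 0.1]
tree_mask = [False, True, True, False, False, True, False, False]

score = iou(threshold(unit_acts, 0.5), tree_mask)
```

The causal experiments in the talk then go one step further: ablating or activating such units and regenerating the image to remove or insert the corresponding object.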

Building Generative Models Of Symptomatic Health Data for Autonomous Deep Space Missions

Krittika D'Silva, AI Researcher at NASA Frontier Development Lab

In this presentation, Krittika talks about her work at NASA FDL, in which she examined how AI can be used to support medical care in space. Future NASA deep space missions will require advanced medical capabilities, including continuous monitoring of astronaut vital signs to ensure optimal crew health. The presentation also discusses how biosensor data collected from NASA analog missions can be used to train AI models to simulate various medical conditions that might affect astronauts. Other topics covered include the future of AI and space medicine, continuous monitoring using wearables, symptomatic vs. asymptomatic data and more. See the full presentation from Krittika here.

New Optimization Perspective On Generative Adversarial Networks

Simon Lacoste-Julien, VP Lab Director/Associate Professor at SAIT AI Lab/Université de Montréal

Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train. One common way to tackle this issue has been to propose new formulations of the GAN objective, yet surprisingly few studies have looked at optimization methods designed for this adversarial training. In this work, Simon casts GAN optimization problems in the general variational inequality framework and investigates the effect of noise in this context. The work then proposes a variance-reduced version of the extragradient method, which shows very promising results for stabilizing the training of GANs. Other topics of interest covered include variance reduction, extragradient methods & SVRE. See the full presentation here.
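The extragradient idea is easiest to see on a toy bilinear min-max game, min over x and max over y of f(x, y) = x*y: plain simultaneous gradient steps spiral away from the equilibrium at (0, 0), while the extrapolation ("lookahead") step pulls the iterates back in. This is a minimal deterministic sketch of the base method only; the variance-reduced SVRE variant from the talk adds stochastic variance reduction on top.

```python
def simultaneous_gda(x, y, lr=0.1, steps=200):
    """Plain simultaneous gradient descent-ascent on f(x, y) = x * y."""
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x
    return x, y

def extragradient(x, y, lr=0.1, steps=200):
    """Extragradient: take a lookahead step, then update from the lookahead point."""
    for _ in range(steps):
        xh, yh = x - lr * y, y + lr * x   # extrapolation (lookahead) step
        x, y = x - lr * yh, y + lr * xh   # update using gradients at the lookahead point
    return x, y

x0, y0 = 1.0, 1.0
gx, gy = simultaneous_gda(x0, y0)   # spirals outward, away from (0, 0)
ex, ey = extragradient(x0, y0)      # contracts toward the equilibrium (0, 0)
```

The same rotational failure mode is one intuition for why naive simultaneous updates of a GAN's generator and discriminator can oscillate or diverge.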

Interested in getting first access to more content? Subscribe to our monthly newsletter! Register here.