Are GANs the next step in Deep Learning?
Generative Adversarial Networks were once described by Yann LeCun as "the most interesting idea in the last 10 years of Machine Learning". The technique pits two neural networks against each other, a generator producing synthetic data and a discriminator trying to tell it apart from real data, to generate new instances that can pass for the real thing, opening many doors in the world of AI. With that in mind, we wanted to explore some of the applications of GANs currently in use through the five must-watch presentations below, from DeepMind, NASA, MIT, Insitro and Université de Montréal.
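To make the adversarial game concrete before diving into the talks, here is a minimal, purely illustrative sketch in NumPy: a one-parameter generator tries to match a Gaussian, while a logistic discriminator tries to tell its samples from the real data. All names, parameters and learning rates here are our own toy choices, not taken from any of the presentations.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Real data: Gaussian centred at 4. Generator G(z) = theta + z tries to match it.
REAL_MEAN = 4.0
theta = 0.0            # generator parameter (the mean it draws around)
w, b = 0.1, 0.0        # linear logistic discriminator D(x) = sigmoid(w*x + b)
lr_d, lr_g = 0.05, 0.05

for step in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, size=64)
    z = rng.normal(0.0, 1.0, size=64)
    fake = theta + z

    # Discriminator step: maximise log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(-(1 - d_real)) + np.mean(d_fake)
    w -= lr_d * grad_w
    b -= lr_d * grad_b

    # Generator step: non-saturating loss, minimise -log D(G(z)).
    d_fake = sigmoid(w * (theta + z) + b)
    grad_theta = np.mean(-(1 - d_fake) * w)
    theta -= lr_g * grad_theta

print(f"learned generator mean: {theta:.2f} (target {REAL_MEAN})")
```

Even this toy version shows the characteristic instability of GAN training: the two players chase each other, and the generator's parameter tends to oscillate around the target rather than settle cleanly, a theme several of the talks below pick up.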
Application of GANs - Video Presentations
Francesco Paolo Casale, Data Scientist at Insitro
In this presentation, Francesco introduces a new deep generative model for the genetic analysis of medical imaging, combining convolutional neural networks with structured linear mixed models to extract latent imaging features in the context of genetic association studies. The linked presentation includes an application of the method to brain MRI images from the Alzheimer's Disease Neuroimaging Initiative dataset, revealing both novel and known risk genes for neurological and psychiatric disorders. Francesco covers genetic association studies and how they are evaluated before looking at both the phenotypes and genetic variants of participants. See more on the effect of deep generative models on medical image analysis here.
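As a rough caricature of the association step only: once a latent imaging feature has been extracted, the question is whether it varies with a genetic variant. The sketch below uses plain least squares on entirely synthetic data; the actual method in the talk uses structured linear mixed models to account for confounders such as population structure, which this ignores.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Hypothetical genotype: 0/1/2 minor-allele counts per participant.
genotype = rng.integers(0, 3, size=n).astype(float)

# Hypothetical latent imaging feature (e.g. from a CNN encoder), weakly linked
# to the variant, plus noise. Effect size 0.3 is an arbitrary toy choice.
feature = 0.3 * genotype + rng.normal(size=n)

# Ordinary least squares of feature ~ genotype, with a t-statistic for the slope.
X = np.column_stack([np.ones(n), genotype])
beta, *_ = np.linalg.lstsq(X, feature, rcond=None)
resid = feature - X @ beta
sxx = np.sum((genotype - genotype.mean()) ** 2)
se = np.sqrt(resid @ resid / (n - 2) / sxx)
t_stat = beta[1] / se

print(f"estimated effect={beta[1]:.2f}, t={t_stat:.1f}")
```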
Balaji Lakshminarayanan, Staff Research Scientist at DeepMind
A neural network deployed in the wild may be asked to make predictions for inputs that were drawn from a different distribution than that of the training data. Generative models are widely viewed as a solution for detecting out-of-distribution (OOD) inputs and distributional skew, as they model the density of the input features p(x). Balaji and DeepMind challenge this assumption by presenting several counter-examples. In this presentation, Balaji explains his findings, including that deep generative models such as flow-based models, VAEs and PixelCNN, trained on one dataset (e.g. CIFAR-10), can assign higher likelihood to OOD inputs from another dataset (e.g. SVHN). The presentation also investigates several of these failure modes in detail, helping us better understand this surprising phenomenon and how it might potentially be fixed. You can see Balaji's full presentation here.
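The assumption being challenged can be sketched in a few lines: fit a density model to in-distribution data, then flag inputs whose likelihood falls below a threshold as OOD. Here a 1-D Gaussian stands in for p(x); this is purely illustrative, and the point of the talk is precisely that for deep generative models on images this simple test can fail badly.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Training" data: fit a simple density model (a 1-D Gaussian) to in-distribution samples.
train = rng.normal(0.0, 1.0, size=5000)
mu, sigma = train.mean(), train.std()

def log_density(x):
    """Log-likelihood under the fitted Gaussian, our stand-in for log p(x)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

# Threshold: flag anything less likely than 99% of training points as OOD.
threshold = np.quantile(log_density(train), 0.01)

def is_ood(x):
    return log_density(x) < threshold

print(is_ood(np.array([0.1, 8.0])))  # a typical point vs. a far-away point
```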
David Bau, PhD Student at MIT CSAIL
The remarkable success of Generative Adversarial Networks in generating nearly photorealistic images leads to the question: how do they work? Are GANs just memorization machines, or do they learn semantic structures? What do these networks learn? In this talk, David introduces the method of Network Dissection to test the semantics captured by neurons in the middle layers of a network, and shows how recent state-of-the-art GANs learn a remarkable amount of structure. Even without any labels in the training data, neurons in a GAN trained to draw scenes will separately code for trees, furniture, and other meaningful objects. The causal effects of such neurons are strong enough that we can add and remove objects, and even paint pictures, directly by manipulating the neurons of a GAN. These methods provide insights about a GAN's errors as well as the contextual relationships it has learned. By cracking open the black box, we can see how deep networks learn meaningful structure, and we can gain understandable insights about a network's inner workings. See the full presentation from David here.
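The intervention itself is mechanically simple: once dissection identifies a unit correlated with a concept, you can ablate (zero out) that unit's feature map during a forward pass and let the later layers render the scene without it. A schematic NumPy stand-in, where the array shape and the `tree_unit` index are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the activations of one intermediate GAN layer: (units, height, width).
features = rng.random((512, 8, 8))

tree_unit = 37  # hypothetical unit that dissection linked to "trees"

def ablate(feats, unit):
    """Zero out one unit's feature map; downstream layers then render without that concept."""
    out = feats.copy()
    out[unit] = 0.0
    return out

edited = ablate(features, tree_unit)
print("ablated unit activation sum:", edited[tree_unit].sum())
```

Adding an object works the same way in reverse: instead of zeroing the unit, its activations are raised in the spatial region where the object should appear.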
Krittika D'Silva, AI Researcher at NASA Frontier Development Lab
In this presentation, Krittika talks about her work at NASA FDL, in which she examined how AI can be used to support medical care in space. Future NASA deep-space missions will require advanced medical capabilities, including continuous monitoring of astronaut vital signs to ensure optimal crew health. The presentation also discusses how biosensor data collected from NASA analog missions can be used to train AI models to simulate various medical conditions that might affect astronauts. Other topics covered include the future of AI and space medicine, continuous monitoring using wearables, symptomatic vs. asymptomatic data, and more. See the full presentation from Krittika here.
Simon Lacoste-Julien, VP Lab Director/Associate Professor at SAIT AI Lab/Université de Montréal
Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notoriously difficult to train. One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet surprisingly few studies have looked at optimization methods designed for this adversarial training. In this work, Simon casts GAN optimization problems in the general variational inequality framework and investigates the effect of noise in this context. The work then proposes a variance-reduced version of the extragradient method, which shows very promising results for stabilizing the training of GANs. Other topics of interest covered include variance reduction, extragradient methods and SVRE. See the full presentation here.
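The core extragradient idea (without the variance reduction that SVRE adds on top) is easiest to see on a toy bilinear game, min over x, max over y of x*y. Plain simultaneous gradient descent-ascent spirals away from the equilibrium at the origin, while the extrapolation ("lookahead") step pulls the iterates back in. A sketch under those assumptions, not code from the talk:

```python
import numpy as np

eta = 0.1
x0, y0 = 1.0, 1.0  # min_x max_y x*y has its unique equilibrium at (0, 0)

# Simultaneous gradient descent-ascent: spirals outward on this game.
xs, ys = x0, y0
for _ in range(200):
    xs, ys = xs - eta * ys, ys + eta * xs

# Extragradient: take a lookahead step, then update using the gradient
# evaluated at the lookahead point.
xe, ye = x0, y0
for _ in range(200):
    xl, yl = xe - eta * ye, ye + eta * xe  # extrapolation point
    xe, ye = xe - eta * yl, ye + eta * xl  # update with lookahead gradients

print("GDA distance from equilibrium:", abs(xs) + abs(ys))
print("extragradient distance:       ", abs(xe) + abs(ye))
```

Per step, GDA multiplies the distance to the equilibrium by sqrt(1 + eta^2) > 1, while extragradient multiplies it by sqrt((1 - eta^2)^2 + eta^2) < 1, which is why the second loop converges.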
Interested in reading more leading AI content from RE•WORK and our community of AI experts? See our most-read blogs below:
Top AI Resources - Directory for Remote Learning
10 Must-Read AI Books in 2020
13 ‘Must-Read’ Papers from AI Experts
Top AI & Data Science Podcasts
30 Influential Women Advancing AI in 2019
‘Must-Read’ AI Papers Suggested by Experts - Pt 2
30 Influential AI Presentations from 2019
AI Across the World: Top 10 Cities in AI 2020
Female Pioneers in Computer Science You May Not Know
10 Must-Read AI Books in 2020 - Part 2
Top Women in AI 2020 - Texas Edition
2020 University/College Rankings - Computer Science, Engineering & Technology
How Netflix uses AI to Predict Your Next Series Binge - 2020
Top 5 Technical AI Presentation Videos from January 2020
20 Free AI Courses & eBooks
5 Applications of GANs - Video Presentations You Need To See
250+ Directory of Influential Women Advancing AI in 2020
The Isolation Insight - Top 50 AI Articles, Papers & Videos from Q1
Reinforcement Learning 101 - Experts Explain
The 5 Most in Demand Programming Languages in 2020