Neural networks have become proficient in many areas, from image and voice recognition to natural language understanding. Over the past few years their accuracy has improved to a level almost comparable to that of humans. There are, however, still many tasks that neural networks struggle with - human creativity, for example. Training a machine to compose a piece of music or paint a picture requires a different approach to training. Back in 2014, Ian Goodfellow first introduced Generative Adversarial Networks (GANs), which are able to build on their training in an unsupervised manner, analysing past mistakes and shortcomings and improving on them. A GAN is built from two components: the generator network (the trainee) and the discriminator network (the trainer).
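To make that two-player setup concrete, here is a minimal sketch of the architecture in PyTorch. This is not code from Goodfellow's paper - the layer sizes and toy data are placeholders - but it shows the generator turning random noise into fake samples while the discriminator learns to score samples as real or fake, with the two updated in alternation:

```python
# Minimal GAN skeleton: a generator that turns noise into fake samples and a
# discriminator that scores samples as real or fake. Layer sizes and the toy
# 2-D "data" are illustrative placeholders, not a real dataset.
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 16, 2

generator = nn.Sequential(          # the "trainee": noise -> fake sample
    nn.Linear(NOISE_DIM, 32), nn.ReLU(),
    nn.Linear(32, DATA_DIM),
)
discriminator = nn.Sequential(      # the "trainer": sample -> real/fake logit
    nn.Linear(DATA_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def training_step(real_batch):
    """One adversarial round: the discriminator learns to tell real from fake,
    then the generator learns to fool the updated discriminator."""
    n = real_batch.size(0)
    fake = generator(torch.randn(n, NOISE_DIM))

    # Discriminator step: real samples -> label 1, generated samples -> label 0.
    d_loss = bce(discriminator(real_batch), torch.ones(n, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(n, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output "real" (label 1).
    g_loss = bce(discriminator(fake), torch.ones(n, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one round on "real" data drawn from a toy 2-D Gaussian.
print(training_step(torch.randn(64, DATA_DIM)))
```

In a full training run this step is simply repeated over many batches, with the two optimisers pulling the networks in opposite directions.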
At the Deep Learning Summit in San Francisco this January 24 & 25, Ian will be sharing his most recent research progress with attendees. Early bird passes are available until this Friday, 7 December, so register now to guarantee your place at a discounted price.
Most deep learning algorithms need thousands or millions of labelled examples to produce the desired results, and the introduction of adversarial training is helping to reduce the need for so much data. In essence, a network is able to learn a complex task by trying to fool an ‘expert’ critic - the discriminator. GANs train two separate networks with competing goals, and have been used to draw images, categorise images, and identify sentiment, rules and instructions. Facebook and Google are among the many companies that now rely heavily on GANs in their deep learning models, and after his presentation in San Francisco last year, Ian spoke to us about his current work at Google Brain and the progression of GANs.
Ian currently leads a small group of researchers at Google Brain studying adversarial techniques in machine learning, exploring how it can be possible ‘to make algorithms that work well even when an adversary intentionally tries to make the algorithm fail.’ He explained that the team works on real-world scenarios, for instance a spammer trying to send an email that will get through a filter; in other cases they study imaginary adversaries, invented to give the machine learning algorithm more exercise and force it to get better. GANs, for example, ‘learn to generate realistic images by playing a game where a generator network has to make images that fool an object recognition network into thinking the fake images are real.’
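A widely cited illustration of this kind of adversary comes from Ian's earlier work on adversarial examples: the fast gradient sign method, which nudges an input just far enough in the direction that increases a classifier's loss that the prediction can flip. The sketch below is a minimal, self-contained version using a toy linear classifier and random data - none of it is taken from the summit material:

```python
# Fast gradient sign method (FGSM) sketch: perturb an input by epsilon in the
# direction that increases the classifier's loss, so a model that classified
# the clean input correctly may be fooled by the perturbed one.
# The tiny linear classifier and random data are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 3)            # toy classifier: 4 input features, 3 classes
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4)              # a "clean" input
y = torch.tensor([1])              # its true label
epsilon = 0.25                     # perturbation budget

x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()                    # gradient of the loss w.r.t. the input itself

# Move each input feature by +/- epsilon, whichever direction raises the loss.
x_perturbed = x + epsilon * x_adv.grad.sign()

print("prediction on clean input:    ", model(x).argmax(dim=1).item())
print("prediction on perturbed input:", model(x_perturbed).argmax(dim=1).item())
```

Defending against perturbations like this, rather than merely producing them, is the harder problem Ian's group is working on.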
With an undergraduate degree in computer science, Ian started off his work in AI with an internship in a neuroscience lab at the National Institutes of Health.
I ended up enjoying the machine learning aspect of the internship more than most of the other things I did there. When I came back to Stanford at the end of the summer, my academic advisor Jerry Cain suggested that I take Andrew Ng’s introduction to AI class. Before that class, I hadn’t taken AI seriously... Andrew’s class convinced me that machine learning was a real science and would be a good way to figure out how intelligence works.
Throughout his PhD, Ian studied under the supervision of both Aaron Courville and Yoshua Bengio, who spoke at the RE•WORK Deep Learning Summit in Montreal last week, and you can view the highlights here.
Speaking with RE•WORK, Ian answered four key questions about his current work and the impact of GANs on deep learning research and progress:
What are the most recent key advancements in deep learning, and how have they come about?
As of July 2017, the most recent advance that’s really key in my opinion is the May 2017 announcement of the new generation of Google TPUs. Machine learning is always held back by limitations in the amount of computation we can use. The new Google TPU helps to bridge the gap between the amount of computation we can leverage in deep learning experiments and the amount of computation used in a biological nervous system. The previous generation of TPU was available only to Google engineers, but the new one will be available to Cloud customers, and researchers can apply to get access for free. The new TPU also supports training the machine learning model, which is a major advance over the previous generation, which could run a trained model but not carry out the training itself. These advances came about as the result of years of R&D following a foresighted investment in this area by Google leadership.
What are the practical applications of your work in neural networks/GANs, and which sectors are most likely to be affected?
One practical application of generative adversarial networks is semi-supervised learning. Most deep learning algorithms today need thousands or millions of labeled examples: examples where the data shows a specific input and a specific output that the model should produce when it sees that input again. Semi-supervised learning algorithms can learn from both labeled examples and unlabeled examples - examples that include only the input. This means that they can learn from a small handful of labeled examples (maybe 100 or so) as long as there are still thousands of unlabeled examples available. GANs and other approaches to semi-supervised learning are likely to bring machine learning into a long tail of many different sectors that don’t have the massive investment in collection of labeled data that we’ve seen for object recognition.
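One common recipe for GAN-based semi-supervised learning in the research literature is to give the discriminator K + 1 output classes: the K real classes plus an extra 'fake' class, so that labeled, unlabeled and generated examples each contribute their own loss term. The sketch below only illustrates that idea - the class count, layer sizes and random data are placeholders, not Ian's implementation:

```python
# Sketch of GAN-based semi-supervised learning: the discriminator becomes a
# (K+1)-way classifier - K real classes plus one extra "fake" class.
# Labeled data trains the K real classes, unlabeled data only has to look
# "not fake", and generated samples are pushed into the fake class.
# Sizes, class count, and random data are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

K, DATA_DIM, NOISE_DIM = 10, 32, 16
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 64), nn.ReLU(),
                              nn.Linear(64, K + 1))       # last class = "fake"
generator = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(),
                          nn.Linear(64, DATA_DIM))

def discriminator_loss(x_labeled, y_labeled, x_unlabeled, x_fake):
    logits_lab = discriminator(x_labeled)
    logits_unl = discriminator(x_unlabeled)
    logits_fake = discriminator(x_fake)

    # 1) Supervised term: ordinary cross-entropy on the handful of labels.
    loss_supervised = F.cross_entropy(logits_lab, y_labeled)

    # 2) Unsupervised "real" term: unlabeled inputs should get low
    #    probability on the fake class (index K).
    p_fake_unl = F.softmax(logits_unl, dim=1)[:, K]
    loss_unlabeled = -torch.log(1.0 - p_fake_unl + 1e-8).mean()

    # 3) Unsupervised "fake" term: generated inputs should be classed as fake.
    fake_targets = torch.full((x_fake.size(0),), K, dtype=torch.long)
    loss_fake = F.cross_entropy(logits_fake, fake_targets)

    return loss_supervised + loss_unlabeled + loss_fake

# Example call: ~100 labeled points alongside many more unlabeled ones.
x_lab, y_lab = torch.randn(100, DATA_DIM), torch.randint(0, K, (100,))
x_unl = torch.randn(1000, DATA_DIM)
x_fake = generator(torch.randn(1000, NOISE_DIM)).detach()
print(discriminator_loss(x_lab, y_lab, x_unl, x_fake))
```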
What developments can we expect to see in deep learning in the next 5 years?
I want to highlight some developments that I think other people are likely to overlook:
- I think we’ll start to see a good set of best practices recommendations for how to make machine learning algorithms fair, when they’re used to make decisions that strongly affect people’s lives (like parole decisions, mortgage applications, etc.)
- I think we’ll start to see much stronger privacy guarantees, from techniques like differential privacy, federated learning, and maybe even homomorphic encryption.
- I think we’ll start to see machine learning algorithms that are very difficult for attackers to intentionally fool, but I don’t think we’ll see any security guarantees in the form of mathematical proofs of strong protection claims.
What potential advancements in machine learning excite you the most?
I’m very excited to see that machine learning for medicine is gathering more momentum. In particular, I was proud to see that differentially private GANs were used to demonstrate a system for sharing clinical data without compromising patient privacy. When I was an undergraduate studying neuroscience, I was interested both in figuring out how intelligence works and in figuring out how to treat diseases of the brain. Part of why I wanted to study AI was that I realized that if I could invent more capable AI algorithms, other people could use those algorithms to solve hard problems in human biology and other sciences.
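As a rough illustration of the privacy ingredient behind phrases like 'differentially private GANs', the sketch below uses the familiar recipe of clipping each example's gradient and adding Gaussian noise before the model is updated, so no single patient's record can dominate the update. It is not the clinical-data system Ian refers to - the model, data and noise scale are placeholders, and a real system would also track the formal privacy budget:

```python
# Differential-privacy-style training step (in the spirit of DP-SGD):
# clip each example's gradient to bound its influence, then add Gaussian
# noise before updating the model. Model, data and noise scale are
# illustrative placeholders; a real system also tracks a privacy budget.
import torch
import torch.nn as nn

model = nn.Linear(8, 1)                        # toy model
loss_fn = nn.MSELoss()
CLIP_NORM, NOISE_STD, LR = 1.0, 0.5, 0.1

def private_step(xs, ys):
    """One clipped, noised gradient step over a small batch."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):                   # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        # Clip the whole per-example gradient to norm <= CLIP_NORM.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(CLIP_NORM / (total_norm + 1e-8), max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noisy = (s + NOISE_STD * CLIP_NORM * torch.randn_like(s)) / len(xs)
            p -= LR * noisy                    # noisy, averaged gradient step

# Example: one private step on a random batch of 32 "records".
private_step(torch.randn(32, 8), torch.randn(32, 1))
```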
At the Deep Learning Summit in San Francisco this January 24 & 25, Ian will be sharing his most recent work and progress in GANs and deep learning at Google Brain.