To be a market leader in any industry it’s important to stay ahead of the curve, not only of your competitors but also of the latest trends and cutting-edge technologies. This is becoming ever more apparent as AI takes on a more prominent role in the business world.

Last week at the Machine Intelligence Summit in Amsterdam, the NVIDIA Deep Learning Institute ran a hands-on workshop for developers, data scientists, and engineers, designed to help attendees get started with training, optimising, and deploying neural networks to solve real-world problems in fields as diverse as self-driving cars, healthcare, online services, and robotics.

Before kicking off the workshop, Adam Grzywaczewski explained that ‘at NVIDIA we’re all about helping people solve challenging problems using AI and deep learning - we help developers, data scientists and engineers. As a deep learning solution architect I help companies build their products and try to get them to talk about AI.’

The workshop was open to attendees of any experience level, and focused on how to leverage deep neural networks (DNN) - specifically convolutional neural networks (CNN) - within the deep learning workflow to solve a real-world image classification problem using NVIDIA DIGITS on top of the Caffe framework and the MNIST hand-written digits dataset.
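The lab itself was driven through the DIGITS web interface rather than hand-written code, but to make the workflow concrete, here is a rough sketch of a comparable LeNet-style MNIST classifier. Note that this is our own illustration in Keras, not the Caffe configuration used in the workshop:

```python
# Illustrative only: a small LeNet-style CNN for MNIST image classification.
# The workshop itself used NVIDIA DIGITS on top of Caffe, not Keras.
import tensorflow as tf

# MNIST hand-written digits: 60,000 training and 10,000 test images, 28x28 pixels.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # scale pixels to [0, 1] and add a channel axis
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(20, 5, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(50, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(500, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),   # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(x_train, y_train, epochs=5, validation_split=0.1)
print(model.evaluate(x_test, y_test))                  # [test loss, test accuracy]
```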

Adam began by stressing the importance of understanding the problem you’re trying to solve, exploring the definition of deep learning along the way. He used the example of identifying an image in different states: if a car is missing a wheel or a roof, humans can still tell it’s a car, but that’s a challenge for a machine until it learns those identifiers. Neural networks are good at taking huge quantities of data and learning representations that pick out these features, without the errors and timescales that hand-written computer programmes face.

Adam brought up the ‘wielokąt’ (the Polish word for polygon) and asked the room if anyone knew what it was, then showed a series of example images and asked everyone to guess which of them were wielokąts. He ‘trained’ us as he would a neural network to identify the shape. This demonstrated how efficiently humans learn compared with neural networks, which need far more examples. This was Adam’s example of supervised learning.

‘Currently neural networks require different models for different problems, although this should hopefully change over the next few years. As it stands, an untrained network that hasn’t been fed any data doesn’t do anything.’

Once you have carried out the training by providing the model with numbers, it is able to approximate the function and learn how to map x to y: ‘a label of a cat is your y and a picture of your cat is an x.’ All of these problems are supervised problems; all the network does is learn the function that solves them, and the result is a trained model with a new capability. When you take that model and integrate it into a product, applying the capability to new data is the inference.
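To make that mapping idea concrete, here is a tiny sketch of our own (not code from the workshop) in which a model learns to map x to y and is then used for inference on a new input:

```python
# Toy supervised learning: training finds a function that maps inputs x to
# labels y; inference is applying that learned function to new data.
# (Our own minimal sketch, not material from the workshop.)
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)                  # the inputs (your "x")
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=200)  # the labels (your "y")

w, b = 0.0, 0.0        # an untrained model doesn't do anything useful yet
lr = 0.1
for _ in range(500):   # training: nudge w and b until they map x to y
    y_hat = w * x + b
    w -= lr * 2 * np.mean((y_hat - y) * x)
    b -= lr * 2 * np.mean(y_hat - y)

x_new = 0.25                                       # new, unseen data
print("inference:", w * x_new + b)                 # apply the trained capability
```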

Throughout the workshop, we focused on handwritten digit recognition, a problem that was a major challenge in the 90s and whose dataset, MNIST, is still heavily used. Computer vision and object classification aren’t the only problems that neural networks solve - they’re simply the most mature domain - and deep learning is also seeing success in unsupervised machine learning.

We were taken through the NVIDIA online lab, which walked us through training, beginning with model evaluation: the accuracy obtained on the validation dataset, the training loss, and the validation loss. Adam then explained how to read the results to identify whether our networks had behaved well or badly, and guided us through understanding them once we’d trained the networks.
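For anyone following along in code rather than in DIGITS, the same quantities can be read straight out of the training history of the earlier Keras sketch (again, purely illustrative):

```python
# Continuing the earlier Keras sketch: DIGITS plots these curves for you,
# but the same check can be made by hand from the training history.
train_loss = history.history["loss"]
val_loss = history.history["val_loss"]
val_acc = history.history["val_accuracy"]

print(f"final validation accuracy: {val_acc[-1]:.3f}")
# A well-behaved run shows both losses falling together; training loss that
# keeps dropping while validation loss rises is the classic sign of overfitting.
if val_loss[-1] > min(val_loss):
    print("validation loss has started rising - the network may be overfitting")
```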

To train the deep learning network you start with forward propagation, which yields an inferred label for each training image; the error between this prediction and the true label then drives the weight updates via backpropagation.
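As a rough illustration of that forward pass (our own sketch in plain numpy, not workshop code), a batch of images goes in and the network’s current weights produce an inferred label for each one:

```python
# Forward propagation through a tiny fully connected network: the inferred
# labels it produces are compared with the true labels during training, and
# the error is propagated back to update the weights.
import numpy as np

rng = np.random.default_rng(1)
images = rng.random((4, 784))                        # a batch of 4 flattened 28x28 images
W1, b1 = rng.normal(0, 0.01, size=(784, 128)), np.zeros(128)
W2, b2 = rng.normal(0, 0.01, size=(128, 10)), np.zeros(10)

hidden = np.maximum(0, images @ W1 + b1)             # ReLU hidden layer
logits = hidden @ W2 + b2                            # one score per digit class
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
inferred_labels = probs.argmax(axis=1)               # one inferred label per image
print(inferred_labels)
```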

‘It didn’t work for the last 40 years not because it was too complicated, but because we didn’t have enough data and there were some mathematical errors. The more data we have, the more accurate the results will be.’

‘A cheat is to do data augmentation: take the data and tweak it to get more accurate results from more data.’ Data augmentation is a fairly cheap way to expand your dataset.
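As a concrete (and again purely illustrative) picture of what those tweaks might look like, a few lines of numpy are enough to multiply a set of images several times over with flips, shifts and inversions:

```python
# Sketch of data augmentation: cheaply enlarging a training set by tweaking
# copies of the existing images. Illustrative only - DIGITS exposes similar
# options through its web interface.
import numpy as np

def augment(image):
    """Return a few altered copies of a single 28x28 greyscale image."""
    return [
        np.fliplr(image),            # mirror left-right
        np.roll(image, 2, axis=0),   # shift down by two pixels
        1.0 - image,                 # invert pixel intensities
    ]

rng = np.random.default_rng(2)
dataset = [rng.random((28, 28)) for _ in range(100)]             # stand-in images
augmented = [v for img in dataset for v in [img] + augment(img)]
print(len(dataset), "->", len(augmented), "training images")
```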

The online lab then asked us to implement data augmentation to build a larger dataset and see how it affects the results - introducing inverted and altered images increased accuracy across the board. We also extended the depth of the neural network by adding an extra layer. This improved performance, but as we learned, it’s not always straightforward: Adam explained that there are plenty of image-processing issues to work through to get to this stage.
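For comparison with the earlier Keras sketch, the ‘extra layer’ experiment amounts to a one-line change (again our stand-in, not the DIGITS/Caffe configuration used in the lab); more depth means more parameters to fit, so without enough data the gain is not guaranteed:

```python
# The same LeNet-style sketch with one additional fully connected layer.
import tensorflow as tf

deeper_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(20, 5, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(50, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(500, activation="relu"),
    tf.keras.layers.Dense(500, activation="relu"),   # the extra layer
    tf.keras.layers.Dense(10, activation="softmax"),
])
deeper_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
```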

We wrapped up the workshop by looking back at training versus programming neural networks for better performance, and then looked ahead to the next steps: experimenting with image classification on different datasets to improve accuracy, and learning to train existing networks on data for other challenges.
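One common way to train an existing network on data for a new challenge is fine-tuning: keep the features it has already learned and retrain only a small classifier on the new dataset. The sketch below is our own illustration of that idea, using a hypothetical five-class task rather than anything from the workshop:

```python
# Fine-tuning sketch: reuse a network pre-trained on ImageNet as a frozen
# feature extractor and train only a new classifier head on the new data.
# (Illustrative only; the class count and dataset names are hypothetical.)
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights="imagenet",
                                         include_top=False,
                                         input_shape=(224, 224, 3),
                                         pooling="avg")
base.trainable = False                                # freeze the learned features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),   # 5 classes in the new task
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(new_images, new_labels, epochs=3)         # train on the new dataset
```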

Missed out on learning from NVIDIA in Amsterdam?
Catch them in London, Montreal and San Francisco as our Global Deep Learning Summit Series continues: