We were delighted to be joined by Hugo Larochelle, Director at Google Brain, at the Deep Learning Summit in Montreal. Hugo took to the stage with a presentation focused on Few-Shot Learning (FSL), discussing not only the background of the topic but also the progress we should expect in the coming months and years as research develops. Few-shot learning has gained significant interest in the last few years, with an increasing number of methods proposed each month.

What is few-shot learning, I hear you ask? Few-shot learning is the problem of learning new skills and tasks from small amounts of labelled data. This differs from the mass-data regime commonly seen in deep learning, and it makes FSL a natural fit for areas of AI that must deliver accurate results from substantially fewer data points, computer vision being a great example.

Prior to getting into the technical capabilities of FSL, Hugo discussed the advancements in the field since his last presentation at RE•WORK.

“Since I last spoke, results on miniImageNet 5-way, 5-shot have improved significantly. This is down to both the method developments of recent months and an influx of extremely talented individuals choosing to focus their work and research on this topic area.”

Further to this, Hugo suggested that ablation studies, both published and currently underway, have bridged gaps in the research that were still evident as recently as two years ago.

It was evident from Hugo’s talk that developing algorithms for FSL still involves a great deal of manual groundwork, with the tedious task of data collection for image classifiers still very much present. The example given of the large datasets needed for visual classification referenced 64 classes of source data for meta-training, with an additional 20 classes collated separately for testing purposes. From these, only 5 classes are drawn for any one episode. This sampling is then repeated over and over to encourage the generalisation needed for problem solving. Sound complicated? It is exactly that! Hugo went on to explain that in meta-learning, each ‘episode’ feeds the model a small labelled support set, from which it must solve a 5-way classification problem on fresh query examples; the meta-learner thereby produces a predictor for that episode’s recognition task. The reality, as can be seen through the huge amount of data needed for incremental improvements in accuracy, is that the disparity between machine intelligence and human conceptual understanding is still vast.
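
To make the episode structure concrete, here is a minimal sketch in Python of how a 5-way, 5-shot episode might be drawn from such class splits. This is an illustration under our own assumptions rather than code from the talk; the function name and the `data_by_class` mapping (class name to a list of examples) are hypothetical.

```python
import numpy as np

def sample_episode(data_by_class, n_way=5, k_shot=5, n_query=15, rng=None):
    """Sample one few-shot episode: choose n_way classes, then a small
    labelled support set and a disjoint query set from each class."""
    rng = rng or np.random.default_rng()
    # Draw the episode's classes, e.g. 5 of miniImageNet's 64 training classes.
    classes = rng.choice(list(data_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        examples = data_by_class[c]
        idx = rng.permutation(len(examples))
        support += [(examples[int(i)], label) for i in idx[:k_shot]]
        query += [(examples[int(i)], label) for i in idx[k_shot:k_shot + n_query]]
    return support, query
```

Meta-training would then loop over thousands of such episodes drawn from the 64 training classes, while meta-testing draws them from the 20 held-out classes.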

Success here is measured by performance on meta-test sets with no overlapping classes. In short, what we are finding at this moment in time is that it is still possible to get high accuracy without using labelled data; in fact, the accuracy of few-shot learning results, both with and without labels, is very high. For the supervised setting, Google Brain uses prototypical networks, and for the unsupervised setting, centroid networks; at test time, as Hugo explained, the support set is clustered and those clusters are used as labels for classification. Hugo then noted that without labelled data, accuracy of up to 55% has been seen, and adding labels raises it by a mere 15 percentage points. Due to this, he suggested there is room to increase the difficulty of meta-training tasks and to raise the standard of meta-evaluation. The caveat to this promotion of unlabelled data is that, when a support set is provided, its labelled images remain genuinely useful for classification.
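
As a rough illustration of the supervised side of this, the sketch below shows how a prototypical network classifies a query once a fixed embedding function has been applied: each class’s support embeddings are averaged into a prototype, and queries are assigned to the nearest one. The names and the NumPy formulation are our own simplification, not Google Brain’s code; in the unsupervised centroid-network setting described above, the support labels would instead come from clustering the support embeddings.

```python
import numpy as np

def prototypical_classify(support_emb, support_labels, query_emb):
    """Nearest-prototype classification over embedded support/query arrays."""
    # In the unsupervised setting, support_labels would be cluster assignments
    # (e.g. from k-means on support_emb) rather than human annotations.
    classes = np.unique(support_labels)
    # One prototype per class: the mean of that class's support embeddings.
    prototypes = np.stack([support_emb[support_labels == c].mean(axis=0)
                           for c in classes])
    # Squared Euclidean distance from every query to every prototype.
    dists = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]
```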

Despite the recent developments, the question arose of whether we should be raising the bar for benchmarking developments and results. Could there be better benchmarks for few-shot learning? Are the labelled support sets actually useful? “I don’t really need those labels because I can infer them from the support set… so we propose a class semantics consistency criterion (CSCC).” Hugo then explained that Google Brain have proposed a benchmark dubbed ‘Meta-Dataset’. How can we create a better benchmark? It was suggested that this would require varying the classes in each episode, with variety also in the number of examples per task. It was further noted that stripping the data from a single dataset would not be suitable if the end goal is to broaden the abilities of few-shot learning, since the aim is to create as many classification possibilities as possible. The necessity for rich data was stressed again: the ability to learn was suggested to hinge on the ability to draw from several datasets.

Few-shot learning, whilst progressing in recent times, has shown a trade-off between the ability to learn and the ability to retain knowledge while learning. Hugo suggested that FSL implicitly learns to retain the knowledge and skills needed for a problem and then discards whatever it deems unnecessary for solving it. That said, he suggested this will improve over time, becoming increasingly effective. Currently, most of the methods we have rely on the same convolutional neural network (CNN) backbone, which would need an update before we see major development.

What are the key ways to better benchmarking? Learning across multiple datasets, varying the number of classes per episode, varying the number of examples per episode, and varying the frequency of each class within an episode.
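
Putting those four ingredients together, a sampler in the spirit of Meta-Dataset might look something like the sketch below. Again, this is a hedged illustration under our own assumptions (the names, ranges, and the nested `datasets` mapping are hypothetical), not the benchmark’s actual sampling algorithm.

```python
import numpy as np

def sample_varied_episode(datasets, max_way=10, max_shot=10, rng=None):
    """Sample an episode varying the source dataset, the number of classes,
    and the (imbalanced) number of examples per class."""
    rng = rng or np.random.default_rng()
    source = rng.choice(list(datasets))        # learn across multiple datasets
    data_by_class = datasets[source]
    n_way = int(rng.integers(2, max_way + 1))  # vary classes per episode
    classes = rng.choice(list(data_by_class), size=n_way, replace=False)
    support = []
    for label, c in enumerate(classes):
        # Varying shots per class also varies each class's frequency.
        k = int(rng.integers(1, max_shot + 1))
        picks = rng.choice(len(data_by_class[c]), size=k, replace=False)
        support += [(data_by_class[c][int(i)], label) for i in picks]
    return source, support
```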

Hugo closed his presentation with three key takeaways for attendees to research further after the summit:

  1. It is possible to achieve surprisingly high accuracy on current few-shot learning benchmarks without using labelled data
  2. The most popular few-shot learning methods aren't robust to training on heterogeneous sets of tasks
  3. As with most algorithms, none of these few-shot learning methods dominates in all settings (e.g. across different numbers of shots).

Speaker Bio

Hugo Larochelle is a Research Scientist at Google and Assistant Professor at the Université de Sherbrooke (UdeS). Before that, he worked at Twitter, and he spent two years in the machine learning group at the University of Toronto as a postdoctoral fellow under the supervision of Geoffrey Hinton. He obtained his Ph.D. at the Université de Montréal under the supervision of Yoshua Bengio. He is the recipient of two Google Faculty Awards. His professional involvement includes serving as an associate editor for IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), a member of the editorial board of the Journal of Artificial Intelligence Research (JAIR), and program chair for the International Conference on Learning Representations (ICLR) in 2015 and 2016.

Join us at the next edition of the global Deep Learning Summit Series or start a free trial of our extensive Video Library here.

https://videos.re-work.co/discover