As we approach the end of 2021, we wanted to share 13 of the most important AI papers of the year, as selected by the experts in the RE•WORK community who will be speaking at the Deep Learning Hybrid Summit in San Francisco in February 2022.

These papers are free to access and cover a range of topics from computer vision to the way deep learning is helping to uncover the mysteries of space.

Like what you see? You can join us and connect with our experts discussing trends and industry updates at the Deep Learning Hybrid Summit. Get your ticket here to join us in-person or virtually.

Vera Serdiukova, Senior AI Product Manager – NLP at Salesforce

Before joining Salesforce as Senior AI Product Manager, Vera Serdiukova built edge computing machine learning capabilities as a part of LG’s Silicon Valley Lab Advanced AI Team. Before that, she developed speech-enabled interfaces for Bosch’s Robotics, Connected Car, and Smart Home products.

1.    On the Opportunities and Risks of Foundation Models (2021) - Rishi Bommasani et al

AI is undergoing a paradigm shift with the rise of models that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. The paper calls these models ‘foundation models’ to underscore their critically central yet incomplete character, and it provides a thorough account of their opportunities and risks, ranging from their capabilities and technical principles to their applications and societal impact. Read the paper here.

Alexandra Ross, Senior Data Protection, Use & Ethics Counsel at Autodesk

Before becoming Director of Global Privacy and Data Security Counsel at Autodesk, Inc., Alexandra Ross was Senior Counsel at Paragon Legal and Associate General Counsel for Wal-Mart Stores. In 2019, she was the recipient of the Bay Area Corporate Counsel Award.

2.    Privacy Laws, Genomic Data and Non-Fungible Tokens (2020) - Gisele Waters, Ph.D. and Daniel Uribe

The first paper selected by Alexandra analyses the main legal requirements of the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR), and the intersections between privacy laws, genomic data, and smart contracts involving fungible and non-fungible tokens. The CCPA and GDPR impose several restrictions on the storing, accessing, processing, and transferring of personal data. Read the paper in full here.

3.    Strengthening Legal Protection against Discrimination by Algorithms and Artificial Intelligence (2020) - Frederik Zuiderveen Borgesius

The second paper highlighted by Ross addresses the current legal protections in Europe protecting against discriminatory algorithmic decision-making. The paper shows that existing regulatory instruments have severe weaknesses when applied to artificial intelligence and suggests how the enforcement of current rules can be improved. The paper also explores whether additional rules are needed. Read the full paper here.

Maithra Raghu, Senior Research Scientist at Google Brain

At Google Brain, Maithra Raghu works on machine learning design and human-AI collaboration. She holds a Master’s Degree in Mathematics from the University of Cambridge, and a Ph.D. in Computer Science from Cornell University.

4.    MLP-Mixer: An All-MLP Architecture for Vision (2021) - Ilya Tolstikhin et al

The first paper chosen by Raghu presents an architecture based exclusively on multi-layer perceptrons called MLP-Mixer. MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches, and one with MLPs applied across patches. When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference costs comparable to state-of-the-art models. Read the full paper here.
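The two layer types can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the paper's implementation: the dimensions and weight names are invented, and layer norms and the GELU nonlinearity are simplified away.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    # Two-layer perceptron with a ReLU nonlinearity (the paper uses GELU).
    return np.maximum(x @ w1, 0.0) @ w2

def mixer_block(tokens, w_tok1, w_tok2, w_ch1, w_ch2):
    # Token-mixing: transpose so the MLP mixes information ACROSS patches.
    tokens = tokens + mlp(tokens.T, w_tok1, w_tok2).T
    # Channel-mixing: the MLP acts independently WITHIN each patch.
    return tokens + mlp(tokens, w_ch1, w_ch2)

n_patches, channels, hidden = 16, 8, 32
x = rng.standard_normal((n_patches, channels))
w_tok1 = 0.1 * rng.standard_normal((n_patches, hidden))
w_tok2 = 0.1 * rng.standard_normal((hidden, n_patches))
w_ch1 = 0.1 * rng.standard_normal((channels, hidden))
w_ch2 = 0.1 * rng.standard_normal((hidden, channels))

out = mixer_block(x, w_tok1, w_tok2, w_ch1, w_ch2)
print(out.shape)  # (16, 8): the block preserves the token grid's shape
```

Note that no convolutions or attention appear anywhere: the transpose alone lets one MLP share information across patches while the other operates per-patch.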

5.    Learning Transferable Visual Models From Natural Language Supervision (2021) - Alec Radford et al

Raghu’s second recommended paper investigates a potential alternative to modern computer vision systems. The paper demonstrates that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. You can access the full paper here.
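The caption-matching objective boils down to scoring every image against every caption in a batch and rewarding the matched pairs. Here is a hedged sketch of that scoring step; the embeddings are synthetic stand-ins for the paper's trained encoders, and the temperature value is illustrative.

```python
import numpy as np

def contrastive_logits(img_emb, txt_emb, temperature=0.07):
    """Cosine-similarity matrix between image and caption embeddings;
    entry (i, j) scores image i against caption j."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    return img @ txt.T / temperature

rng = np.random.default_rng(1)
captions = rng.standard_normal((4, 64))
images = captions + 0.05 * rng.standard_normal((4, 64))  # matched pairs

logits = contrastive_logits(images, captions)
# Training applies a symmetric cross-entropy that pushes the diagonal up,
# so each image's best-scoring caption should be its own.
print(logits.argmax(axis=1))  # [0 1 2 3]
```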

Ryan Alimo, Lead ML Scientist at NASA Jet Propulsion Laboratory

Dr. Ryan Alimo is an ML/AI scientist at NASA’s JPL and founder of OPAL AI Inc, a technology startup in Silicon Beach. Dr. Alimo’s research interests span the theory and practice of data-driven optimization, machine vision, and swarm autonomy.

6.    Anomaly Detection for Data Accountability of Mars Telemetry Data (2020) – Dounia Lakhmiri et al

The first of five papers recommended by Alimo presents a hybrid derivative-free optimization algorithm designed to quickly produce efficient variational autoencoders that assist the Mars Curiosity rover team’s ground data system analysis. Variational autoencoders are powerful deep neural networks that can learn to isolate anomalies in telemetry, yet, like any deep network, they require an architectural search and fine-tuning to yield exploitable performance. Read the whole paper here.
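The core anomaly-detection idea is simple: a model trained only on normal telemetry reconstructs normal samples well and anomalous ones poorly, so reconstruction error becomes the anomaly score. The sketch below shows only that scoring step, with a PCA projection standing in for the trained variational autoencoder; the data and threshold are invented.

```python
import numpy as np

def anomaly_scores(x, reconstruct):
    """Per-sample reconstruction error; high error flags an anomaly."""
    return np.mean((x - reconstruct(x)) ** 2, axis=1)

# Synthetic "normal" telemetry lying near a 1-D subspace of 3-D space.
rng = np.random.default_rng(2)
normal = rng.standard_normal((200, 1)) @ np.array([[1.0, 2.0, 3.0]])
normal += 0.01 * rng.standard_normal(normal.shape)
anomaly = 5.0 * rng.standard_normal((1, 3))  # off the learned manifold

# Stand-in autoencoder: a linear bottleneck (top principal direction).
_, _, vt = np.linalg.svd(normal, full_matrices=False)
pc = vt[0:1]

def reconstruct(x):
    return (x @ pc.T) @ pc  # encode to 1-D, then decode back to 3-D

scores = anomaly_scores(np.vstack([normal, anomaly]), reconstruct)
threshold = 3.0 * scores[:200].max()
print(scores[-1] > threshold)  # the off-manifold sample stands out
```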

7.    Deep Space Network Scheduling via Mixed-Integer Linear Programming (2021) – Alex Sabol et al

Alimo’s second highlighted paper describes the process by which the authors created an algorithm to address the scheduling problem of NASA’s Deep Space Network (DSN). The DSN is oversubscribed, meaning that only a subset of the requested activities can be scheduled. You can read the paper here.

8.    Multi-Agent Motion Planning using Deep Learning for Space Applications (2020) - Kyongsik Yun et al

Paper number three from Ryan Alimo outlines the application of a deep neural network to transform computationally demanding mathematical motion planning problems into deep learning-based numerical problems. The paper shows that optimal motion trajectories can be accurately replicated using deep learning-based numerical models in several 2D and 3D systems with multiple agents. Read the full paper here.

9.    AI Agents in Emergency Response Applications (2021) – Aryan Naim et al

Alimo’s fourth recommended paper proposes an agent-based architecture for the deployment of AI agents to address the need for low-latency, reliable analytics in mission-critical ‘edge AI’ support for emergency services personnel. Read the full paper here.

10.    Assistive Relative Pose Estimation for On-orbit Assembly using Convolutional Neural Networks (2020) – Sonawani et al

In the last of Alimo’s recommended papers, a convolutional neural network is used to uniquely determine the translation and rotation of an object of interest relative to a camera in space. Unlike many current approaches to pose estimation for spacecraft and other objects in space, the model does not rely on hand-crafted, object-specific features, which makes it more robust and easier to apply to other types of spacecraft. Read the full paper here.
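A relative pose is typically a 3-D translation plus a rotation. One common way a network's raw output is decoded into such a pose is shown below; this is a generic parameterization for illustration, and the paper's actual output head may differ.

```python
import numpy as np

def decode_pose(raw):
    """Split a 7-d network output into a translation vector and a
    rotation, here parameterized as a unit quaternion."""
    t, q = raw[:3], raw[3:]
    return t, q / np.linalg.norm(q)  # normalize to a valid rotation

# Pretend network output: 3 translation values + 4 quaternion values.
t, q = decode_pose(np.array([0.5, -1.0, 2.0, 1.0, 0.0, 0.0, 1.0]))
print(t, round(float(np.linalg.norm(q)), 6))  # quaternion has unit norm
```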

Access the event brochure to discover our speaker line-up and topics we will cover

Sebastian Raschka, Assistant Professor of Statistics at the University of Wisconsin–Madison

11.    Highly Accurate Protein Structure Prediction with AlphaFold (2021) - John Jumper et al

Raschka’s first recommended paper looks at how deep learning has been applied to produce state-of-the-art protein structure models. The paper provides the first computational method that can regularly predict protein structures with atomic accuracy even in cases in which no similar structure is known. The paper validates an entirely redesigned version of their neural network-based model, AlphaFold, in the challenging 14th Critical Assessment of protein Structure Prediction (CASP14). Read the full paper here.

12.    An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (2021) – Alexey Dosovitskiy et al

The second paper recommended by Raschka applies the Transformer architecture directly to image recognition at scale. While the Transformer has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. Read the full paper here.
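The title's "16x16 words" refers to treating an image as a sequence of flattened 16x16 patches, which the Transformer then consumes like word tokens. A minimal NumPy sketch of that patch-extraction step, assuming the standard 224x224 RGB input size (the model's embedding and attention layers are omitted):

```python
import numpy as np

def to_patches(image, patch=16):
    """Split an H x W x C image into flattened patch 'words', the
    token sequence a vision Transformer consumes."""
    h, w, c = image.shape
    grid = image.reshape(h // patch, patch, w // patch, patch, c)
    # Reorder to (rows, cols, patch_h, patch_w, c), then flatten patches.
    return grid.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)

img = np.zeros((224, 224, 3))
tokens = to_patches(img)
print(tokens.shape)  # (196, 768): 14x14 patches, each a 16*16*3 vector
```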

13.    Self-supervised Pretraining of Visual Features in the Wild (2021) – Priya Goyal et al

Raschka’s final highlighted paper addresses self-supervised learning at scale. Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have reduced the gap with supervised methods. In this work, the authors explore whether self-supervision lives up to its expectations by training large models on random, uncurated images with no supervision. You can read the full paper here.

Did you enjoy this blog? If so, join us in San Francisco for more!