Wouldn't it be great if AI could tell us what it has in store for this year? For some time we have been told that AGI could be only years away; in the meantime, we have grown accustomed to the incremental yet incredible advances in AI over the last few years. Many suggest that the leaps and bounds made in 2019 will be outshone this year by another twelve months of research, experimentation and applied trials, so we asked some of our previous speakers to share their thoughts on what 2020 has in store!
Jeff Clune, Senior Research Manager, Uber AI Labs
Jeff supplied some great predictions for 2020, including the challenge of solving the remaining undefeated Atari games and an increased emphasis on each of the three pillars of AI-generating algorithms: meta-learned architectures, self-taught meta-learning algorithms, and the automatic generation of learning challenges. Jeff also suggested that things will continue to grow as ever more compute is thrown at training ever-larger neural networks, with billion-plus-parameter models becoming the new norm for the largest models, alongside a greater amount of work on self-supervised and unsupervised learning.
One thing I predict is that we're going to have to, as a community, come up with a new set of grand challenges. We've been solving grand challenges at such a quick rate that all of the ones that were universally agreed upon are no longer challenges because they've been solved, which is quite interesting. A few years back, if you asked people what the open challenges in AI were, most would have listed Go, StarCraft, Montezuma's Revenge, and multiplayer poker, but all of those have fallen in the last year or recent years. It will be interesting to see if we as a community find consensus on a new set of specific challenges, or instead simply start pursuing the many hard problems that remain in AI without clear, agreed-upon grand challenges like those above. One grand challenge that does remain is solving the entire set of Atari games, as a few of them are still unsolved. That is unfinished business for our community: in many ways the Atari benchmark is the face that launched a thousand RL ships.
The second pillar is self-taught meta-learning algorithms. I expect that more and more of the game-changing results will come from this school of machine learning rather than the traditional kind that uses hand-designed algorithms. The OpenAI Rubik's Cube project was one example from this year of meta-learned algorithms versus hand-designed algorithms, and we'll see more and more of it.
Finally, I predict that we will see increased emphasis on Pillar Three, which is automatically generating the learning challenges themselves, meaning an algorithm automatically generates its own challenges and solves them all at the same time. That allows an open-ended exploration of learning: an algorithm can in theory create a never-ending process of learning a variety of different skills and of solving an increasingly diverse and challenging set of problems. Our paper on POET is the most salient work in this direction, but there is also work by the StarCraft team at DeepMind, and there will be lots more to come in future years. Here is a talk summarizing POET.
Rana el Kaliouby, CEO, Affectiva
Rana was kind enough to give us three predictions for 2020. Firstly, Rana expects AI companies to be held to a stricter standard in regard to transparency:
Consumers are waking up to the fact that their personal data is being used for corporate gain - this has been aggravated by recent data scandals and data privacy hacks. Tech companies - especially AI companies that require massive amounts of data to fuel their deep learning algorithms - are not transparent about their use of data. Specifically, how they are collecting data, where they are storing it, who has access to it, what it’s being used for and ultimately, what that means for the end-user.
The second prediction from Rana for 2020? Power asymmetry. Powerful technologies like AI are often in the hands of large corporations and governments. This poses a number of challenges. The value that users get out of these technologies does not always measure up to the value that those deploying the technology gain from access to user data. Also, corporations or governments in control of technology can impact distribution and access to these technologies. This can have adverse implications for social and economic mobility. People with access to certain types of AI will be able to work more efficiently and will have a leg-up on those who don’t have access. I worry about the impact this can have on communities and populations that are already disadvantaged, as AI could continue to widen that gap.
Finally Rana suggested that we’ll see a rise of data synthesis methodologies to combat data challenges in AI:
Deep learning techniques are data hungry, meaning that AI algorithms built on deep learning can only work accurately when they’re trained and then validated on massive amounts of data. But companies developing AI are often challenged in getting access to the right kinds of data, and the necessary volumes of data. To combat this, many researchers in the AI space are beginning to test and use emerging data synthesis methodologies to overcome the limitations of the real-world data available to them. With these methodologies, companies can take data that has already been collected and synthesize it to create new data.
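To make the idea concrete, here is a minimal sketch of one simple family of data synthesis: augmenting already-collected samples with label-preserving transformations. This is an illustrative toy, not Affectiva's pipeline; the function name and parameters are our own, and the input is assumed to be image-like NumPy arrays with values in [0, 1].

```python
import numpy as np

def synthesize(images, seed=None, noise_scale=0.02):
    """Create new training samples from existing ones via simple
    label-preserving transformations: horizontal flips plus small
    Gaussian noise. `images` has shape (n, height, width)."""
    rng = np.random.default_rng(seed)
    flipped = images[:, :, ::-1]                           # mirror each image
    noisy = images + rng.normal(0.0, noise_scale, images.shape)
    # Keep pixel values in the valid range after adding noise.
    return np.clip(np.concatenate([flipped, noisy]), 0.0, 1.0)

# One collected batch yields twice as much synthetic data.
batch = np.random.default_rng(0).random((8, 32, 32))
augmented = synthesize(batch, seed=1)
print(augmented.shape)  # (16, 32, 32)
```

Real data synthesis ranges from transformations like these up to generative models that produce entirely new samples, but the goal is the same: more training data without more collection.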
Shalini Ghosh, Machine Learning Research Team Lead, Samsung AI
Consumer apps are seeing increased user interaction with rich media. These interactions can be enhanced by multimodal AI-powered systems for recommendation and content understanding. In 2020, I believe we will see the rise of multimodal AI. The research community is making strong advances in multimodal ML models trained on a combination of text, image, video and audio data. In parallel, engineering advances in specialized AI chips and accelerators are making it feasible to train large multimodal ML models. Even consumer devices are getting powerful enough to run inference of multimodal ML models on the device itself. This synergy of consumer need with advances in research and engineering is well poised to make 2020 the year of multimodal AI. Shalini was also kind enough to provide us with a full-length blog on multimodal AI which you can read here.
David Gunkel, Professor, Northern Illinois University
This past year has seen unprecedented investment in AI ethics and governance. 2020 will see amplification of this effort as stakeholders in Europe, China, and North America compete to dominate the AI policy and governance market. Europe might be odds-on favourite, since it was first to exit the starting block, but China and the US are not far behind. The technology of AI might be global in scope and controlled by borderless multinationals. But tech policy and governance is still a matter of nation states, and 2020 will see increasing involvement as the empires strike back.
Sonja Reid, CEO, OMGitsfirefoxx
As we enter more widespread adoption of AI, particularly autonomous vehicles, policy, accountability and ethics should be a key focus in 2020. As AI transitions from industry buzzword to business standard, questions that should be considered include: Who holds accountability, and how do we decide a standard for ethics in AI? What positions are at risk of being ‘phased out’, and what is needed to prepare for this shift? Will there be policies to encourage businesses to balance their human vs AI workforce? It's exciting to be a part of the modern-day Industrial Revolution!
Nikita Johnson, Founder, RE•WORK
In 2019 we witnessed breakthroughs in a number of areas that have allowed more widespread adoption of AI on an unprecedented level. Advances in techniques such as transfer learning and reinforcement learning have aided these breakthroughs and their adoption, helping to decouple system improvements from the constraints of our knowledge as humans. In 2020 I see a greater shift toward 'Explainable AI', which will move the industry toward greater transparency, accountability, and reproducibility of AI models and techniques. That said, we need to increase our knowledge of the limitations, as well as the advantages and disadvantages, of each tool, which is a never-ending topic of discussion. Enhanced understanding will increase our ability to build trust in the products we use, as well as allowing more justifiable decision making by AI!
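For readers new to the transfer learning Nikita mentions, the core move is reusing a model trained on one task as a frozen feature extractor for another, fitting only a small new head on the target data. The sketch below is a deliberately tiny stand-in: the "pre-trained" weights are random placeholders rather than weights from a real source task, and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained feature extractor. In real transfer learning
# these weights come from training on a large source task; here they are
# random placeholders, and crucially they stay frozen.
W_frozen = rng.standard_normal((20, 8))

def features(x):
    """Frozen base network: a fixed nonlinear projection of the input."""
    return np.tanh(x @ W_frozen)

# Small target-task dataset: only the new linear head is fit to it.
X = rng.standard_normal((50, 20))
y = rng.integers(0, 2, 50).astype(float)

F = features(X)
head, *_ = np.linalg.lstsq(F, y, rcond=None)  # train the head only

preds = features(X) @ head
print(preds.shape)  # (50,)
```

Because only the 8-parameter head is fit, far less target data is needed than training the whole model from scratch would require, which is exactly why the technique widens adoption.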
Sadid Hasan, Senior Director of AI, CVS Health
I think in 2020 AI researchers will continue to build novel, effective models with better contextual learning based on incorporating common-sense, situational, and imaginative knowledge, while putting more emphasis on enhanced explainability, reproducibility, and energy efficiency through optimal time and memory complexity for flexible on-device deployment. We will see further advances in self-supervised, language-model-based pre-training and transfer learning for difficult natural language understanding tasks, and in meta-learning, multitask learning, and multimodal learning towards solving more complex real-world problems; in particular, machine learning research for healthcare domain applications will reach new, impactful milestones. In the coming years, we will also see increased AI adoption, e.g. towards achieving operational efficiency across different industry verticals.
Anirudh Koul, Head of AI, Aira
Everyone should expect a growing ecosystem of practical tools and opportunities for even greater adoption of AI. Firstly, mobile AI adoption by developers will grow exponentially. With growing capabilities available, like well-known models already shipping in the operating system (e.g. BERT in Core ML 3 from Apple), on-device mobile training, easier-to-use frameworks like Fritz which ship Keras models directly to users' devices, practical tooling to reduce model sizes, and dedicated hardware acceleration in smartphones, more developers can include AI in their apps. Speaking of hardware, with the rise of accelerators and miniature GPUs like Google Coral, NVIDIA Jetson Nano and Intel Movidius, a growing number of DIY and industry projects will see people unleashing their creativity with AI. For reference, with all this computing power available, the fastest miniature DIY Robocars are already racing at close to 100 MPH (when scaled to real car size).
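One common example of the model-size-reduction tooling mentioned above is post-training quantization: storing weights as 8-bit integers plus a scale factor instead of 32-bit floats. The sketch below illustrates the arithmetic only; it is not the tooling of any particular framework, and the function names are our own.

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 with a single per-tensor scale,
    cutting storage roughly 4x at the cost of some precision."""
    scale = float(np.abs(weights).max()) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor: any scale works
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for use at inference time."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal(1000).astype(np.float32)
q, scale = quantize_int8(w)
restored = dequantize(q, scale)
print(w.nbytes, q.nbytes)                    # 4000 1000
print(np.abs(w - restored).max() < scale)    # True: error within one step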
With increasing options to learn reinforcement learning in virtual environments and transfer it to real-world robots, from AWS DeepRacer, to the AWS JPL Open-Source Rover Challenge, to full-sized autonomous cars like RoboRace, more opportunities will open up to learn and compete. As barriers to entering the AI field are lowered, more people will take the plunge, see things work, and get hooked on learning further. For beginners, and especially artists, emerging #NoCode tools like Runway ML, which allow remixing the outputs of several models into a pipeline, will help unleash creativity. Similarly, emphasis will shift from training AI to productizing AI at scale, especially as platforms like Kubeflow mature and become a standard across all cloud companies. For those with elite skills, AI residency programs will continue to grow (even oil giant Shell is opening one).
What do you think will be the hottest trends of 2020? We would love to hear your thoughts on the year ahead! I myself am looking forward to hearing more on deep reinforcement learning; I was lucky enough to hear some great presentations on it in 2019 and can't wait to see what could be in store! If you have any thoughts on the above blog, please don't hesitate to email me at [email protected]!