Having asked our experts what their personal 'must-read' paper choice would be, we also wanted to find out their opinion on the current state of AI, with specific emphasis on the biggest roadblocks or barriers seen in the AI space, both now and in the future.

Didn't manage to catch our last experts' blogs? You can see our experts' must-read paper suggestions in part 1 and part 2. The below touches on concerns around deepfakes, limited compute, model size and more.

Jane Wang, Senior Research Scientist, DeepMind

I get worried that the societal implications and ethical consequences of the newest AI technologies are not being thought out as well as they should be. There's kind of this culture with researchers in the field right now of "let's just see what's possible," in spite of the possible negative ramifications. Some particularly worrying examples include facial recognition for surveillance, and generating increasingly realistic deepfakes.

There is recognition that this is an important issue, and some steps are being taken in that direction. For instance, NeurIPS this year for the first time asked all submissions to include a "broader impacts" section commenting on the wider societal implications of the work, and groups such as the Partnership on AI have been formed specifically to tackle the problem of how to implement ethical and safe AI. But unfortunately, I don't think these kinds of perspectives are all that prevalent in the wider research community.


Alexia Jolicoeur-Martineau, PhD Researcher, MILA

The biggest roadblocks to generative modelling are 1) computing power, 2) the lack of long-term coherence, and 3) the inability of a model to understand what it doesn't know. Let's take a model that generates text and interacts with humans, such as GPT-2; it can say things that make sense and even answer hard questions correctly. However, if you ask nonsensical questions, it will try to give an answer rather than say "what you are asking doesn't make sense" or "I don't know". Furthermore, if you make it generate a large amount of text, there will be no long-term coherence; it will wander from one thing to another. If we wanted to make models that generate TV shows or movies, we would need a model that is coherent over time, which we don't have right now. Current AIs are really powerful, but still lack a true understanding of things. A lot more computing power will also be needed if we ever want to generate high-resolution videos.
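The coherence problem above can be illustrated with a deliberately tiny stand-in for a language model. The sketch below is not GPT-2; it is a toy bigram Markov chain (all corpus text and function names are invented for illustration). Each step only looks at the previous word, so every local transition is plausible while the passage as a whole drifts with no long-term plan — an exaggerated version of the failure mode described here.

```python
import random
from collections import defaultdict

# Toy bigram "language model": every word is chosen only from words
# that followed the previous word in the corpus, so transitions look
# locally sensible, but nothing constrains the text globally.
corpus = ("the cat sat on the mat . the dog chased the cat . "
          "the mat was red . the dog sat on the rug .").split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, n_words, seed=0):
    """Greedy random walk over bigram transitions."""
    random.seed(seed)
    out = [start]
    for _ in range(n_words - 1):
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

print(generate("the", 20))
```

Running it produces text where each adjacent word pair occurred in the corpus, yet the "story" hops between the cat, the dog and the mat with no memory of where it started — a miniature of the long-term coherence gap.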


Jekaterina Novikova, Director of Machine Learning, WinterLight Labs

It is close to impossible to name one biggest roadblock in general AI. The term "artificial intelligence" does not have a concrete, accepted definition. We can't agree on the definition of human intelligence, let alone the artificial one, and it's difficult to say what is blocking us from achieving something we can't even name. However, if we speak about narrow AI (AI that is able to handle just one particular task), then it is an umbrella of many different subtopics, such as Machine Learning, Robotics, Natural Language Processing, etc. Each of those subtopics has its own specific roadblocks and challenges that must be overcome in order to make progress in AI research. In addition, when it comes to productionization and the real-life application of AI, non-research-specific roadblocks come into play that are related to AI implementation, adoption and scaling.

My current research is focused on Machine Learning in Healthcare, and here I see some common challenges researchers are constantly dealing with. I recently gave a talk on real-life challenges in detecting cognitive diseases from human speech using ML (link to the slides), where I mentioned several roadblocks, for example:

  • Lack of relevant and appropriate data that is unbiased and does not compromise user privacy.
  • Limitations of English-only models. Most current developments in NLP are made in English and are not necessarily generalizable to non-English languages.

In the slides, I discuss more of these challenges and also present the research we are currently doing to overcome each of them.

Andriy Burkov, Director of Data Science, Gartner

If we talk about narrow AI, the kind we currently use in business, the biggest roadblock is the size of the models that provide state-of-the-art performance. The biggest Transformer ever trained, GPT-3, contains 175 billion parameters and costs millions of dollars to train. Only a handful of companies in the world can afford to train such models and use them in production. If, in turn, the term AI is used in a more general sense, as in artificial general intelligence (AGI), then the roadblock (and it's a huge one) is that we don't know exactly what science we need to develop to reach that level of machine intelligence. It is almost certainly not deep neural networks, as they are starting to reach their limits.
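A back-of-the-envelope calculation makes the scale concrete. The sketch below only estimates the memory needed to store the weights of a 175-billion-parameter model (training requires far more, for gradients, optimizer state and activations); the byte-per-parameter figures are standard float sizes, not GPT-3 specifics.

```python
# Rough memory just to *store* the weights of a 175B-parameter model.
params = 175e9            # 175 billion parameters, as stated above
bytes_fp32 = params * 4   # 32-bit floats: 4 bytes per parameter
bytes_fp16 = params * 2   # 16-bit floats: 2 bytes per parameter

gb = 1024 ** 3
print(f"fp32: {bytes_fp32 / gb:.0f} GB")  # ~652 GB
print(f"fp16: {bytes_fp16 / gb:.0f} GB")  # ~326 GB
```

Even in half precision, the weights alone are hundreds of gigabytes, far beyond any single accelerator, which is why serving such a model requires a cluster and why so few organisations can run one in production.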


Tamanna Haque, Senior Data Scientist at Jaguar Land Rover

Amongst stakeholders, business alignment on AI strategy relies on: a shared vision of effectively using data science to add business value, a mutual agenda on short- and long-term business objectives and priorities, and clarity on which departments own or provide resource for different aspects of projects.

A lack of alignment creates challenges for data scientists who are tasked with opportunity scanning and delivering growth, cost savings and value through analytics, with pressures heightened in the wake of negative economic impacts (the reality today). Between idea generation and delivery, data scientists aim to influence stakeholders to action and champion findings, but misalignment on AI strategy can breed apathy and put data scientists on the back foot to begin with.

Data scientists can build relationships with wider or cross-functional stakeholders to increase the likelihood of traction whilst increasing their knowledge of different business areas. They can also expand their network externally. A wide and diverse network keeps technical and commercial knowledge fresh whilst improving influence and communication skills, a practice which uses the help and expertise of others to enrich projects and get them over the line.


Oana Frunza, Vice President, NLP and ML Researcher at Morgan Stanley

Not necessarily a roadblock, but a direction we need to follow more is bridging the gap with other disciplines, especially the humanities. It is time to put the philosophy back in Doctor of Philosophy. While we are good at building amazing technology, we need to start bringing in more knowledge from the world outside our disciplines to increase adoption and build trust. We will continue to advance and push forward the field, but we have to make sure that it is fit for the world around us and that the world around us is ready to adopt the solutions we build.

AI is a “gift of fire” – we must continue focusing on it enhancing our capabilities without getting burnt by the fire.


Eric Charton, Senior Director, AI Science, National Bank of Canada

Massive usage of (costly) computing power for minor improvements in performance is the elephant in the room. It is also difficult to have a clear view of the limitations of Deep Learning in various application contexts, such as unbalanced data (used in credit scoring) and language models (dialogue systems, question answering systems, text classification). What we see in mainstream publications is difficult to reproduce, and we are only beginning to see scientific communications with rigorous analyses of the pros and cons of DL techniques.
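The unbalanced-data problem mentioned above is easy to demonstrate. The sketch below uses entirely hypothetical numbers (a 1% default rate is an assumption, not a real credit-scoring figure): a trivial "model" that always predicts the majority class looks excellent on accuracy while catching zero defaults, which is exactly why headline metrics from publications can be hard to interpret on imbalanced tasks.

```python
# Toy illustration of the accuracy paradox on unbalanced data.
# Hypothetical portfolio: 1% of loans default (label 1), 99% do not (label 0).
labels = [1] * 10 + [0] * 990
predictions = [0] * 1000   # "model" that always predicts the majority class

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall = sum(p == 1 and y == 1 for p, y in zip(predictions, labels)) / 10

print(f"accuracy: {accuracy:.1%}")           # 99.0%
print(f"recall on defaults: {recall:.0%}")   # 0%
```

A 99% accurate model that misses every default is useless for credit scoring, which is why metrics such as recall, precision or AUC matter more than accuracy in this setting.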


Jack Brzezinski, Chief AI Scientist, AI Systems and Strategy Lab

Since the inception of Artificial Intelligence, there have been periods of enthusiasm followed by "AI winters." Once you attempt to solve hard problems, unexpected roadblocks appear. In a few cases, those problems were attacked head-on. Douglas Lenat's approach to building the CYC system (https://www.cyc.com) is an example of such a frontal attack on common-sense reasoning, one of the hard problems in AI. In other cases, further research was necessary to move the needle in the right direction. Today, it is difficult to predict the single biggest issue that will be a roadblock. Elon Musk has suggested that the interface between computers and the human brain needs a lot of improving.

His solution is to invent a more direct connection. Tom Mitchell from Carnegie Mellon suggests that machine learning algorithms should be in a state of continuous learning, as opposed to a limited training phase carefully defined by a number of epochs. Perhaps the biggest obstacle exists outside the high-tech field. Over the past decade, an enormous amount of investment was redirected from R&D into stock buybacks and other financial operations. The legal environment often dictates the optimal course of action for big corporations. It might take a new AI maverick, similar to SpaceX in the aerospace industry, to shift the focus from financial markets back to research and development. Perhaps another space race between superpowers will spark innovation. Artificial Intelligence, nuclear fusion, solid-state batteries, quantum computing and pharmaceutical research are only a few of the many areas facing such roadblocks.


Abhishek Gupta, Founder, Montreal AI Ethics Institute

While there is emerging consensus on some of the fundamental principles relating to the ethics of AI, what is still starkly missing are operational guidelines for people to actually put these principles into practice. We need more discussions on moving from theory to practice, and that requires people with cross-domain expertise who are able to straddle both the field of ethics and the technical domain of AI. Such individuals, especially the ones who can seamlessly integrate findings from both fields, will be essential in removing this roadblock.


Jeff Clune, Research Team Leader, OpenAI

I had the pleasure of speaking with Jeff back in January. During his quick fire questions interview, he suggested that limited compute was a huge roadblock at this time - "Limited compute, we think we have a lot now, but we're going to need a lot more." Nearly seven months on, I thought I would check in to see if this was still the case. The answer? Unchanged and simple:

"Compute. Compute. Compute. In that order".


Interested in hearing more from our experts? You can see our previous expert blog series below:

Interested to hear from our experts in person? The Deep Learning 2.0 Virtual Summit takes place in January and brings together over 50 speakers to discuss the latest research methods in AI. See more on this summit, including the speakers and agenda, here.