With AI adoption growing year on year and new software and tools emerging all the time, we're excited to see what this year has in store for the AI world. So we thought: who better to ask than leading AI experts from Toyota, Meta, LinkedIn, and Johnson & Johnson, to name a few, about what they predict will be the key trends taking AI to the next level in 2023? Check out their top three trends below:

Vivek Verma

Mid Data Scientist (Innovation)

Toyota Connected North America

The first trend is an alternative product to ChatGPT, which will lead to competition in this space.

Secondly, an improved version of the current Stable Diffusion v2.0 model will most likely be released in 2023.

Thirdly, a new MLOps platform for NLP and NLU might be developed to solve business problems in market research, healthcare, and other domains.

Apostol Vassilev

Research Team Lead; AI & Cybersecurity Expert

National Institute of Standards and Technology (NIST)

Harnessing the generative power of AI models to create synthetic data that can be fed back into training better models.

Enabling a single model to do more. Right now, we have models that tend to specialize in individual tasks, hundreds of them. Developing multi-task models builds on the emergent skills of large language models, which can perform different tasks without being specifically designed for any of them. One example is the ability of models trained on multi-modal data (e.g., images and text) to find links between the different data modalities.

Progress in vision transformers. They show exciting capabilities by combining self-attention and convolution, and the recent progress we have witnessed in the generative power of text-to-image systems based on diffusion models provides evidence of this.
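
To make that combination concrete, here is a minimal sketch (in PyTorch, and not any particular published architecture) of a block that uses a convolutional stem for local feature extraction and a self-attention layer for global mixing of the resulting patch tokens:

```python
# Illustrative only: one way to combine convolution and self-attention in a
# vision model. A small conv stem extracts local features and downsamples the
# image into patch tokens; self-attention then mixes those tokens globally.
import torch
import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    def __init__(self, in_channels=3, dim=64, num_heads=4):
        super().__init__()
        # Convolutional stem: local feature extraction + downsampling into "patches"
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, dim, kernel_size=4, stride=4),
            nn.GELU(),
        )
        # Self-attention: global mixing across all patch tokens
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                          # x: (batch, channels, H, W)
        feats = self.stem(x)                       # (batch, dim, H/4, W/4)
        tokens = feats.flatten(2).transpose(1, 2)  # (batch, num_patches, dim)
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.norm(tokens + attended)        # residual connection

tokens = ConvAttentionBlock()(torch.randn(2, 3, 32, 32))
print(tokens.shape)  # torch.Size([2, 64, 64])
```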

Reshma Lal Jagadheesh

Senior Data Scientist, Enterprise Intelligence, Online

The Home Depot

Improving Natural Language Understanding and asking the right questions to customers to identify what they are looking for.

More research into improving automation related to Agent Assist.

Bots capable of handling complex multi-turn conversations by themselves, with less supervision and training.

Find out more here

Aayush Prakash

Engineering Manager

Meta Reality Labs

I would like to see more developments in synthetic data. Data is the hard problem of AI, and synthetic data is one of the most promising solutions to address this issue.  

Generative AI with diffusion models, where we can create large amounts of unseen content from simple textual cues, is another strong development going into 2023. This has already been extended to 3D via Magic3D and DreamFusion. This will unlock many content generation needs in the AI, gaming, and entertainment industries.

NeRF (Neural Radiance Fields) enables realistic 3D representations of real-world scenes. This can help unlock the generation of highly realistic 3D worlds at scale in the metaverse.

Karl Willis

Senior Research Manager

Autodesk

A new architecture to finally replace Transformers and address some of their known weaknesses, such as handling long-range dependencies.

Reduced compute costs and the commoditization of training and fine-tuning large models.

Continued breakthroughs with multi-modal models, especially text-to-3D and text-to-video with higher fidelity and temporal consistency.

Sam Stone

Director of Product Management, Pricing & Data

Opendoor

In 2023, I have my eye on generative models, ML Ops, and interpretability. Generative AI is one of the most buzzed-about technologies right now, and for good reason. Large language models ("LLMs") like OpenAI's GPT-3 have led the way, but generative models are increasingly multi-modal, meaning they accept multiple types of input, like images and text, and return different types of output, like a video with text labels. There's an enormous opportunity to help businesses by building specific applications on top of the large, general models.

Second, ML Ops is becoming increasingly beneficial for data scientists because it enables them to focus where they excel: on research to build and improve models. It applies tried-and-true software development practices, including automation, testing, and diagnostics, across the full model lifecycle. At Opendoor, we've adapted the structure and processes of our technical teams to allow and encourage them to pursue ML Ops.

Lastly, I see interpretability gaining momentum across industries. Deep neural networks have become increasingly powerful and widely used, but these models have also become harder to introspect. This has motivated ML researchers and engineers to invest in interpretability methods that explain the "why" behind model outputs. At Opendoor, interpretability is key, as our customers, home sellers and buyers, want to understand the "why" behind the prices we provide for the homes we are buying and selling.
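
As a concrete illustration of this kind of interpretability tooling (a generic sketch using permutation importance on synthetic data, not Opendoor's actual method or features), you can measure how much a model's score drops when each feature is shuffled:

```python
# Permutation importance, a common model-agnostic interpretability technique:
# shuffle one feature at a time and see how much the model's score degrades.
# The feature names below are hypothetical labels for synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular pricing problem.
X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)
feature_names = ["sqft", "bedrooms", "age", "lot_size", "distance_to_city"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Larger drop in score when shuffled => feature matters more to the model.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_imp in sorted(zip(feature_names, result.importances_mean),
                             key=lambda pair: -pair[1]):
    print(f"{name:>18}: {mean_imp:.3f}")
```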

Mathew Teoh

Senior Machine Learning Engineer

LinkedIn

It’s always hard to predict the future. I can’t say that any prediction you see here is accurate, but I imagine that treating all the predictions across this entire blog post as an ensemble model will give you a better answer than random guessing ;)

Real-time ML: shrinking the time gap between model training, feature computation, and the present means that your model predictions better reflect reality.

Human-in-the-loop: data is abundant, but high-quality labeled data is not. Labels are typically expensive when they come from human judgment. Determining where best to allocate human labeling effort will continue to be a problem worth solving.
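
One common way to decide where to spend labeling effort is uncertainty sampling from active learning. The sketch below is a generic illustration on synthetic data, not a description of LinkedIn's systems:

```python
# Uncertainty sampling: send the examples the current model is least sure
# about to human labelers first, so each label adds as much signal as possible.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Pretend we have a small labeled seed set and a large unlabeled pool.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_seed, y_seed, X_pool = X[:200], y[:200], X[200:]

model = LogisticRegression(max_iter=1000).fit(X_seed, y_seed)

# Uncertainty = how close the predicted probability is to 0.5.
proba = model.predict_proba(X_pool)[:, 1]
uncertainty = 1.0 - np.abs(proba - 0.5) * 2.0

# Indices of the pool examples most worth sending to human labelers.
to_label = np.argsort(-uncertainty)[:50]
print(to_label[:10])
```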

Embedding-based retrieval: ML systems such as search that include a retrieval step usually require strict conditions to be met, such as literal term matches. Embedding-based retrieval relaxes this constraint and lets us show better results.
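
A minimal sketch of the idea (illustrative only, with random stand-in vectors rather than embeddings from a trained encoder, and not LinkedIn's implementation): rank documents by the cosine similarity between a query embedding and the document embeddings instead of requiring literal term matches.

```python
# Embedding-based retrieval: score every document by cosine similarity to the
# query in embedding space and return the closest ones.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings; in practice these come from a trained text encoder.
doc_embeddings = rng.normal(size=(10_000, 128)).astype(np.float32)
query_embedding = rng.normal(size=128).astype(np.float32)

# Normalize so that a dot product equals cosine similarity.
doc_norm = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
query_norm = query_embedding / np.linalg.norm(query_embedding)

scores = doc_norm @ query_norm          # cosine similarity per document
top_k = np.argsort(-scores)[:10]        # indices of the 10 closest documents
print(top_k, scores[top_k])
```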

View the Summit here

Roopnath Grandhi

Product Leader, Entrepreneur, AI Leadership

Johnson & Johnson

More advancements in large language models like ChatGPT will be released, and more startups will explore customizing these models for different verticals and applications.

Custom silicon and chips like AWS Inferentia and Trainium will gain more traction, bringing down the cost of training and inference for large-scale deep learning models and enabling more innovation.

Ethical and Responsible AI will gain more prominence, and organizations and businesses will increase their investment in these areas.

Richard Socher

CEO

You.com

Chat Interfaces - Search and chat interfaces that help users find answers or take an action in a conversational way.
 
Generative AI for images - they will get better and faster and expand into multi-modal outputs. Instead of a single image or a single text, you will have dialogues, videos, sound, and music. Your creativity will no longer be limited by your execution skills, such as how well you can paint, draw, or sketch, or how well you can play an instrument or sing; it will be restricted mostly by your ideas. Your ideas will be able to come to life much more easily, which will mean an explosion in creative output.
 
A good example is the printing press. Yes, the printing press took away the jobs of the monks who copied books, but millions more books were created once we no longer had to copy them manually. Likewise, we will see millions more art pieces, songs, books, and poems created by people.
 
In some ways it'll take more work to create something truly novel. In the past, you could have created something novel by being inspired by or copying someone else's style. That will become so easy that no one will be impressed by a painting in the style of Van Gogh, Matisse, or Dalí. It will be much more impressive to come up with new styles and ideas.
 
Generative AI for medicine - it will be applied to protein structures, and that might change all of medicine.

Amey Dharwadker

Machine Learning Tech Lead

Meta

Graph neural networks (GNNs): GNNs are a class of AI models that can learn from complex graph-structured data, making them highly suitable for various real-world applications such as recommender systems, social network analysis, drug discovery, and even intelligent transportation.
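
To give a flavor of what a GNN layer does, here is a toy mean-aggregation message-passing step in plain NumPy (an illustration of the general idea only, not any specific GNN architecture): each node updates its representation by averaging its neighbors' features and passing the result through a learned transformation.

```python
# One message-passing step of a toy graph neural network layer:
# aggregate neighbor features, apply a linear transform, then a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

num_nodes, feat_dim, hidden_dim = 5, 8, 16
adjacency = np.array([                 # toy undirected graph
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=np.float32)
features = rng.normal(size=(num_nodes, feat_dim)).astype(np.float32)
weights = rng.normal(size=(feat_dim, hidden_dim)).astype(np.float32)

# Add self-loops, then average each node's own and neighbors' features.
adj_hat = adjacency + np.eye(num_nodes, dtype=np.float32)
adj_norm = adj_hat / adj_hat.sum(axis=1, keepdims=True)

# Aggregate, transform, apply a ReLU nonlinearity.
hidden = np.maximum(adj_norm @ features @ weights, 0.0)
print(hidden.shape)  # (5, 16)
```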

No-code/low-code AI tools: These tools are gaining popularity as they enable rapid development and prototyping of deep learning models without writing complex code. This makes it easier and faster than ever before for businesses, researchers, and hobbyists to build and deploy deep learning applications.

On-device deep learning: To make deep learning algorithms more widely available, there is an increasing demand to build, train, and deploy models that can run on low-power devices without relying on the cloud for processing power. This eliminates the need for data transmission, meaning applications can be more energy-efficient, privacy-friendly, and mobile.

Claim your 30% discount

All of these experts will be speaking at our upcoming Virtual AI Summit (Online, January 30-31) and AI Summit West (San Francisco, February 15-16).

To learn more about these predictions and how you can take your AI projects to the next level in 2023, you can join one of our events with our New Year Sale. Just register using the code NEWYEAR and SAVE 30% on all pass types.

Check out all of our 2023 events here!