Omdia Analysts’ Take: Key AI Trends to Watch in 2024

Omdia experts weigh in on upcoming AI trends: synthetic data, AI chips in PCs, GPU 'glut' and more

Ben Wodecki, Jr. Editor

December 25, 2023


This has been a pivotal year for AI, with the emergence of generative AI transforming how people and machines communicate.

AI Business sat down with analysts from sister research firm Omdia to find out the emerging AI trends and topics to watch in 2024.

1. Synthetic data to drive innovation

Synthetic data is already driving a great deal of the innovation coming out of the generative AI space itself. Many of the smaller models that are right now wowing people with capabilities matching those of frontier models like OpenAI’s GPT are doing so because they are trained on synthetic data generated by larger models. – Bradley Shimmin, chief analyst, AI and data analytics

In health care, there is the idea of using synthetic data generation for patient cohorts – we have seen organizations generating synthetic patient populations from underlying real patient data. This helps with privacy issues, and these cohorts can be curated toward a particular clinical trial. There are also some interesting use cases for synthetic data generation in the medical imaging space. – Andrew Brosnan, principal analyst, AI applications in life sciences
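The core idea Brosnan describes can be sketched in a few lines: fit simple per-field statistics on a real cohort, then sample new records from those statistics so that no real patient appears in the output. The following is a deliberately minimal, hypothetical illustration in plain Python – real systems use far more sophisticated generative models – and the field names and values are made up for the example.

```python
import random
import statistics

def fit_gaussian(values):
    """Estimate the mean and standard deviation of a numeric field."""
    return statistics.mean(values), statistics.stdev(values)

def synthesize_cohort(real_cohort, n, seed=0):
    """Generate n synthetic patients whose numeric fields follow the
    per-field Gaussian statistics of the real cohort."""
    rng = random.Random(seed)
    fields = real_cohort[0].keys()
    params = {f: fit_gaussian([p[f] for p in real_cohort]) for f in fields}
    return [{f: rng.gauss(mu, sigma) for f, (mu, sigma) in params.items()}
            for _ in range(n)]

# A tiny "real" cohort: ages and systolic blood pressure readings.
real = [
    {"age": 54, "systolic_bp": 128},
    {"age": 61, "systolic_bp": 135},
    {"age": 47, "systolic_bp": 122},
    {"age": 58, "systolic_bp": 131},
]
fake = synthesize_cohort(real, n=100)
```

The synthetic cohort preserves aggregate statistics (the mean age stays near 55) without reproducing any individual record, which is the privacy property Brosnan alludes to.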

2. More AI model training and development by enterprises

This was the year the enterprise built an AI training capability. … We should expect to see a whole lot more model training and development because the infrastructure is out there. On the other hand, this obviously cannot go on forever, because the only point of training an AI model is then to run inference against it. We are going to see, at some point in 2025, inferencing becoming more important.

Although people are inferencing bigger models, the cutting edge of development has shifted from mega labs like OpenAI or DeepMind to open-source projects. This has resulted in a surge of exciting, smaller, usually domain-specific projects, and a lot of work on the technology of fine-tuning and partial training. I think QLoRA will prove to be the most important AI paper of 2023 and perhaps beyond. As a result, I think we are going to see even more democratization of development. – Alexander Harrowell, principal analyst, advanced computing for AI
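For readers unfamiliar with the fine-tuning techniques Harrowell mentions: the key idea behind LoRA (which QLoRA builds on by also quantizing the frozen weights) is to leave the base weight matrix untouched and train only a low-rank update. A deliberately tiny plain-Python sketch, with illustrative sizes and values rather than anything from the paper:

```python
# Instead of updating a full d x d weight matrix, LoRA trains two thin
# matrices B (d x r) and A (r x d) and adds their product to the frozen
# weights. With r much smaller than d, far fewer parameters are trained.

def matmul(X, Y):
    """Plain-Python matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_update(W, A, B, alpha=1.0):
    """Return W + alpha * (B @ A), the LoRA-adapted weights."""
    delta = matmul(B, A)
    return [[W[i][j] + alpha * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

d, r = 64, 2
W = [[0.0] * d for _ in range(d)]   # frozen base weights (d x d)
B = [[0.1] * r for _ in range(d)]   # trainable, d x r
A = [[0.1] * d for _ in range(r)]   # trainable, r x d

W_adapted = lora_params = None
W_adapted = lora_update(W, A, B)
full_params = d * d                 # 4096 if we trained W directly
lora_params = d * r + r * d         # 256 trainable parameters
```

Here the adapter trains 256 parameters instead of 4,096 – a 16x reduction that grows with model size, which is why these techniques make fine-tuning feasible outside the mega labs.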

In the last few months of the year, we have seen an emphasis on right-sizing models for the task at hand. Companies invested in the big frontier models, like Microsoft, are building diminutive models such as Phi, which has a very small memory footprint for inferencing.

We are also seeing hardware companies such as IBM starting to build architectures, in both data center systems and chips, that focus on this right-sizing of models – processing smaller, lower-resolution workloads for transactions that would normally take a lot of time and memory. – Bradley Shimmin, chief analyst, AI and data analytics

3. ‘The year of the graph database’

What is going to drive LLMs and all enterprise analytical endeavors to the next level is graph functionality merged with vector search and vectorization. We can already see that starting to bubble to the surface: at Microsoft Ignite, they talked quite a bit about the value of Microsoft Graph in supporting large language models, and they want everyone to be able to build their own copilot. They think this will work because it combines the knowledge graph you have of a given user in a given company for a given workflow on a given day with use cases like RAG, which helps the models understand the context of a given query. – Bradley Shimmin, chief analyst, AI and data analytics
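A minimal sketch of how graph functionality and vector search can combine, assuming a toy document store and a hypothetical user-to-document graph (none of this reflects Microsoft Graph's actual API): the graph narrows retrieval to documents connected to the user's workflow, and vector similarity then ranks those candidates for a RAG prompt.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy document store with made-up 3-d "embeddings".
docs = {
    "vacation_policy": {"vec": [0.9, 0.1, 0.0], "text": "PTO rules..."},
    "q3_roadmap":      {"vec": [0.1, 0.9, 0.1], "text": "Q3 plans..."},
    "oncall_runbook":  {"vec": [0.0, 0.2, 0.9], "text": "Paging steps..."},
}

# Toy knowledge graph: which documents relate to a user's workflow.
graph = {"alice": {"q3_roadmap", "oncall_runbook"}}

def retrieve(user, query_vec, k=1):
    """Graph-filter then vector-rank: only documents linked to the user
    in the graph are candidates for the similarity search."""
    candidates = graph.get(user, set())
    ranked = sorted(candidates,
                    key=lambda d: cosine(query_vec, docs[d]["vec"]),
                    reverse=True)
    return ranked[:k]
```

Note that even a query vector close to `vacation_policy` can never surface it for `alice`, because the graph filter runs before the vector ranking – that per-user, per-workflow scoping is the value Shimmin describes.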

4. More AI chips in PCs

We are going to start seeing more PCs with AI chips in them. Intel and Apple have basically decided this for everybody else. Now AMD has an option, Intel has an option, and Qualcomm is making a great deal of noise about this. I expect we will see lots of people developing interesting local AI applications, not least because the Mac ecosystem is right there as a market to start with. It is going to be interesting to see what impact this has on accelerators. I could see a category of AI developer workstation PCs appearing, or the existing workstation category getting more acceleration. – Alexander Harrowell, principal analyst, advanced computing for AI

5. GPU shortage to ease

There will not be another chip shortage; if anything, the risk is on the other side. There has been a lot of work this year on finding second sources for various things. The big blocker on getting GPUs was TSMC's chip-on-wafer-on-substrate (CoWoS) packaging process. Looking further out, TSMC was both building more capacity and outsourcing some of the assembly and testing. For example, TSMC and Nvidia contracted with UMC in Taiwan to make interposers. Samsung is also meant to be opening up a 3D packaging process next year.

If anything, I think we will suddenly discover we have got too many chips. It will not be 2020 again, but it will be 2022 again – we will have suddenly run into a GPU glut. But I am not expecting that quite next year; I think there is probably enough momentum and enough people on back order to consume everything that comes out. – Alexander Harrowell, principal analyst, advanced computing for AI

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
