How LLMs on the Edge Could Help Solve the AI Data Center Problem

Locally run AI systems, or LLMs on the edge, could ease data center strain, but it may take time for this approach to go mainstream

Drew Robb

September 23, 2024


There has been plenty of coverage of the strain AI places on data center power. One way to ease that strain is "LLMs on the edge," an approach that runs AI models natively on PCs, tablets, laptops, and smartphones.

The obvious benefits of LLMs on the edge include lower LLM training costs, reduced query latency, enhanced user privacy, and improved reliability.
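To make the idea concrete, here is a minimal sketch of on-device inference, assuming the Hugging Face transformers library and a small open-weight model such as TinyLlama; the model choice and prompt are illustrative, not drawn from the article.

# A minimal sketch of local LLM inference. Assumes the transformers
# library is installed and the (illustrative) TinyLlama checkpoint
# has been downloaded to the device.
from transformers import pipeline

# Load a ~1.1B-parameter chat model small enough to run on a laptop CPU.
generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

# The query never leaves the device: no network round trip, no data shared
# with a cloud provider, and no load placed on a remote data center.
result = generator("Explain edge AI in one sentence.", max_new_tokens=60)
print(result[0]["generated_text"])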

If LLMs on the edge can ease the pressure on data centers by reducing their processing loads, they could eliminate the need for multi-gigawatt-scale AI data center factories. But is this approach really feasible? With discussion growing around moving the LLMs that underpin generative AI to the edge, we take a closer look at whether this shift can truly reduce data center strain.

Read the full article from IoT World Today's sister publication Data Center Knowledge.

About the Author

Drew Robb

Data Center Knowledge

Drew Robb has been a full-time professional writer and editor for more than twenty years. He currently works freelance for a number of IT publications.

