This Week's Most Read: Meet the AI Software Engineer and SXSW Updates

Also, Meta unveils the hardware it is using to train Llama 3, and Inflection upgrades its Pi chatbot

Ben Wodecki, Jr. Editor

March 14, 2024

5 Min Read

Here are this week's top trending stories:

Devin: AI Software Engineer that Codes Entire Projects from Single Prompt

An AI startup has launched what it claims to be the world’s first AI software engineer.

Cognition AI has unveiled Devin, an autonomous agent that can plan and execute complex software engineering tasks from a single prompt.

Housed in its own sandbox environment, Devin can solve tasks using its own code editor and web browser. It can even recall relevant context, learn over time and fix mistakes. For example, Devin can benchmark an AI model on different APIs.

Cognition showcased Devin benchmarking Meta’s Llama 2 across APIs from Replicate, Perplexity and Together. The system built the entire project on its own, fixing errors along the way.

Businesses could use Devin to build and deploy web apps, fix bugs in codebases and even train and fine-tune AI models.

Cognition is not marketing Devin as a replacement for human software engineers, describing it as a “teammate.”

Devin reports its progress in real time and works with human engineers, accepting feedback on projects.

Read more

DeepMind Co-founder on AGI and the AI Race - SXSW 2024

Artificial general intelligence might be here in a few years, but the full spectrum of practical applications is “decades away,” according to the co-founder of DeepMind.

Speaking on the sidelines of SXSW 2024, Shane Legg told a group of attendees that while AGI might be achieved in foundation models “soon,” more factors have to align for it to be practically deployed and used.

He said the cost of AI has to come down and its use in robotics has to mature, among other factors. If it is not economically feasible, companies will not adopt it broadly, no matter how impressive AGI may be. In the meantime, near-term applications of AGI are emerging, including AI-powered scientific research assistants.

Legg, who is the chief AGI scientist at Google DeepMind, suggested the term artificial general intelligence years ago after meeting an author who needed a title for a book about an AI system with broad capabilities, rather than one that excels at just a single task.

Legg suggested inserting the word ‘general’ between artificial and intelligence. He and a few others started popularizing the term in online forums. Four years later, Legg said someone else claimed to have coined the term before him.

Read more

AMD CEO Gets Down at SXSW 2024

The version of Lisa Su who came out on stage in jeans, a T-shirt and cowboy boots was decidedly not the usually serious, get-down-to-business AMD CEO that technologists recognize from past conferences.

Smiling broadly, relaxed and even a bit boisterous, Su welcomed attendees of SXSW 2024 to Austin, Texas. She had reason to: Su lives in Austin, even though the semiconductor company she runs is based in Silicon Valley.

“Is this a great crowd?” said Su, to a roar of assent from the audience. “Wonderful, welcome to Austin!”

The MIT graduate, who has helmed AMD for a decade, even brought a little bit of Hollywood glamor to SXSW: David Conley, whose firm was involved in the Oscar-winning animated short ‘War is Over,’ flew in after partying until 4:30 am at post-Oscar soirees the night before. His firm uses AMD chips.

AMD chips were also used in the production of Pixar’s ‘Elemental’ animated film and James Cameron’s Avatar 2. “Special effects require a tremendous amount of compute,” Su said.

Attendees also heard more about Su’s personal life. She described herself as a “nerd at heart” and said her first job was doing “grunt” work at a semiconductor lab, back when chips were the size of a dime or quarter.

“I was in semiconductors when it wasn’t sexy,” she said. “I don’t know if it’s sexy now but it’s sexier.”

Read more

Meta Reveals GPU Clusters Used to Train Llama 3

Meta has shared details on its AI infrastructure and unveiled new GPU clusters it is using to support next-generation model training, including Llama 3.

In a blog post, Meta provided information on two new data center-scale clusters. The new clusters are designed to support larger and more complex models than its previous hardware could.

The clusters each contain 24,576 Nvidia H100 GPUs, up from the roughly 16,000 Nvidia A100 GPUs in Meta’s original clusters. Omdia research published in 2023 placed Meta as one of Nvidia’s largest clients, snapping up thousands of its flagship H100s.

Meta will use the hardware to train current and future AI systems, and the blog post again references Llama 3, the successor to its Llama 2 model. The company has not published concrete details on Llama 3 at the time of writing, though the post notes that training is “ongoing.”

Meta said it will also use the infrastructure for AI research and development.

Meta’s long-term goal is to build AGI, or advanced machine intelligence as its chief AI scientist Yann LeCun prefers to call it. The blog post states that Meta is scaling its clusters to power those AGI ambitions.

Meta plans to continue building out its AI infrastructure. The company says that by the end of 2024 it will have 350,000 Nvidia H100 GPUs and that, combined with the rest of the hardware across its portfolio, its clusters will offer compute power equivalent to nearly 600,000 H100s.

Read more

Inflection's Pi Chatbot Gets Major Upgrade in Challenge to OpenAI

Inflection has upgraded its Pi chatbot with a new underlying model that approaches GPT-4’s performance while using only 40% of the training compute.

Inflection-2.5 is designed to power natural language conversations that are empathetic and safe. Compared to its predecessor, Inflection-2.5 boasts improved coding and mathematics abilities.

The new model lets Pi users handle a wider range of tasks, from discussing current events to drafting business plans and getting local restaurant recommendations.

Inflection-2.5 achieves near-GPT-4-level results on benchmarks including the popular MMLU, which evaluates a model’s language understanding capabilities.

While it does not beat GPT-4, Inflection-2.5 surpasses Inflection-1 in performance.

Read more

For the full roster of AI news, go to AI Business or sign up for our email newsletter.

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

