Google Turns to AI to Write New Code; Workforce Reduced

The new code is reviewed and accepted by engineers, helping them do more and move faster

Heidi Vella, Contributing Writer

November 4, 2024

2 Min Read

More than a quarter of all new code at Google is generated by AI, company CEO Sundar Pichai said during a third quarter earnings call. 

The code is then reviewed and accepted by engineers, helping them do more and move faster, he said. 

“We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency,” he said.

Google reduced its workforce over the last year and now has more than 1,000 fewer employees than at the same time in 2023. 

Pichai said Google is uniquely positioned to lead in the AI era because of its differentiated, full-stack approach to AI innovation, which the company is seeing operate at scale.

This includes robust AI infrastructure, world-class research teams and broad global reach through its products and platforms, he added.

The call also noted that Google's results were up across the board, with cloud, services and overall Alphabet revenue all growing. Total revenue reached $88.27 billion, a 15% increase on the previous year's $76.69 billion, led by search and cloud services.

“Using a combination of our TPUs and GPUs, LG AI Research reduced inference processing time for its multimodal model by more than 50% and operating costs by 72%,” Pichai said.

It was also revealed that YouTube's ad and subscription revenue for the past four quarters topped $50 billion for the first time.


Pichai also highlighted that Google has significantly lowered the machine cost per query for AI Overviews, the feature that provides an AI-generated snapshot of a search with key information and links.

In the 18 months since it first began testing AI Overviews, the company has reduced the cost of these queries by more than 90% through hardware, engineering and technical breakthroughs, while doubling the size of its custom Gemini model, he said. Gemini is a family of multimodal large language models developed by Google DeepMind.

It was also announced that AI Overviews has started rolling out to more than 100 new countries and territories and will now reach more than 1 billion users monthly.

All seven of Google's products and platforms with more than 2 billion monthly users use Gemini models. That includes Google Maps, the latest product to surpass the 2 billion-user milestone.

Beyond Google's own platforms, Pichai said the company is making Gemini available on GitHub Copilot following strong demand.

He also said the company is building experiences in which AI can see and reason about the world around the user, including through a future product called Project Astra, which should be ready in 2025.


About the Author

Heidi Vella

Contributing Writer, Freelance

Heidi is an experienced freelance journalist and copywriter with over 12 years of experience covering industry, technology and everything in between.

Her specialisms are climate change, decarbonisation and the energy transition, and she also regularly covers everything from AI and antibiotic resistance to digital transformation.

