AI News Roundup: Google to Cut Staff Amid Shift to AI

Also: A premium version of Amazon’s Alexa is in the works and China plans to standardize the AI industry

Ben Wodecki, Jr. Editor

January 19, 2024

AI Business brings you the latest news and insights from across the AI world.

To keep up to date with coverage of all things AI, subscribe to the AI Business newsletter to get content straight to your inbox and follow the AI Business Podcast on Apple and Spotify.

Google to lay off staff to focus on AI

Google staff can expect more layoffs this year as the company shifts towards AI investments, according to The Verge.

CEO Sundar Pichai sent a memo to staff in which he said the company has “ambitious goals and will be investing in our big priorities this year.”

However, the company will have to make “tough choices,” including letting staff go, with Pichai writing that some roles will be removed to “simplify execution and drive velocity.”

The Google CEO’s memo comes after Google cut 12,000 jobs last January as it began its shift towards incorporating AI into high-priority product areas.

The layoffs expected in 2024 will not be as drastic as those last year, with Pichai writing that the reductions “will not touch every team.”

A premium Alexa is on the way

Amazon is reportedly moving to introduce a premium version of Alexa that is more conversational and more personalized.

Business Insider reported that the e-commerce giant is testing the offering with customers, with plans to launch a subscription service later this year.

Tests have reportedly hit snags, with the voice assistant returning lengthy and even false responses. After integrating Alexa into its home devices, Amazon has sought to expand the voice assistant to other offerings, with middling success.

China moves to standardize AI

China is moving to implement more than 50 standards for AI by 2026. The country’s Industry Ministry published draft guidelines for standardizing the AI industry that included both national and industry-focused standards.

Reuters reports that China wants to participate in the creation of around 20 international standards for AI, with the new guidelines focusing on "seizing the early opportunities from the development of the AI industry."

The ministry said that around 60% of the prospective standards should focus on serving "general key technologies and application development projects."

Local officials want more than 1,000 companies to adopt and advocate for these new standards.

China’s plans for standardizing the emerging AI market come as the country tries to stake its claim as an AI leader while fighting restrictions on importing hardware from the likes of Nvidia.

Capgemini tapping AWS AI tech

Capgemini is teaming up with AWS to accelerate generative AI adoption.

The pair signed a multi-year strategic collaboration agreement which will see the parties build industry-specific solutions to optimize large language models from Amazon Bedrock. The industries they will target include aerospace, automotive and financial services. The new offerings will help clients “achieve the best generative AI production costs.”

Some 30,000 Capgemini employees will be trained on AWS technologies over the next three years.

“With generative AI presenting new opportunities to accelerate innovation, it is imperative for clients to be able to scale their AI implementations quickly to drive tangible value, optimize investments, and meet the specific needs of their own industry,” said Jerome Simeon, head of global industries and group executive board member at Capgemini.

British standards body launches AI management standard

The British Standards Institution (BSI) has launched an AI management system standard designed to enable the safe and responsible use of AI.

The international standard (BS ISO/IEC 42001) was designed to address considerations such as non-transparent automatic decision-making and the use of machine learning in place of human-coded logic in system design.

The standard sets out measures for implementing an AI management system and continually improving it over time.

BSI CEO Susan Taylor Martin said: “AI is a transformational technology. For it to be a powerful force for good, trust is critical.”

WHO issues multi-modal model guidance

The World Health Organization (WHO) has released new guidance on the ethics and governance of large multi-modal models (LMMs) in health care.

WHO created over 40 recommendations for health care providers to ensure their use of multi-modal AI models is ethically sound and promotes and protects the health of populations.

The guidance covers areas including diagnosis and clinical care, patient-guided uses and clerical and administrative tasks. It outlines risks to health systems that providers must watch out for, including the accessibility and affordability of models and automation bias.

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said Dr. Jeremy Farrar, WHO chief scientist. “We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities.”

UK data watchdog examining generative AI

The U.K.’s data watchdog has launched a consultation into how the country’s data protection laws should apply to generative AI.

The Information Commissioner’s Office (ICO) is looking to examine the development and use of the technology, including the legality around training models on personal data scraped from the web.

The ICO wants to hear from stakeholders including developers and users, as well as legal advisors, consultants, civil society groups and other public bodies.

“The impact of generative AI can be transformative for society if it is developed and deployed responsibly,” said Stephen Almond, executive director for regulatory risk at the ICO. “This call for views will help the ICO provide industry with certainty regarding its obligations and safeguard people’s information rights and freedoms.”

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
