Most Read: Fake AI-Generated Met Gala Images Fool Fans

Also inside, the Biden administration’s new international cyber strategy, plus detailed reports on cybersecurity concerns and AI deployments

Ben Wodecki, Jr. Editor

May 10, 2024

5 Min Read

Here are this week's most-read stories on AI Business:

Met Gala AI-Generated Images of Katy Perry, Rihanna Deceive Fans

Katy Perry and Rihanna did not attend this year's Met Gala in New York, but viral AI-generated images circulating on social media deceived many fans into believing the pair were at the star-studded event.

The images showed the singers donning green and white gowns adorned with flowers.

The images were, in fact, AI-generated. Even Katy Perry’s mother, Mary, fell for them, according to screenshots of a conversation posted on the singer’s Instagram.

Considered one of the biggest events in fashion, the Met Gala features A-list celebrities flaunting intricate fashion showpieces.

There were some giveaways that the images were fake. One was the carpet: the images showed blue and red carpets, while green fabric was used at this year's event.

The images also showed photographers with distorted faces and body parts.

But the main giveaway was that neither singer was in attendance. Perry later said she was busy with work, while Rihanna missed the event due to the flu.

Check out the photos in detail

Biden Administration Launches International Cybersecurity Strategy

The State Department has launched a new strategy designed to foster solidarity with international allies on cybersecurity and emerging technologies.


The U.S. International Cyberspace and Digital Strategy details the Biden administration’s framework for encouraging partners to more securely deploy technologies including AI and quantum computing.

Developed in tandem with other federal agencies, the strategy has three guiding principles:

  • Promote an inclusive cyberspace founded on international law, including human rights.

  • Integrate cybersecurity, sustainable development and technological innovation.

  • Adopt a policy approach that leverages diplomatic tools across the entire digital landscape.

Under the strategy, the Biden administration will collaborate with allies to establish an open and secure digital ecosystem and forge coalitions to thwart technological threats aimed at critical infrastructure.

The State Department is tasked with spearheading the strategy’s principles, working with global partners to foster collaboration.

Secretary of State Antony Blinken unveiled the strategy during a speech at the annual RSA conference this week.

“The U.S. will work with any country or actor that is committed to developing and deploying technology that is open, safe and secure, that promotes inclusive growth, that fosters resilient and democratic societies and that empowers all people,” Blinken said.

Read more about the new cybersecurity strategy

AI Use in Cyberattacks Raises Worker Cybersecurity Concerns

More than half of U.S. workers are concerned their organization will be hit by a cyberattack, according to a new EY survey.

The 2024 Human Risk in Cybersecurity Report surveyed 1,000 U.S.-based full- and part-time workers across the public and private sectors whose work requires the use of a computer.

The consulting firm found what it described as “widespread concerns” among workers about cyberattacks and the potential for AI to heighten risks.

Around one-third of respondents (34%) expressed concern that their own actions could leave their employers open to an attack.

EY found that younger workers, Gen Z and Millennials, were the most fearful of causing potential risks. Younger respondents also admitted to feeling unequipped to identify potential cyber threats in the workplace.

EY’s survey also asked workers their views on AI’s impact on cyberattacks: 85% of respondents said they felt AI was making cyberattacks more sophisticated, while 78% expressed concern that AI would increase the volume of attacks.

Explore the survey’s findings in detail

Firms Not Prepared to Deploy AI Models, Report Finds

A new Hewlett Packard Enterprise (HPE) report reveals that businesses eager to implement AI are struggling with the necessary processes required to deploy models effectively.

The report, titled Architect an AI Advantage, surveyed more than 2,400 IT leaders from 14 countries. Respondents worked at organizations with more than 500 staff and spanned industries including health care, manufacturing and financial services.

HPE found that businesses looking to implement AI are struggling to perform basic processes vital for preparing data for use in AI models.

Findings showed that just 7% of the 2,400 IT professionals surveyed can currently run real-time data synchronization, while only 26% can run advanced analytics applications.

Fewer than 60% of respondents reported that their business is currently capable of handling functions like accessing, storing and recovering data, which could slow a model’s development process.

Learn more about HPE’s report

Microsoft Blocks Police Use of OpenAI for Facial Recognition Cameras

Microsoft has updated the list of acceptable uses of its Azure OpenAI Service, barring law enforcement from using it in facial recognition cameras.

The Azure OpenAI Service provides Microsoft’s cloud customers with access to OpenAI models like GPT-4 Turbo and DALL-E.

Microsoft has now barred U.S. police departments from accessing the OpenAI models for “facial recognition purposes.”

While facial recognition systems rely on visual data inputs, OpenAI models like GPT-4 could be used to augment related processes. For example, a large language model could be used to improve user interfaces for facial recognition systems, generate natural language responses to queries or produce usage reports.

Just last month, Axon Enterprise, which develops technology for law enforcement and the military, including the first-ever Taser, unveiled an AI-powered tool that summarizes audio from an officer’s body camera.

Under Microsoft’s new rules, a law enforcement agency hoping to use OpenAI’s models for such a use case would be barred.

The Code of Conduct page for the Azure OpenAI service states that its models “must not be used for any real-time facial recognition technology on mobile cameras used by any law enforcement globally to attempt to identify individuals in uncontrolled, ‘in the wild’ environments.”

Read more about Microsoft’s new rule changes

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

Keep up with the ever-evolving AI landscape
Unlock exclusive AI content by subscribing to our newsletter!
