Vice President Harris met with the CEOs of Google, Microsoft, OpenAI and Anthropic

Deborah Yao, Editor

May 5, 2023


At a Glance

  • The U.S. will let "thousands" of community members and researchers scrutinize the AI models of tech giants to find flaws.
  • The public will vet the models for features that could harm society, and developers will have to fix them.
  • The White House also announced $140 million in additional funding to set up seven more National AI Research Institutes.

This week, the Biden administration said it will let the public vet the AI models of top AI companies, including Google, Microsoft, OpenAI and Anthropic, to make sure they will not cause harm to society.

Vice President Kamala Harris met with the CEOs of these four companies at the White House to emphasize “the importance of driving responsible, trustworthy and ethical innovation” in AI.

The White House has set up a process for the public to vet the AI models created by these four companies, plus Hugging Face, Nvidia, Stability AI and others, to make sure their models adhere to the White House’s AI Bill of Rights, released last fall, and its AI Risk Management Framework.

The White House said "thousands" of community partners and AI experts will vet these AI models using an evaluation platform developed by Scale AI. The public vetting will take place at the AI Village at Defcon 31, one of the largest hacker conventions in the world.

The government will not be involved in the vetting since “testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation,” the White House said.

The evaluation will help researchers and the public spot any harmful features in the models so the developers can fix them, the administration said.

Related: A Giant in AI Leaves Google, Fearing a Coming Dystopia

A senior White House official, who briefed reporters on background, said these harms include civil rights violations from biases embedded in housing and hiring decisions, privacy risks from enabling real-time surveillance, risks to trust in democracy from deepfakes, and risks to jobs and the economy as automation comes to white-collar work.

And since AI is a global technology, the official said it is “essential” for the U.S. to cooperate with the EU in scrutinizing AI’s impact on the public. Concurrently, the U.K.'s competition watchdog is set to conduct a major review of AI, examining, among other things, whether foundation models pose risks to competition and consumer protection.

The Biden administration’s announcements come as a giant in AI resigned from Google this week to warn about the existential threat posed by a technology he has spent his life advancing.

White House to vet U.S. use of AI, too

Also this summer, the Office of Management and Budget will release a draft of its policies on the government’s use of AI systems. The policies, which will be open to public comment, will do the following:

  • Set specific policies for federal agencies and departments to ensure the safe development and use of AI systems.

  • Empower federal agencies to use AI to advance their work and set an example for state and local governments and businesses.

Related: Going Beyond Sci-Fi: Why AI Poses an Existential Threat

But while the administration is stepping up scrutiny of AI models, it is also increasing its funding for AI research.

The National Science Foundation is launching seven new National AI Research Institutes, funded with $140 million. The U.S. will now have a total of 25 such institutes with a combined budget of around $500 million. These institutes will coordinate collaboration among universities, federal agencies, businesses and others to further develop AI in a responsible manner.


About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, the Wharton School and the Associated Press.
