Nvidia, Adobe and IBM to Let Third Parties Vet their AI Models

Nvidia, IBM and Salesforce are among the eight new companies signing up for the White House's responsible AI pledge.

Ben Wodecki, Jr. Editor

September 13, 2023


At a Glance

  • More major AI players including Nvidia and Adobe agree to the White House's pledge to build safe AI systems.

Nvidia, Adobe and IBM are among the new companies that have committed to the White House’s voluntary responsible AI pledge.

Eight new names join the likes of OpenAI, Meta and Google in agreeing to allow independent experts to evaluate their AI products before release.

The eight new companies joining are Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability AI.

The White House said the addition of the new names will “help drive safe, secure and trustworthy development of AI technology.”

Leaders from the eight companies met with Commerce Secretary Gina Raimondo, White House Chief of Staff Jeff Zients and other senior administration officials at the White House to announce the plan.

The likes of Adobe and Nvidia have now agreed to the following:

Ensuring product safety before public release – including allowing independent experts to perform tests on new AI systems before release. The signatories also pledge to share information – including best practices to ensure safety as well as technical collaboration – with governments, civil society and academia.

Prioritizing security when building systems – including investing in cybersecurity to protect unreleased models from insider threats. They also agreed to facilitate third-party discovery and reporting of vulnerabilities in their AI systems.


Earning public trust – covering the development of ‘robust’ technical mechanisms such as watermarking to identify AI-generated content. Companies agreeing to the pledge would also have to report on their AI systems’ limitations and “areas of appropriate and inappropriate use,” including both security risks and societal risks, such as the impact on fairness and bias.

The commitments are designed to advance common standards and best practices to ensure the safety of generative AI systems until regulations are in place.

They are also voluntary – meaning companies developing AI systems are not obliged to follow them. They serve as a placeholder while the Biden administration drafts legislation on safe AI.

Among the items being drafted is an Executive Order "on AI to protect Americans’ rights and safety.” The White House did not elaborate further.

The Biden administration said it would look to pursue bipartisan legislation on responsible AI – though plans may be pushed back as a government shutdown looms.

The pledge from the eight comes as the U.S. lawmaker leading the legislative push, Sen. Chuck Schumer (D-NY), hosts his own AI summit today. The Senate majority leader, who is behind the SAFE Innovation Framework, is holding a closed-door event to discuss plans to legislate AI. Attendees include Tesla and X owner Elon Musk, Meta CEO Mark Zuckerberg and Microsoft co-founder Bill Gates.


Schumer billed the meeting as a way of "talking about how and why Congress must act, what questions to ask and how to build a consensus for safe innovation.”



