AI Luminaries Make Case for Open Source AI

A statement supporting open source AI from Mozilla was signed by 150 AI luminaries and experts

Ben Wodecki, Jr. Editor

November 2, 2023

2 Min Read
Image: Unlocked lock (Getty Images)

At a Glance

  • AI stars like Meta's Yann LeCun and Google Brain's Andrew Ng advocate for open source AI as nations consider AI regulation.
  • They are among the 150 signatories to a statement that says "openness is an antidote, not a poison."
  • The statement comes as heads of state, AI leaders and experts convene in the U.K. to discuss AI safety and regulations.

Amid a week of discussions on AI safety and regulation, a group of AI luminaries advocated for open source AI, saying “openness is an antidote, not a poison.”

The likes of Meta’s Chief AI Scientist Yann LeCun, Google Brain co-founder Andrew Ng, Hugging Face co-founder Julien Chaumond and France’s digital affairs minister, Jean-Noël Barrot, signed on to the joint statement penned by Mozilla, the free software organization behind the Firefox browser.

“Yes, openly available models come with risks and vulnerabilities — AI models can be abused by malicious actors or deployed by ill-equipped developers,” the statement reads.

“However, we have seen time and time again that the same holds true for proprietary technologies — and that increasing public access and scrutiny makes technology safer, not more dangerous.”

Over 150 experts have signed the statement so far, including scientists, policymakers, engineers, entrepreneurs and activists. They aim to make openness a global priority in AI.

Attendees of the U.K. government’s AI Safety Summit have been calling for “urgent” action and tougher governance on future AI model development. But signatories to Mozilla’s statement argue that rushing towards the wrong kind of regulation could “lead to concentrations of power in ways that hurt competition and innovation.”

Related: UK's AI Safety Summit: What They Discussed

“Open models can inform an open debate and improve policy making. If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there,” the statement continues.

“We are in the midst of a dynamic discourse about what 'open' signifies in the AI era. This important debate should not slow us down. Rather, it should speed us up, encouraging us to experiment, learn and develop new ways to leverage openness in a race to AI safety,” it adds.

LeCun signs on

“Openness, transparency, and broad access makes software platforms safer and more secure,” LeCun said in a tweet. “This open letter from the Mozilla Foundation, which I signed, makes the case for open AI platforms and systems.”

LeCun’s signature comes after a week of Twitter spats with other AI luminaries over governance around AI safety.

The Meta chief AI scientist took issue with the likes of Yoshua Bengio, Geoffrey Hinton and Andrew Yao, who this week called for tougher governance of AI in order to reduce existential threats.

In a recent post, he argued that search engines still produce more accurate information than large language models.

“They both use the same public data. Search engines index it. Llama summarizes it approximately,” he said.

Related: AI Leaders Warn About Existential Risks Again - Now Armed with Facts


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

