OpenAI Launches Global Contest to Help it Craft AI Rules
It plans to award 10 winning teams with $100,000 each
At a Glance
- OpenAI is launching a global contest to help it develop governance rules for AI. Each of the 10 winners gets $100,000.
OpenAI is launching a global contest to help it design a democratic process that would let the public weigh in on the rules AI models must abide by, over and above the legal restrictions imposed by governments. Ten winners will get $100,000 each.
“Beyond a legal framework, AI, much like society, needs more intricate and adaptive guidelines for its conduct,” according to a company blog post.
Many questions are too nuanced for laws to address, such as “Under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures?” the authors wrote. Another is, “How should disputed views be represented in AI outputs?”
OpenAI is asking individuals, teams and organizations to submit proofs-of-concepts for a “democratic process that could answer questions about what rules AI systems should follow.”
The company defines a democratic process as one in which “a broadly representative group of people exchange opinions, engage in deliberative discussions, and ultimately decide on an outcome via a transparent decision making process.”
As examples to follow, it cited crowdsourced models such as Wikipedia, Twitter Community Notes, MetaGov and pol.is.
OpenAI encouraged teams to be innovative, whether building off known methodologies or coming up with “wholly new approaches.”
The deadline to apply is June 24, 2023, at 9 p.m. PST. Winning teams will be asked to develop a proof of concept or prototype of their idea while engaging at least 500 participants, and must publish their findings by Oct. 20, 2023. Any technology developed must be made open source.
Vetting the applications will be Colin Megill, co-founder of pol.is; Helene Landemore, political science professor at Yale; and Aviv Ovadya of Harvard’s Berkman Klein Center.
“Ultimately, designing truly democratic processes is a high bar to meet, and we view our efforts as complements rather than substitutes for regulation of AI by governments,” the OpenAI authors said.
OpenAI recently stressed the urgent need to properly govern the coming AI superintelligence. Its recommendations included crafting a democratic process to let the public weigh in on guardrails.
Collective Intelligence
OpenAI also said it is working with the Collective Intelligence Project (CIP), which develops crowdsourced governance models for emerging technologies such as generative AI. CIP’s central premise is that it is a false choice to treat safety, progress and broad participation as mutually exclusive: new technology can be made safe without sacrificing the other two.
CIP has created what it calls ‘Alignment Assemblies,’ collaborations intended to help develop people-centric AI. It believes traditional ways of handling emerging technology are no longer enough, given the “society-scale consequences” that could arise from AI.
Traditional ways typically “leave the task of building technology up to technologists, directing what to build up to investors, and steering technology away from risks up to regulators, and then assume something like collective good will result,” according to the CIP.
However, this approach “breaks down when dealing with society-scale consequences that are increasingly likely to result from new technologies” and “won’t be enough to deal with AI.”
“The speed, scale, and impact of AI don’t bode well for sticking with the status quo and hoping it ends up in a place that incorporates the public good by default,” the group added.
CIP said pilots of these assemblies will launch in June and September.