In a rare letter, Sam Altman, Greg Brockman and Ilya Sutskever outline the ‘special treatment’ needed to curb superintelligence’s powers
In a rare letter, top executives of ChatGPT-maker OpenAI jointly penned a warning about a coming “superintelligence” they compare to nuclear energy, and suggested outside-the-box ways to curb its power – before it is too late.
“Given the picture as we see it now, it’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains,” wrote CEO Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever, the company’s co-founders.
This “superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” they said. “Given the possibility of existential risk, we can’t just be reactive.”
While AI pioneers such as Geoffrey Hinton and Stuart Russell have warned of the serious threat AI poses to society, OpenAI’s co-founders should know better than anyone how real that threat is: they developed the very branch of AI that is giving rise to these questions.
The advent of the GPT-4 large language model, in particular, sparked fears in the AI community, resulting in an open letter calling for a 6-month hiatus in developing powerful AI systems.
OpenAI has not disclosed GPT-4’s parameter count, but the model is widely believed to be far larger than its predecessors (GPT-3 has 175 billion parameters). GPT-4 is much better at understanding natural language and is highly versatile; Microsoft researchers say it can even reason.
“We can have a dramatically more prosperous future, but we have to manage risk to get there,” OpenAI’s co-founders wrote.
The authors believe that this superintelligence needs “special treatment and coordination” to curb its powers, hinting that it is too dangerous and too smart to be reined in properly through conventional regulatory methods.
They propose the following paths:
Major governments around the world could set up a ‘project’ that current AI efforts would join, or the AI community could agree to limit the annual pace of capability growth at the frontier of AI. Companies would be held to an “extremely high standard” of acting responsibly.
The International Atomic Energy Agency was created to manage the safe use of nuclear power, after scientists called for a global regime to rein in the technology to avoid a future nuclear war.
OpenAI’s co-founders called for an IAEA-like agency to rein in this coming superintelligence – AI more capable even than artificial general intelligence – because it poses an existential threat to humanity comparable to nuclear war.
This agency would oversee AI efforts above a certain capability threshold, as measured by the compute required or energy used, for example. It could inspect AI systems, require audits, test for compliance with safety standards and restrict deployment, among other responsibilities.
Companies and countries could get a head start by putting together and implementing the ‘elements’ such an agency might one day require.
“It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say,” they wrote.
The three want people all over the world to “democratically decide” on the “bounds and defaults” for AI systems.
While they do not yet know how to design such a mechanism, they confirmed OpenAI plans to experiment with its development. “Individual users should have a lot of control over how the AI they use behaves,” they said.
Around the same time as the blog post was published, Brockman was speaking at AI Forward in San Francisco, an event put on by Goldman Sachs and SV Angel.
Brockman said OpenAI was looking to Wikipedia for inspiration. Wikipedia is a free online encyclopedia that any user can edit, with citations backing its entries. Brockman said such a concept could help “democratic decision-making” about AI.
The newly published post from OpenAI’s senior leaders comes amid a flurry of global interest in AI governance in the wake of the rise of generative AI.
Most recently, G7 leaders agreed to form a working group to establish the ‘Hiroshima AI Process’ to set guardrails around AI – arguably the fastest global regulatory coordination yet seen for an emerging technology.
Altman urged Congress to pass more AI regulation during a recent congressional hearing. He was also among the AI leaders who met with Vice President Kamala Harris, where it was proposed that the public be allowed to vet models from major players in the AI space.
In terms of legislation, Senate Majority Leader Chuck Schumer has already proposed potential rules for AI deployments, which would require companies to have prospective AI tools audited by a team of independent experts.
The jurisdiction furthest ahead on AI governance is the EU, whose AI Act is nearing fruition. The legislation would impose strict rules on systems that impact citizens’ rights.
The U.K., meanwhile, has left governance up to individual regulators, opting for a ‘light touch’ approach, much to the dismay of some workers' unions.