UN Adopts First Global Artificial Intelligence Rules
The U.N. General Assembly adopted a resolution to promote rights-respecting AI systems and increase equitable access
The United Nations General Assembly on Thursday adopted a resolution designed to promote trustworthy AI, marking the first time the body has passed a measure on governing the technology.
The text, which was adopted without a vote, calls on U.N. member states to promote “safe, secure and trustworthy” AI systems and to refrain from using systems that could “pose undue risks to the enjoyment of human rights.”
“The same rights that people have offline must also be protected online, including throughout the life cycle of AI systems,” the text reads.
The resolution was introduced by the U.S. and co-sponsored by more than 120 other member nations, including South Korea, the U.K. and Germany. China and Russia also backed the measure, joining the consensus adoption.
The U.S.-led resolution aims to ensure “equitable access” to AI technologies to “promote, not hinder, digital transformation.”
It also calls on member states to increase funding for AI research tied to the Sustainable Development Goals, the U.N.’s set of global targets covering areas such as poverty, health, education and climate.
Linda Thomas-Greenfield, U.S. ambassador and permanent representative to the U.N., said the international community should “govern this technology rather than let it govern us.”
“Let us reaffirm that AI will be created and deployed through the lens of humanity and dignity, safety and security, human rights and fundamental freedoms,” Thomas-Greenfield said.
As part of the U.N.’s latest AI efforts, the General Assembly also called on the private sector and civil society to support governance frameworks for safe AI. U.N. Secretary-General António Guterres has previously called for a global AI watchdog that could monitor the technology much as the organization oversees nuclear weapons.
The resolution’s adoption follows growing efforts by world leaders to regulate AI. The EU’s AI Act, which imposes strict rules on systems that could harm citizens’ rights, completed its lengthy legislative journey earlier this month, passing into law.
The EU’s AI Act is stricter than other global governance efforts. In the U.K., oversight falls to individual sector regulators, while in the U.S., companies bear the primary burden of ensuring their systems are safe, with federal agencies holding authority to vet AI models before release.