June 1, 2023
To make sure the public and regulators are aware of the existential dangers posed by AI, global leaders and AI luminaries signed yet another open letter warning of future risks.
Around the same time, China’s Xinhua news agency reported that Chinese President Xi Jinping is calling for an acceleration in the modernization of China’s national security system and capacity. This includes improving guardrails around the use of AI and online data.
China is widely considered the West’s chief AI competitor and potential adversary. Xi's latest mandates are ostensibly aimed at China's own sovereign security, rather than global safety.
Xi made the remarks at the first meeting of the National Security Commission under the 20th CPC Central Committee. At the meeting, members urged “dedicated efforts” to improve the security governance of internet data and artificial intelligence.
The mandates come as China’s national security issues are “considerably more complex and much more difficult to be resolved,” the group said. As such, China must be prepared to handle “high winds, choppy waters, and even dangerous storms.”
At the meeting, members called for proactively shaping a “favorable” external security environment for China to “better safeguard its opening up and push for a deep integration of development and security.”
In San Francisco, the nonprofit Center for AI Safety announced on May 30 that a “historic” collection of AI experts signed its open letter, including Turing award winners Geoffrey Hinton and Yoshua Bengio, as well as OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Microsoft co-founder Bill Gates and many others.
The letter consists of a single statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Notably, the nonprofit said Meta leaders did not sign the letter.
In late March, many of the same signatories signed an open letter from the Future of Life Institute warning of similar existential risks posed by advanced AI. Altman and other AI leaders have since been on a world tour urging stronger regulation of AI systems above a certain capability threshold. The U.S., for its part, has called for public vetting of advanced AI models from OpenAI, Google, Microsoft and others.