This Week's Most Read Stories: Apple's Gen AI plans, G7 and the Pentagon 'Fire'
Catch up on this week's top news in a nanosecond
May 26, 2023
Here are the most popular stories of the week:
1. Apple, Amazon Ramping Up Generative AI Plans
Apple is ramping up its generative AI plans, with the company posting a wave of new job listings for machine learning specialists.
The listings, first spotted by TechCrunch, show the iPhone maker is seeking staff with experience in natural language processing and integrated systems.
The prospective roles include ‘Visual Generative Modeling Research Engineer,’ with new hires potentially tasked with working on generative modeling to power applications for video editing, motion reconstruction and avatar generation.
Amazon, too, is looking to expand its generative AI work, as evidenced by its own new job listings, first reported by Bloomberg.
Amazon wants engineers to improve the search functionality of its flagship e-commerce platform. The requisite new hires would work on creating “interactive conversational experience(s)” for Amazon that would allow customers to compare products and offer personalized suggestions.
2. G7, Generative AI and the ‘Hiroshima AI process’
In what could possibly be one of the fastest regulatory responses to an emerging technology, the industrialized nations that make up the G7 called for global cooperation to regulate generative AI.
Leaders from the U.S., U.K., France, Germany, Japan, Canada, Italy and the EU will be setting up the ‘Hiroshima AI process’ this year, in collaboration with the OECD and the Global Partnership on AI. This year's summit was held in Hiroshima, Japan.
The group is tasked with discussing AI governance, IP rights protections, transparency, action plans to address harms such as misinformation perpetrated by foreign agents, and the responsible use of AI.
3. Intel, HPE to Develop Generative AI Models
Chipmaker Intel is partnering with HPE, Argonne National Laboratory and others to develop a series of generative AI models for scientific research.
Announced at the ISC High Performance Conference in Germany, the partnership plans to use the Aurora exascale supercomputer for the project once it comes online this year. The AI models will have as many as one trillion parameters.
The generative AI models will be trained on general text, code, scientific texts and structured scientific data spanning disciplines such as biology, chemistry, materials science, physics, medicine and other sources.
The models will be used for varied scientific applications, including the design of molecules and materials and the synthesis of knowledge across millions of sources to inform new experiments in systems biology, polymer chemistry, climate science and cosmology.
4. Pentagon Explosion Image is Fake and AI-generated
An AI-generated image of an explosion near the Pentagon has caused a brief stock market dip in the latest deepfake incident to hit social media.
Images showing a grey plume of smoke near the headquarters of the U.S. Defense Department quickly spread on social media.
The Defense Department said the image was fake, but that did not stop it from spreading. The local fire department tweeted that no such incident had taken place.
However, the reaction saw a brief dip in the stock market, with the S&P 500 falling by 0.3% before recovering later in the day.
There has been no indication as to where the image originated. However, it was widely shared by verified accounts on Twitter, including the Russian state-owned media outlet RT. (Anyone can now be verified on Twitter thanks to Blue, the Elon Musk-designed subscription service.)
5. OpenAI Founders Warn AI ‘Superintelligence’ is Like Nuclear Power
In a rare letter, top executives of ChatGPT-maker OpenAI jointly penned a warning about a coming “superintelligence” they compare to nuclear energy, and suggested outside-the-box ways to curb its power – before it is too late.
“Given the picture as we see it now, it’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains,” wrote CEO Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever, who are also the co-founders.
This “superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” they said. “Given the possibility of existential risk, we can’t just be reactive.”
The authors believe that this superintelligence needs “special treatment and coordination” to curb its powers, hinting that it is too dangerous and too smart to be reined in properly through conventional regulatory methods.