Stories AI Business readers liked the most this week
Here are the most popular stories of the week:
In 2011, Gert-Jan Oskam was paralyzed from the hips down by a spinal cord injury sustained in a motorcycle accident. But the Dutch patient has regained the ability to walk after receiving an implant that acts as a 'digital bridge' between his brain and spinal cord. AI was the key.
The device, called a brain-spine interface, picks up Oskam's intention to walk from electrical activity in his cortex. The signal travels to an external computer he wears and is then relayed to a spinal implant.
The key was an ML-powered thought decoder, which linked Oskam's thoughts to the implants in his skull and spine. "It's still very much the early days, but as a proof of concept in a human being, I think it's a huge step forward," Nandan Lad, a neurosurgeon at Duke University, told Science.
Nvidia CEO Jensen Huang said the CPU-dominated computing era is ending and two new trends are rising simultaneously: Accelerated computing and generative AI.
He believes data centers will transition from CPU-based infrastructure for general-purpose computing to GPU-based accelerated computing to handle the heavy workloads of domain-specific generative AI. He credits the shift to deep learning, which he calls a “new way of doing software.”
“This is really one of the first major times in history a new computing model has been developed and created,” said the chipmaker’s CEO at the recent Computex conference in Taiwan.
In a two-hour keynote, he showed off a new AI supercomputer and the new Grace Hopper Superchip, and announced partnerships with SoftBank and WPP, among other unveilings.
OpenAI is launching a global contest to help it design a democratic process that would let the public weigh in on rules AI models must abide by, over and above any legal restrictions imposed by governments.
Many questions are too nuanced for laws to address, such as “Under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures?” Another is, “How should disputed views be represented in AI outputs?”
OpenAI is asking individuals, teams and organizations to submit proofs of concept for a “democratic process that could answer questions about what rules AI systems should follow.”
Ten winners will each receive $100,000. The application deadline is June 24.
In an essay for AI Business, Marko Pukkila, VP analyst at Gartner, writes that “while the ChatGPT AI capabilities are impressive, there are pitfalls, and its output is far from perfect. IT leaders should gain an understanding of the tool’s utility, supervision requirements and trust issues, and then start with a data quality assessment and master data clean-up effort.”
He said IT leaders must be aware of the following ChatGPT risks:
- Utility of responses depends on prompt quality
- It requires end-user supervision
- Real-life encounters
AI Business provides a summary of the latest developments in AI regulation in the EU, U.S., U.K. and China, plus links to key readings.