The hubs will support British AI expertise in health care, chemistry, mathematics and other fields
The U.K. government announced today that it will spend nearly £90 million ($113 million) on nine new research hubs and a partnership with the U.S. on responsible AI.
Also, some £2 million ($2.5 million) of Arts and Humanities Research Council funding will be put towards supporting new research projects to define responsible AI across areas such as education, policing and the creative industries. Another £19 million ($23.8 million) will go towards 21 projects to develop responsible AI and machine learning solutions.
The government also unveiled plans to launch a steering committee to support regulatory activities across government. The committee will commence work in the spring.
The announcements follow the launch of the £100 million ($125.5 million) AI Safety Institute, unveiled at last year’s AI Safety Summit, which is tasked with evaluating the risks of new AI models.
Tamara Quinn, IP and AI partner at law firm Osborne Clarke, said: “Anyone hoping for fireworks from the government will have been disappointed. After nine months cogitating about this game-changing technology, the government's response could be viewed as underwhelming.”
“Rather than fireworks, the focus in today's announcement is on existing regulators applying their existing powers to tackle AI. It is unlikely that there would be sufficient time for new legislation in any case, with a general election expected later this year and priority already given to three major digital technology-focused bills”: the Digital Markets, Competition and Consumer Bill, the Data Protection and Digital Information Bill, and the Media Bill.
The U.K. government also set aside £10 million ($12.5 million) to upskill departmental regulators as it prepares to hand over AI governance duties.
These regulators are tasked with enforcing AI governance in their respective areas, and the funds will be used to help them develop research and monitoring tools to track risks in those sectors.
Regulatory bodies such as Ofcom and the Competition and Markets Authority now have until April 30 to publish their approach to managing AI. The regulators will have to disclose AI-related risks in their areas, detail their current skillsets and share plans for how they will regulate AI over the coming year.
Secretary of State for Science, Innovation and Technology Michelle Donelan said: “AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately.”
The U.K. government’s latest funding announcements come as it continues to position itself as a global AI leader.
To that end, Prime Minister Rishi Sunak hosted the AI Safety Summit last November, bringing together world leaders to reach an accord.
The government has also sanctioned a project to build one of Europe’s most powerful supercomputers to train AI. And AI luminaries like Turing Award winner Yoshua Bengio are among the AI experts advising the Prime Minister on next-gen AI models.
But a recent report from the House of Lords warned that the government’s approach to AI safety is too focused on large language models.
The House of Lords Communications and Digital Committee said the U.K. “must rebalance towards boosting opportunities while tackling near-term security and societal risks.”
Without broadening its safety efforts, the Lords’ report warned that the U.K. will “fail to keep pace with competitors, lose international influence and become strategically dependent on overseas tech firms for a critical technology.”