December 8, 2023
AI Business brings you the latest news and insights from across the AI world.
This week’s roundup covers new RAI benchmarks for companies testing the trustworthiness of their models and more.
The Responsible AI Institute (RAI) has launched a series of benchmarks for companies to test their AI products.
The nonprofit’s Responsible AI Safety and Effectiveness (RAISE) benchmarks are designed to assist companies in assessing AI product integrity.
Three initial benchmarks have been released:
RAISE Corporate AI Policy Benchmark: For measuring the scope of a company's AI policies and their alignment with the RAI Institute's model enterprise AI policy. It's designed to guide organizations in framing their AI policies, ensuring trustworthiness and risk considerations are included.
RAISE LLM Hallucinations Benchmark: Designed for organizations using large language models, this benchmark assesses the risk of hallucinations in a system.
RAISE Vendor Alignment Benchmark: For assessing the policies of supplier organizations and whether they align with the ethical and responsible AI policies of their purchasing counterparts.
The first three RAISE Benchmarks are available in private preview and will be generally available to RAI Institute Members in Q2 2024.
“In an era of accelerating AI advancements and increasing regulatory scrutiny, our RAISE Benchmarks provide organizations with the compass they need to chart a course of innovating and scaling AI responsibly, guiding them towards compliance with evolving global standards,” said Var Shankar, executive director of RAI Institute.
You can now try out Meta's AI image generator, which was previously only available through chats on the company’s social media apps. Users simply type in the textbox what they want to see, hit generate and the system will create four versions of the desired output.
All images feature a watermark that reads 'imagined with AI'. Meta has been working on watermark systems to ensure the origin of AI-generated images can be traced. In October, Meta researchers revealed they had created a watermarking system that is invisible to the human eye and cannot be cropped or edited out of images.
Meta Imagine is currently available only to users in the U.S. To access it, you'll need a Meta account; users can log in via their Facebook or Instagram account or with their Meta account's associated email.
Imagine is based on Emu, the AI image generation tool shown off at the company’s Connect event back in September. There, Meta CEO Mark Zuckerberg touted the speed of the company’s image generator, claiming it can create content in just five seconds.
Researchers at MIT have developed a customized onboarding process designed to help humans learn when an AI model’s output is trustworthy.
Developed with the MIT-IBM Watson AI Lab, the system is designed to assist professionals in making decisions based on AI output.
The system, through a combination of data analysis and natural language descriptions, teaches users how to work effectively with AI and provides feedback on performance and reliability.
It's designed to be flexible, capable of being used in various fields. MIT News offers a scenario of it being used in health care, with radiologists learning when to trust AI advice on X-rays.
“One could imagine, for example, that doctors making treatment decisions with the help of AI will first have to do training similar to what we propose,” said senior author David Sontag, professor and member of the MIT-IBM Watson AI Lab. “We may need to rethink everything from continuing medical education to the way clinical trials are designed.”
The nonprofit Girls Who Code has launched an AI songwriting experience to empower girls to get creative with AI.
Dubbed GirlJams, it’s designed to help girls write and create a song using multiple AI technologies, while simultaneously teaching them the basics of AI.
Created in partnership with Mojo Supermarket and digital studio Buttermax, the offering aims to inspire girls while letting them have fun with AI.
Tarika Barrett, CEO of Girls Who Code, said: “Artificial intelligence – with all its potential and power – is simply too important to be left solely in the hands of men.
“We’ve already experienced the consequences of a lack of diversity in various tech sectors, from bias to ineffective products to unchecked hate speech. But with AI in its early stages, we have the opportunity to make sure that never happens again.”
Investment giant BlackRock is set to launch a generative AI tool in January 2024.
The FT reports that BlackRock has built a copilot for its risk management systems, Aladdin and eFront. The tool can be used to unlock insights in a bid to boost efficiency and productivity.
The company is also building AI tools to help gather data for research reports and investment proposals, the FT reports.
BlackRock is also adopting Microsoft 365's AI tools across the company from early 2024.
Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.