Meta, IBM Working With Startup to Test AI Model Safety

HydroX AI is working with big-name tech firms and the AI Alliance to evaluate generative AI models in industries such as health care and finance

Ben Wodecki, Jr. Editor

July 12, 2024

2 Min Read
Digital representation of AI security (Getty Images)

HydroX AI, a startup developing tools to secure AI models and services, is teaming with Meta and IBM to evaluate generative AI models deployed in high-risk industries.

Founded in 2023, the San Jose, California-based company built an evaluation platform that lets businesses test their language models to determine their safety and security.

HydroX will work with Meta and IBM to evaluate language models across sectors including health care, financial services and legal.

The trio will work to create benchmark tests and toolsets to help business developers ensure their language models are safe and compliant before being used in industry-specific deployments.

“Each domain presents unique challenges and requirements, including the need for precision/safety, adherence to strict regulatory standards and ethical considerations,” said Victor Bian, HydroX’s chief of staff. “Evaluating large language models within these contexts ensures they are safe, effective, and ethical for domain-specific applications, ultimately fostering trust and facilitating broader adoption in industries where errors can have significant consequences.”

Benchmarks and related tools are designed to evaluate the performance of a language model, providing model owners with an assessment of their model's outputs on specific tasks.


HydroX claims existing tests and tools are insufficient for model owners to ensure their systems are safe for use in high-risk industries.

The startup is now working with two major tech companies that have experience working on AI safety.

Meta previously built Purple Llama, a suite of tools designed to ensure its Llama line of AI models is deployed securely. IBM, meanwhile, was among the tech companies that pledged, at the recent AI Safety Summit in Korea, to publish the safety measures they take when developing foundation models.

Meta and IBM were founding members of the AI Alliance, an industry group looking to foster responsible open AI research. HydroX has also joined and will contribute its evaluation resources while working alongside other member organizations.

“Through our work and conversations with the rest of the industry, we recognize that addressing AI safety and security concerns is a complex challenge while collaboration is key in unlocking the true power of this transformative technology,” Bian said. “It is a proud moment for all of us at HydroX AI and we are hyper-energized for what is to come.”

Other members of the AI Alliance include AMD, Intel, Hugging Face and universities including Cornell and Yale.


About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
