Experts Offer Tips for Securely Deploying Generative AI in the Enterprise

A recent webcast explored tools and frameworks to try when deploying generative AI

Ben Wodecki, Jr. Editor

August 30, 2023


At a Glance

  • Omdia analysts pick tools to try for securely deploying AI in the enterprise.

Analysts from Omdia joined AI Business this week for an interactive webcast, Generative AI in the Enterprise, which examined how to ensure data privacy and security when using generative AI technologies.

The panel offered tools and frameworks to try when deploying generative AI. Here are their picks:


Bradley Shimmin, chief analyst, AI and data analytics

Appen - https://appen.com/

“Focused on fine-tuning. Appen also offers red-teaming services, with domain experts testing for vulnerabilities and stress-testing established safeguards.”

Robust Intelligence - https://www.robustintelligence.com/

“Focuses on privacy/security through tools like differential privacy and federated learning.”
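For context on the techniques named above, below is a minimal sketch of differential privacy using the standard Laplace mechanism on a simple aggregate query. It is a generic illustration of the concept, not Robust Intelligence's API; the query, clipping bounds, and epsilon value are placeholder assumptions.

    import numpy as np

    def dp_mean(values, lower, upper, epsilon):
        """Differentially private mean via the Laplace mechanism.

        Clipping each value to [lower, upper] bounds the sensitivity of
        the mean, so Laplace noise with scale sensitivity / epsilon
        yields an epsilon-DP estimate.
        """
        clipped = np.clip(values, lower, upper)
        sensitivity = (upper - lower) / len(clipped)  # sensitivity of the mean
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return clipped.mean() + noise

    # Hypothetical example: privately estimate average spend for 1,000 users
    spend = np.random.uniform(0, 100, size=1000)
    print(dp_mean(spend, lower=0, upper=100, epsilon=0.5))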

Dromedary - https://github.com/IBM/Dromedary

“A large language model and accompanying development framework designed to help companies create self-aligned models using local data.”

Aporia - https://www.aporia.com/

“Specializes in regulatory requirement compliance across security, privacy and large language model output/outcomes.”

Arthur - https://www.arthur.ai/

“Using a pre-built test suite, customers can weigh the impact changes will have at a token level before moving to deployment; this same mechanism also provides gating for model outputs that breach established thresholds—a literal firewall for large language models.”
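To make the “firewall” idea concrete in general terms, here is a minimal sketch of threshold-based output gating: each candidate response is scored by a set of checks and blocked if any score breaches its configured limit. The rule names, stub scorers, and function signatures are hypothetical illustrations, not Arthur's actual API.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class GateRule:
        name: str
        score: Callable[[str], float]  # risk score in [0, 1]
        threshold: float               # block when the score exceeds this

    def gate_output(response: str, rules: list) -> str:
        """Return the response only if every rule scores below its threshold."""
        for rule in rules:
            if rule.score(response) > rule.threshold:
                return f"[blocked: output failed '{rule.name}' check]"
        return response

    # Hypothetical rules; a real deployment would use trained classifiers
    rules = [
        GateRule("pii_leak", lambda text: 1.0 if "@" in text else 0.0, 0.5),
        GateRule("toxicity", lambda text: 0.0, 0.8),  # stub scorer
    ]

    print(gate_output("Contact me at alice@example.com", rules))  # blocked
    print(gate_output("The quarterly report is ready.", rules))   # passes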


Curtis Franklin, principal analyst, enterprise security management

Darktrace - https://darktrace.com/

“Antigena uses artificial intelligence to protect applications, including generative AI.”

Microsoft Counterfit - https://www.microsoft.com/en-us/security/blog/2021/05/03/ai-security-risk-assessment-using-counterfit/

“Aids in security testing for AI infrastructure. Used by Microsoft red teams.”

Dynatrace AIOps - https://www.dynatrace.com/platform/aiops/

“While this can be used to protect any number of cloud-based applications and services, it is being deployed as a tool for protecting cloud-hosted AI instances.”

Andrew Brosnan, principal analyst, AI applications in life sciences

Federated and swarm learning tools (a minimal sketch of the underlying approach follows this list):

Nvidia FLARE - https://developer.nvidia.com/flare

HPE Swarm Learning - https://www.hpe.com/us/en/hpe-swarm-learning.html

Owkin Substra - https://www.owkin.com/substra

Rhino Health - https://www.rhinohealth.com/

IBM and Microsoft also have solutions. 
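As a sketch of the idea the frameworks listed above productize, here is a minimal federated-averaging (FedAvg) loop: each site trains a model on its own data, and only the updated weights, never the raw records, are aggregated centrally. This is a generic illustration with made-up data, not any one vendor's API; a real system would add secure aggregation and size-weighted averaging.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One site's step: gradient descent on a local linear model."""
        w = weights.copy()
        for _ in range(epochs):
            w -= lr * 2 * X.T @ (X @ w - y) / len(y)
        return w

    def federated_average(global_w, site_data, rounds=10):
        """FedAvg: sites share weight updates, never their raw data."""
        for _ in range(rounds):
            local_ws = [local_update(global_w, X, y) for X, y in site_data]
            global_w = np.mean(local_ws, axis=0)
        return global_w

    # Three hypothetical hospitals, each holding private samples of y = 3x
    rng = np.random.default_rng(0)
    sites = []
    for _ in range(3):
        X = rng.normal(size=(50, 1))
        sites.append((X, 3 * X[:, 0] + rng.normal(scale=0.1, size=50)))

    print(federated_average(np.zeros(1), sites))  # converges near [3.0]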

Ben Wodecki, junior editor, AI Business

Project IDX from Google - https://idx.dev/

“IDX is a new browser-based workspace for developers. You can try out AI coding tools and access existing applications from GitHub. There’s no Python support yet, but it’s coming; sign up for the waitlist to gain access.”



