Large Number of LLM Servers Reveal Sensitive Corporate, Health, Online Data

LLM automation tools and vector databases can be rife with sensitive data and vulnerable to theft

Nate Nelson, Contributing Writer

August 29, 2024


Hundreds of open source large language model (LLM) builder servers and dozens of vector databases are leaking highly sensitive information to the open web.

As companies rush to integrate AI into their business workflows, they sometimes pay too little attention to securing those tools and the information entrusted to them. In a new report, Legit Security researcher Naphtali Deutsch demonstrated as much by scanning the web for two kinds of potentially vulnerable open source (OSS) AI services: vector databases, which store data for AI tools, and LLM application builders, specifically the open source program Flowise. The investigation unearthed a bevy of sensitive personal and corporate data, unknowingly exposed by organizations stumbling to get in on the generative AI revolution.

"A lot of programmers see these tools on the internet, then try to set them up in their environment," Deutsch says, but those same programmers are leaving security considerations behind.

Read the full story on AI Business' sister publication Dark Reading >>>

About the Author

Nate Nelson

Contributing Writer, Dark Reading

Nate Nelson is a freelance writer based in New York City. Formerly a reporter at Threatpost, he contributes to a number of cybersecurity blogs and podcasts. He writes "Malicious Life" -- an award-winning Top 20 tech podcast on Apple and Spotify -- and hosts every other episode, featuring interviews with leading voices in security. He also co-hosts "The Industrial Security Podcast," the most popular show in its field.
