Privacy Reigns: Analysts Emphasize Oversight of Generative AI

Regulatory pressures add to the headache of generative AI responsibilities

Ben Wodecki, Jr. Editor

June 29, 2023


At a Glance

  • Omdia analysts stress the importance of transparency and accountability measures in building trust for generative AI systems.
  • Analysts discuss the importance of human oversight, scrutiny, and fine-tuning models using proprietary data.

Fine-tuning large language models on proprietary data and the need for human oversight and scrutiny of generative AI outputs are among the key privacy focuses for generative AI, according to analysts from Omdia.

During a recent webcast on applying generative AI in the enterprise, Omdia analysts stressed the need for transparency, governance and accountability measures to build trust in generative AI systems and address potential regulatory pressures.

Curt Franklin, principal analyst for enterprise security management at Omdia, said external data sources “will be critical” to making a “richer” model but warned that enterprises must be mindful before implementation.

“If you are going to be using the results in any number of ways, you have to be able to screen them to make sure that you do not inadvertently violate copyright, trademark, or compromise trade secrets from someone.

“For example, what Microsoft has done with their AI Bing search results (is) they actually give the reference where a particular part of an answer comes from. Something like that is very useful.”

Bradley Shimmin, Omdia’s chief analyst for AI and data analytics, agreed, saying such referencing would offer “a verify, then trust scenario” that users need in order to understand what it means to interact with a large language model.

Shimmin continued: “The big issue in analytics and data management is how to democratize the understanding of data and elevate literacy for data in the enterprise.

“We've been working on that for some time, and we're still a long way away. My suggestion is for those considering building large language models to focus on the output and put all of their attention on ensuring quality, security, privacy and accuracy.”

Industry focus: Health care and life sciences

Offering an expert example of industry-specific applications was Andrew Brosnan, Omdia principal analyst covering AI applications in life sciences.

He said that patient privacy is “the number one issue” in using AI tools in a health care setting.

Brosnan said some medical associations, such as the Australian Medical Association, have banned doctors from using ChatGPT.

“There's a whole host of issues currently with some of these models in health care, privacy being one of them," as well as “accuracy, transparency, bias, confidentiality of intellectual property, patient privacy and safety and so on," Brosnan said.

“A lot of these models we see coming into the health care sector have the ability to go through large swaths of unstructured text and pull out patterns and correlations in that data, helping to find these correlations and compile reports rather than having to do it manually.”

Brosnan outlined several examples of emerging use cases of tools like ChatGPT in the life sciences industry, including a company integrating knowledge graphs with a conversational ChatGPT-like interface to help with hypothesis generation for drug discovery.

“There's a company using ChatGPT and fine-tuning it with proprietary pharma data,” he added. "They're using that to help with clinical trial feasibility and in clinical trial design to help them identify patterns in past clinical trials that led to successful completion of trials in this space.”

AI model fine-tuning gains traction

Brosnan’s reference to fine-tuning models with proprietary data was a key concept raised by the Omdia analysts.

By focusing on data quality and prompt engineering, enterprises could achieve consistent and reliable outputs from generative AI models, they advised.

Shimmin said risk-averse enterprises seeking to safely explore and learn about LLMs should look past large-scale AI models like Bloom and GPT-4 and instead focus on smaller instruction-tuned models that can be fine-tuned.

"Don't think that just because they're super sexy and cool and do a lot of things that Bard and ChatGPT are going to be the right solution,” the chief AI analyst said. “You have to pick the solution that’s going to give you the results you want repeatedly.”

Franklin added that for those enterprises wanting repeated results from their AI models, considerations around prompts need to be front and center.

He said, “The vaguer your prompt, the more likely it is that you're going to get awfully interesting output that can vary from instance to instance. This is where the actual engineering of prompts comes in. Make your prompts consistent and you're much more likely to get consistent output.”
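Franklin's point can be illustrated with a minimal sketch. The function names and wording below are hypothetical, not drawn from any Omdia material: the idea is simply that filling a fixed template, rather than phrasing each request ad hoc, removes one source of run-to-run variation in model output.

```python
# Hypothetical sketch of prompt consistency: every request is built from
# the same fixed template, so identical inputs yield identical prompts.

def build_prompt(product: str, audience: str) -> str:
    """Fill a fixed template so each request is phrased the same way."""
    return (
        "You are a product copywriter.\n"
        f"Write exactly three bullet points describing '{product}' "
        f"for an audience of {audience}.\n"
        "Use plain language and no superlatives."
    )

# Identical inputs always produce the identical prompt string,
# which is one precondition for repeatable model output.
first = build_prompt("a password manager", "small-business owners")
second = build_prompt("a password manager", "small-business owners")
assert first == second
```

A vague, free-form prompt assembled differently each time gives the model more room to vary; a template like this constrains the task, format, and tone up front.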

Catch the next webinar in the series

Omdia analysts were taking part in the first in a series of online webcasts in conjunction with AI Business. Watch the full session on demand.

The next webcast on generative AI will continue the discussion on enterprise adoption and privacy. Stay tuned for the date of the webcast.


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
