De-risking Generative AI for the Enterprise


Tony Stanford, 3M Northern European R&D Operations Leader

October 5, 2023


Generative AI has emerged as one of the most polarizing topics of our time: the optimism of enthusiasts is met with calls for caution about the technology's well-known shortcomings, even as usage continues to grow.

However, the vacuum left by the lack of regulation and understanding of the technology has put the onus on leading tech companies, the creators of generative AI, to consider its ethical applications.

That means each business must do its part by embedding responsible AI practices and a robust AI compliance framework into its operations. This includes controls for assessing the potential risk of generative AI use cases at the design stage and a means to embed responsible AI approaches across the business.
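To make this concrete, here is a minimal sketch of what a design-stage risk control might look like in code. The criteria, weights, and thresholds are illustrative assumptions, not 3M's actual framework; in practice they would be defined by the compliance team.

    # A minimal design-stage risk rubric (illustrative assumptions only).
    RISK_CRITERIA = {
        "handles_personal_data": 3,
        "output_reaches_customers": 2,
        "uses_external_api": 2,
        "automated_decision_making": 3,
    }

    def assess_use_case(answers: dict[str, bool]) -> str:
        # Sum the weights of every criterion the use case triggers.
        score = sum(weight for name, weight in RISK_CRITERIA.items() if answers.get(name))
        if score >= 5:
            return "high risk: full compliance review required"
        if score >= 2:
            return "medium risk: guardrails and human oversight required"
        return "low risk: standard approval"

    print(assess_use_case({"handles_personal_data": True, "uses_external_api": True}))
    # high risk: full compliance review required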

Set guidelines for employee use of generative AI

To ensure trust in AI systems, organizations must define ethical principles for AI development and build an effective governance structure around them that is led from the top.

For instance, 3M's Health Information Systems business has adopted an approach that puts guardrails in place, such as having a human review content before it is presented to customers or caregivers.
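As a rough illustration, the sketch below implements that kind of guardrail: generated drafts sit in a review queue, and nothing can be published until a human reviewer approves it. The names are hypothetical and do not reflect 3M's actual system.

    from dataclasses import dataclass
    from enum import Enum

    class ReviewStatus(Enum):
        PENDING = "pending"
        APPROVED = "approved"
        REJECTED = "rejected"

    @dataclass
    class Draft:
        content: str
        status: ReviewStatus = ReviewStatus.PENDING

    class ReviewQueue:
        def __init__(self):
            self._drafts: list[Draft] = []

        def submit(self, content: str) -> Draft:
            draft = Draft(content)
            self._drafts.append(draft)
            return draft

        def approve(self, draft: Draft) -> None:
            draft.status = ReviewStatus.APPROVED

        def publish(self, draft: Draft) -> str:
            # The guardrail: generated text never reaches a customer
            # or caregiver unless a human reviewer has approved it.
            if draft.status is not ReviewStatus.APPROVED:
                raise PermissionError("Draft has not passed human review")
            return draft.content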

It is also important to introduce policies and guidelines for use of generative AI applications by employees. These policies need to capture not only the use of proprietary business AI tools, but also potential use of third-party AI applications by employees using company data. In addition to such policies, we have introduced training across the organization to ensure all employees understand the implications of using company data with generative AI applications.

Another core component of an AI governance strategy is ensuring that the organization does not breach key compliance and privacy requirements by feeding sensitive data into external APIs provided by major AI platforms. This is particularly important in health care, where the integrity of patient data is critical, so any AI use that could pose even the slightest risk to patient data must be prevented.
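One common way to enforce this is a pre-flight check that blocks any outbound request containing sensitive markers before it ever reaches an external API. The sketch below is a minimal illustration using toy regex patterns; a real deployment would rely on a vetted PII/PHI detection service rather than these examples.

    import re

    # Toy patterns for illustration only; not a substitute for real PII/PHI detection.
    SENSITIVE_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "mrn": re.compile(r"\bMRN[:\s]*\d{6,}\b", re.IGNORECASE),  # medical record number
    }

    def assert_safe_for_external_api(text: str) -> None:
        # Raise before any text containing sensitive markers leaves the network.
        hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]
        if hits:
            raise ValueError(f"Blocked: possible sensitive data detected ({', '.join(hits)})")

    try:
        assert_safe_for_external_api("Summarize the visit notes for MRN: 0045821.")
    except ValueError as err:
        print(err)  # Blocked: possible sensitive data detected (mrn)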

Open-source LLMs cheaper but riskier

The best way to mitigate this risk is to avoid feeding any sensitive data into open-platform AI solutions. If an organization wants to experiment with such tools, the data must be completely anonymized and used in a controlled test environment to prevent accidental data leakage.
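In practice, anonymization for such experiments usually means replacing identifiers with consistent placeholders before any data enters the test environment. The sketch below is a minimal illustration with assumed patterns; production work should use vetted anonymization tooling.

    import re
    from collections import defaultdict

    class Pseudonymizer:
        def __init__(self):
            self._counters = defaultdict(int)
            self._mapping = {}

        def _placeholder(self, kind: str, value: str) -> str:
            # Reuse the same placeholder for repeated values so the
            # anonymized data stays internally consistent.
            if (kind, value) not in self._mapping:
                self._counters[kind] += 1
                self._mapping[(kind, value)] = f"<{kind.upper()}_{self._counters[kind]}>"
            return self._mapping[(kind, value)]

        def anonymize(self, text: str) -> str:
            text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
                          lambda m: self._placeholder("email", m.group()), text)
            text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b",
                          lambda m: self._placeholder("ssn", m.group()), text)
            return text

    p = Pseudonymizer()
    print(p.anonymize("Contact jane.doe@example.com, SSN 123-45-6789."))
    # Contact <EMAIL_1>, SSN <SSN_1>.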

A safer way to adopt generative AI is to develop in-house solutions that train Large Language Models (LLMs) on company data without using open-source solutions. However, this approach is typically more costly and more complex to execute, and it depends on the technical capabilities of the business.
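Keeping inference in-house can also be enforced in code, for example by allow-listing internal endpoints so that prompts containing company data can never be sent to an external platform. The host name below is a hypothetical stand-in for a self-hosted gateway.

    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"llm.internal.example.com"}  # assumed self-hosted gateway

    def check_endpoint(url: str) -> str:
        # Refuse any endpoint outside the company's private network.
        host = urlparse(url).hostname
        if host not in ALLOWED_HOSTS:
            raise ValueError(f"Refusing to send company data to external host: {host}")
        return url

    check_endpoint("https://llm.internal.example.com/v1/generate")  # accepted
    # check_endpoint("https://api.public-llm.example/v1/chat")      # would raise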

There are different types of LLMs, and businesses need to determine which approach best meets their needs. In many cases, an LLM is not needed to operationalize data insights, so generative AI adoption should be limited to genuine business needs and deployed only when required.

Our use and understanding of generative AI is still in its infancy, so organizations need to implement a variety of controls to manage AI risk and adopt effective mechanisms for de-risking the technology. Most importantly, they need to use the technology with caution and ensure that data integrity and ethics sit at the heart of their AI adoption strategies.


About the Author(s)

Tony Stanford

3M Northern European R&D Operations Leader

Tony Stanford is 3M's Northern European R&D Operations leader.

