Avoid FOMO: 4 Key Steps to Implementing Generative AI

An opinion piece by the chief technologist of CDW U.K.

Tim Russell, Chief Technologist, CDW UK

December 22, 2023


It has been just a year since the world was introduced to ChatGPT, and the rush to implement generative artificial intelligence (AI) capabilities within companies is already on. One-third of organizations already use generative AI regularly in at least one business function, and there is pressure on businesses to evaluate, implement and expand its use. However, AI implementation comes with its own set of risks: managing costs and ROI, inaccuracies in output, data security issues and, critically, its impact on employees and customers.

Business leaders must avoid adopting AI out of a ‘fear of missing out’ (FOMO) and should instead develop a strategic decision-making process to evaluate where and how AI can be most successfully integrated into the organization.

There are effectively four steps involved in enabling and safely using AI in your environments.

  • Establish and evaluate AI readiness: Organizations should carry out a readiness assessment of the systems, data sets and processes they already have in place to define how prepared they are for AI.

  • Educate users: Training must be available to employees, and employees must actually complete it. That training must make clear who owns responsibility for AI content: even if AI has created content at your request, it is still the individual’s responsibility to ensure the output is correct, unbiased and suitable for use.

  • Ensure data protection: Tools to prevent data loss already exist. Organizations must ensure that any information employees submit to an AI engine passes through a data loss prevention (DLP) control, ideally one that can distinguish between content destined for a public AI engine and content destined for a private one (see the sketch after this list).

  • Understand private AI capability: Many of the consumer AI engines in use today are public services built on data sets chosen by their creators. A private AI capability can be modeled on internal, pre-defined data sets, keeping results aligned to a specific business or sector and lowering the risk of hallucination.
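To make the data-protection step more concrete, the snippet below is a minimal, hypothetical sketch of the kind of check a DLP control might apply before a prompt leaves the organization. The pattern list, the function name and the "public"/"private" labels are illustrative assumptions, not a description of any particular DLP product.

```python
import re

# Hypothetical patterns a DLP control might flag; real deployments rely on
# far richer classifiers and policy engines than two regular expressions.
SENSITIVE_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),             # email addresses
    re.compile(r"\b(confidential|internal only)\b", re.I),  # policy markers
]

def route_prompt(prompt: str, destination: str) -> str:
    """Decide which engine a prompt may be sent to.

    `destination` is the user's intended target, "public" or "private"
    (illustrative labels). Prompts containing sensitive content are kept
    off public engines.
    """
    contains_sensitive = any(p.search(prompt) for p in SENSITIVE_PATTERNS)
    if destination == "public" and contains_sensitive:
        return "private-engine"  # redirect rather than let the data leak
    return "private-engine" if destination == "private" else "public-engine"

print(route_prompt("Summarize our confidential Q3 forecast", "public"))
# -> private-engine
```

In practice, a real DLP platform would combine content classification, user policy and logging; the point of the sketch is simply that the routing decision happens before the prompt ever reaches a public engine.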

Establishing readiness

Any investment in technology must support a specific business outcome; just because it is the latest and greatest piece of tech does not mean your business needs it. As businesses navigate the AI landscape, they often face pressure from others within the organization or from customers demanding the newest and most innovative technologies. The key is for organizations not to give in to the hype.

There are a few key questions that every business leader should ask themselves prior to implementing AI solutions. First, what are the business’s overall strategy and goals? Any AI investment should be aligned with them. Next, where could AI offer quick value: is it user-enablement AI or system-based AI?

Also, which types of AI implementation offer maximum gain? Do you gain more advantage from giving users an AI tool, or from applying AI to customer interaction and central data analysis? Globally, only a small percentage of stored data is ever analyzed; what could AI tell us if it analyzed more of it? Ultimately, organizations will have their own metrics of success. There is no ‘one size fits all’, but there are some simple concepts to understand and adopt.

Educating users

AI adoption in the workplace should be a positive employee experience, granting users new capabilities and skills. But first, everyone engaging with AI should understand its capabilities and limitations. There have been several data mismanagement issues over the past year related to generative AI that underscore the need to train employees to maintain a vigilant attitude toward data ownership when using AI for content creation.

Ultimately, AI requires human oversight. Employees should also be educated on when and how AI should be applied. This includes providing employees with a list of suitable scenarios and establishing checkpoints to ensure data accuracy. Employees must understand the need to secure data generated by AI to prevent sensitive information from leaking into the public domain.
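One way to picture those checkpoints is a simple approval gate: AI-generated content cannot be released until a named reviewer has signed it off. The sketch below is a hypothetical illustration; the class and field names are assumptions, not a prescribed workflow.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDraft:
    """AI-generated content awaiting human review (illustrative sketch)."""
    text: str
    reviewed_by: Optional[str] = None
    approved: bool = False

    def approve(self, reviewer: str) -> None:
        # A named person takes ownership: accuracy and bias checks happen here.
        self.reviewed_by = reviewer
        self.approved = True

def publish(draft: AIDraft) -> str:
    """Refuse to release AI output that has not passed a human checkpoint."""
    if not draft.approved:
        raise PermissionError("AI-generated content requires human sign-off")
    return f"Published (reviewed by {draft.reviewed_by}): {draft.text}"

draft = AIDraft(text="Quarterly summary drafted by the AI assistant")
draft.approve(reviewer="j.smith")
print(publish(draft))
```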

Ensuring data protection

The results of generative AI are only as good as the data on which the model is trained. Rushing into AI implementation without proper preparation can produce incorrect outputs, and the resulting damage can far exceed the initial cost of implementation. Running an entire business, or specific processes, off flawed AI models can have severe consequences.

If you can clearly articulate the three Ps you have in place around AI (Purpose, Policies and Protections), gaining acceptance from both internal and external audiences becomes much simpler.

Adopting technology for its own sake can lead to a fragmented understanding of its capabilities, and individuals may gravitate toward familiar technologies rather than those best suited to the organization's needs. The key to successful AI implementation lies in determining the correct framework by measuring its value and impact against specific organizational goals.

Implementing private AI

Organizations rightly do not want their data shared with public cloud AI providers that will use it to train their own models. Private AI keeps data within the organization, letting companies reap the benefits of AI for process efficiency while maintaining ownership of their data.

With private AI, users can purpose-build an AI model to deliver the results they need, trained on the data they have and able to perform the behaviors they want. The data never escapes their control. Users get unique models and their data benefits only them and their customers, not their competitors or a public cloud provider.
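As a rough illustration of that architecture, the sketch below sends a prompt to a model hosted on internal infrastructure rather than to a public service. The endpoint URL, payload shape and response format are assumptions made for the example, not a specific product's API.

```python
import requests

# Hypothetical internally hosted inference endpoint; the URL, payload shape
# and response format are assumptions made for illustration only.
PRIVATE_AI_URL = "https://ai.internal.example.com/v1/generate"

def ask_private_model(prompt: str, timeout: float = 30.0) -> str:
    """Send a prompt to a model hosted inside the corporate network.

    Because the endpoint runs on internal infrastructure and the model is
    trained on the organization's own data sets, the prompt and response
    never leave the company's control.
    """
    response = requests.post(
        PRIVATE_AI_URL,
        json={"prompt": prompt, "max_tokens": 256},
        timeout=timeout,
    )
    response.raise_for_status()
    return response.json()["text"]
```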

In a landscape driven by AI hype, it is easy to get caught up in a race to implement AI before the organization is ready to adopt it. By establishing a framework for evaluating adoption (understanding the goals of the business, preparing the workforce, and ensuring the systems are secure and the right solutions for the business), organizations can ensure AI becomes a transformative, rather than a disruptive, solution.


About the Author(s)

Tim Russell

Chief Technologist, CDW UK

Tim Russell is the chief technologist of CDW.
