
The Risk of Placing Too Much Faith In AI

An opinion piece from the chief innovation officer of ServiceNow

The hype around artificial intelligence has reached a fever pitch, and a day does not seem to go by without another business leader touting how AI will solve everything from marketing to the supply chain.

But hang on. Are we placing too much faith in this technology without considering its current limitations?

There is no doubt that AI has improved at an unforeseen rate. But before AI can live up to sky-high expectations, we need to talk about its accuracy and transparency. And before rushing to implement AI in the workplace, businesses need to establish complete visibility into their technology suite.

Far from infallible

Recently I had ChatGPT draft me a bio for a conference. It did a decent job, but in the middle of the paragraph, it said I had an MBA from the University of Texas (which I do not) and that I was one of the founders of ServiceNow — which I am not.

The inaccuracies were not such a big deal in this little generative AI exercise. I know which of those facts are true and which are not, so I can easily fix them. But imagine applying the same technology to data that could be read by thousands of employees and millions of customers — especially when they cannot tell fact from fiction.

What would a company’s liability be if its AI confidently served clients responses that are not entirely factual? How would it ensure AI is not feeding workers completely wrong information at critical moments like, say, while performing an emergency repair? And let us not forget that as a company, you cannot afford AI’s tendency to have fever dream-like hallucinations at unexpected moments.

Part of the reason why people are so enchanted with ChatGPT is its ability to answer questions with certainty. Many business leaders assume AI is just searching a database or documents on the web (like Google does).

It does not. ChatGPT is trained to be a mimic that asks itself, "What do I predict the next part of this conversation would sound like?" and does a remarkable job of serving up plausible responses.

Accuracy is not its primary goal.
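To make this concrete, here is a deliberately tiny sketch (nothing like ChatGPT's actual architecture) of a bigram "language model": given a word, it predicts the most frequent next word in its training text. It optimizes plausibility, not truth, which is exactly the property described above. The corpus and function names are illustrative only.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation -- plausible, never verified."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

# Hypothetical training text: because "executive has an MBA" appears
# most often, the model will happily assert it about any executive.
corpus = ("the executive has an MBA the executive founded the company "
          "the executive has an MBA")
model = train_bigrams(corpus)
print(predict_next(model, "executive"))  # prints "has" -- frequent, not factual
```

The model never consults a source of facts; it only reproduces what was statistically common in its training data. Scaled up by many orders of magnitude, that is the same dynamic that produced my fictional MBA.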

Peering into the black box

In my AI-generated bio, I could guess how the GPT large language model arrived at those two wrong conclusions.

Since I have been with ServiceNow for over a decade, I have appeared in the news often enough alongside the founder, Fred Luddy, that the AI might have inferred I was a co-founder. And since many executives have MBAs, it might have made an educated guess that I do, too. Maybe some people who share my name even have MBAs from the particular university ChatGPT named.

When I asked how it had come to these conclusions, however, ChatGPT could only give me a generic response about it being a large language model learning from massive amounts of data. It declined to name specific sources.

This underlines the fundamental difference between AI and an internet search.

This opaqueness is one of the things that worries me most about using AI at work. Right now, users cannot pinpoint exactly how specific responses are generated. As more companies implement AI, we may see misinformation proliferate.

Without proper guardrails, enterprise AI might serve something not unlike Facebook feeds: relevant information popping up alongside deepfakes, without a clear view into how both are being generated and recommended.

Worth the growing pains

Despite its limitations, I am all for putting AI to work. It is already transforming the business landscape by improving efficiency, personalization, and decision-making capabilities. Automation has already eliminated many repetitive, mundane tasks from manufacturing to accounting. AI is freeing up time for employees to focus on more creative tasks.

Personalization will change the way we think of customer and employee experiences. Using user preferences and data, GPT can provide personalized offers, recommendations, and communication. With GPT-powered virtual assistants, customers and employees will generally get faster and better support.

Beyond AI chatbots, predictive analytics will get better at accurately forecasting data trends, from customer behavior to supply chain disruptions, and other crucial business intelligence that will help businesses steer — even in fast-changing, uncertain times.

Any disruptive technology comes with potential pitfalls, but AI’s benefits outweigh its drawbacks. To take advantage of AI’s power to improve efficiency, personalization, and decision-making, it will be more crucial than ever for IT leaders to have clear, real-time visibility into all the technologies their company is using.

To ethically and effectively use AI at work, companies need to think about how to coordinate and govern across the enterprise. And while AI may at times appear like a completely new horizon, some of the ways to keep it in check might actually be the less glamorous, existing workhorses of enterprise technology.

Companies should embrace the following methods when harnessing AI:

  • Access control list (ACL): As AI becomes a repository of all knowledge within a company, it will be imperative to define who can access what kind of information. And there should be no way to cheat those limits by asking the AI to hypothesize: we have seen plenty of conversations starting with ‘imagine’ or ‘pretend’ in which people bypass controls. That cannot be allowed to happen in an enterprise environment.

  • Bias mitigation: Systems your company deploys could lead to unfair outcomes that reinforce existing biases. Not only is this unethical, but also just plain bad business sense. Ongoing monitoring and adjustments to the AI models are needed to address any bias in the algorithms used to make decisions.

  • Content lifespan: As more companies digitize knowledge, it will be important to classify content and its shelf life based on the frequency of updates. Prevent outdated information from overwhelming up-to-date knowledge. I foresee a day when each company has one or more of its own AIs, trained specifically from up-to-date data specific to the company’s internal mission.

  • Data privacy: Like opting out of cookies on the internet, companies will eventually need to give users the ability to decide how much of their data can be collected, tracked, retained, and shared.

This alphabetic list is far from exhaustive. There will be many more ways to govern AI so that its inner workings are transparent while it also remains secure from malicious attacks.

Today’s AI is extremely good at writing text that sounds right, even though parts of it are inaccurate. It is a bit like having an enthusiastic intern on your team who works faster than any human but lacks detail-oriented experience and has a tendency to make stuff up. This is still very helpful, as long as an expert human is ready to curate and edit what AI produces.

Fundamentally, having a platform that affords complete visibility into a company’s technology suite will be paramount to success. As you can see, the alphabet of AI governance is just the beginning. With ongoing collaboration — between teams, but also between humans and AI — we can build a safer, more transparent AI ecosystem that will work for any business.

About the Author(s)

Dave Wright, Chief Innovation Officer of ServiceNow

Dave Wright serves as chief innovation officer of ServiceNow, provider of a cloud-based workflow automation platform.
