Getting Started With Artificial Intelligence

The challenge for those new to AI is that it may not be obvious how a system learns or how to interpret its responses

Adan K. Pope, co-founder, Taraxa Labs

October 10, 2024


Artificial Intelligence (AI) presents the opportunity to organize chaos and leverage much-needed force multipliers. But given all the hype surrounding such a relatively opaque, black-box technology, AI demands a higher level of technical awareness and accountability from leadership than anything we’ve seen in recent years. Given that many leaders see AI as an inevitability – ignore it at your competitive peril – a getting-started guide can help cut through the noise to reach the vital success factors in technology adoption and risk management.

Artificial intelligence is abuzz for good reason. As the cloud and the Internet of Things (IoT) continue to be leveraged to create and collect larger and potentially more valuable sets of data, it’s becoming harder for the humans that we employ to get their arms around all that information and to make sense of it. While the connected/IoT world generates massive amounts of data around a product or process, much of the data collected today remains fallow in a database archive or is pulled situationally for ad hoc reporting. AI promises to create actionable insights from otherwise disparate and untapped data.

Unlike information technologies that you may be using today, artificial intelligence comes wrapped with a host of uncertainties that must be appreciated before considering blanket adoption. Artificial intelligence may employ technologies like deep learning or large language models to ingest vast amounts of data, identify patterns and extrapolate recommendations. The challenge for those new to AI is that it may not be obvious how such a machine “learns” and how to interpret its outputs or responses. We observe this phenomenon today when an AI creates an image of a person with three hands or suggests adding glue to a homemade pizza sauce to keep the cheese from sliding off. While the latter story was actually a Reddit urban legend, the fact that it seemed plausible is reason enough to do some more homework.

Be that as it may, some of the ramifications of AI learning models may not be so obvious. Users worry about implicit bias in the data sets used to train an AI that may skew an analysis toward historical inequities. An AI may even suffer “hallucinations,” as observed by IBM, “wherein a large language model…perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.” Is the AI that you employ trying to learn how to get to the correct answer or find a solution that it perceives will satisfy the user?

To manage these uncertainties, leaders must invest time into understanding AI technologies, vendors and service providers and the implications for their brand trust. This is not a technology that can be wholesale delegated to your IT or engineering organizations. AI demands a higher level of (1) acumen, (2) control and (3) governance before you let it become embedded into your architecture. Senior leadership must be clearly in charge of the technology use and application at all times and they must understand explicitly how artificial intelligence processes, learns and makes recommendations.

Control: Start With Assistive AI

A good place to start is understanding the different types of AI and which forms might best allow you to experiment and learn how to adapt it to your strategic needs. Ceding a process to a machine may offer a quick fix to a difficult problem, but usurping the human element probably means letting go of the reins too soon.

When people think of AI, they probably envision generative AI. ChatGPT is an example of generative AI, where it ingests tons of data and can then create (generate) outputs such as job application cover letters, realistic 3D images, or stock portfolios.

Similarly, large repositories of data can also be used to predict outcomes, trends, behaviors, or the like, known as predictive AI. Predictive AI uses machine learning—a computer system that employs algorithms and statistical models to identify patterns and extrapolations—to make recommendations and suggest actions.
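To make the predictive idea concrete, here is a minimal sketch of what "identifying patterns and extrapolating" can mean at its simplest: fitting a statistical model (here, an ordinary least-squares trend line) to historical data and projecting the next value. The data and scenario are hypothetical, and real predictive AI uses far richer models than a single trend line.

```python
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x (both unnormalized --
    # the shared factor cancels in the slope).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(model, x):
    """Extrapolate the fitted trend to a new input."""
    slope, intercept = model
    return slope * x + intercept

# Hypothetical history: monthly support-ticket counts for months 1-6.
months = [1, 2, 3, 4, 5, 6]
tickets = [120, 132, 141, 155, 163, 178]

model = fit_line(months, tickets)
print(round(predict(model, 7)))  # projected ticket volume for month 7
```

The "recommendation" a real predictive system makes (staff up, reorder stock, flag a risk) is ultimately an action attached to an extrapolation like this one, just computed over many more variables.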

A third category, assistive AI, also finds patterns and variances in large data sets, but its purpose is to help users with their given tasks. Unlike predictive AI, assistive AI operates on data in real time rather than leveraging experiential or stored data. As the name implies, it works alongside human operators to help them fill skills or resource gaps. Examples of assistive AI include foreign-language translation and image/object recognition in video surveillance.

With assistive AI, the responsibility for decision-making stays with a human, preferably one who understands what the tool is and is not recommending, insists on additional information where necessary, and acts on all the available information, including what the AI provides. Assistive AI applications can be implemented to declutter the inputs constructively without filtering the content from sight altogether.

Assistive AI can be best channeled by first considering the use cases where it can be transparently deployed. Look for places in your organization or processes where staff is overworked, or workflows are becoming disjointed. Perhaps you have a technical discipline where experienced staff are close to retirement and an AI could capture key learnings for the training and development of new team members. Automating non-critical or triage processes may also present an opportunity for an assistive AI, but only where strict governance principles are devised, such that you know that you’re still in charge of how your business processes are informed and executed.

Governance Engenders Trust

AI governance seeks to ensure that systems are developed ethically and responsibly and that security and risk mitigation strategies are employed to protect all stakeholders. For the most part, artificial intelligence today is a self-regulated industry; your customers and partners will expect you to apply the technology ethically and responsibly. Governance practices – including safety, privacy and transparency measures – must be designed to ensure trust and accountable scalability. To the end user of your AI-enabled product or service, the technology may be viewed as a “black box” feature that may add value or capability but may only be accepted if the vendor can explain clearly how it works and its guardrails of operation.

People seem to still trust other people as sources of authority more than machines. While an AI may do a better job of reading X-rays, patients still want to hear a diagnosis from their doctor. During an emergency, we expect a 9-1-1 call to be answered by a human being with the care and empathy we look for in a time of great distress. Though the process of dispatching resources may be optimized by a machine, trust in an algorithm isn’t part of our culture yet and perhaps it should never be.

Governance, as a demonstration of responsibility and duty of care to your customers, may require a greater level of openness than many organizations have been willing to expose in the past. If you’re trying to sell an autonomous vehicle, for example, one that requires 50 or 100 million lines of code to operate safely, you probably would never consider revealing that software architecture for competitive reasons. But with the AI you employ, how will your customers trust the technology enough to take their hands off the wheel? Someone in a windowless office in Silicon Valley somewhere has to write the algorithm that decides whether a vehicle in a rush-hour pile-up should either rear-end a school bus or swerve off onto a sidewalk of pedestrians. Do your customers have a right to know who is making such decisions and how their morality will be interpreted by machine learning? Governance casts a pretty wide net.

Rules of Thumb

Every leader needs to spend some time today to consider and anticipate the market and organizational tensions that may arise from the implementation of artificial intelligence into their products or processes. Start by thinking about some fundamental principles that will guide your thinking, such as the following:

  1. AI management is not a static process. Data sets will change over time, as will the models that an AI will use to learn. Make sure that your teams not only know how to test for trustworthiness over time, but what to test.

  2. AI may impact the culture of the organization, as your teams learn to understand and embrace human-to-machine collaboration. Beware of organizational “laziness,” as staff may become too reliant on AI for tasks that you want to remain in human hands. Remember that change is very hard for people even when it is beneficial.

  3. As with any digital transformation, keep your teams focused on the end goal—improving the customer experience within the use case, not implementing a technology for technology’s sake. Adapt, not adopt.

Lastly, keep an open mind. Start small, preferably with an assistive AI for a non-critical operation and learn as you go. Put stakes in the ground on your controls and governance, but don’t be afraid to pick them up and move them strategically with observation, testing and project learnings.

AI technology will change the world we live in today. It is up to us all to decide whether that change will be for a better, more prosperous future or for cost-cutting, profit-taking and further dehumanizing our interactions with our customers and each other. Adapt the technology to fit the culture you want; don't adopt AI in a way that eliminates the soul of your business.

About the Author

Adan K. Pope

co-founder, Taraxa Labs

A venerated technologist and serial CTO, Adan trusts that the answers to your most difficult challenges exist somewhere in your marketplace and he makes it his mission to help you chart your path to success. If you’re a CEO, you turn to Adan when technology hype starts to look more like a panacea than an enabler. If you’re his peer, he will run toward the fire with you as you try to reinvent your business. If you’re a team leader, he will show you how to make informed judgments so you can confidently drive measurable business performance.
