Companies can't afford the built-in delay or risk that comes with continuously moving or copying data. A trustworthy AI foundation allows them to act on trusted data in the moment, anywhere it lives and as it grows

October 28, 2021

5 Min Read

A steady shift to digital has been underway for over a decade.

Companies in every industry have been trying to reinvent operations while drastically shifting how they engage customers. They did it with mixed success — until the pandemic hit.

The rapid pace of market changes brought about by COVID-19 forced leaders to react with both speed and agility.

The resulting momentum collapsed an anticipated decade of transformation into just one year.

AI is only as good as it is trusted

Companies expect to use AI in 90% of their digital projects, but most of those AI projects never make it into production. Two of the biggest reasons are a lack of trust and a failure to focus on those who are most impacted. That’s why it’s important that organizations evaluate their digitization plans to ensure they’re operationalizing trustworthy AI through a human lens.

So where to begin? Start by asking three essential questions: Do you trust your data as it grows? Will customers trust AI outcomes? Can you balance innovation with regulations?

An ever-increasing data deluge, shifting customer expectations, and a need to adhere to AI regulations call for companies to establish a trustworthy AI foundation.

Putting trustworthy AI into practice

We hold firm to the belief that AI should benefit the many, not just the few. When companies put humans first and build trust into AI from the start, they can avoid regulatory, reputational, and financial risk and boost their shareholder value, all while extending AI's benefits more broadly.

Here at IBM, we’ve made significant moves to develop our own trustworthy AI framework. It comprises three tenets: AI ethics at the core, governed data and AI technology, and an open and diverse ecosystem.

We internalize this as a company through our AI Ethics Board, which reviews all AI efforts across the company — a move that earned recognition from the World Economic Forum, which featured IBM in a recent case study about companies embracing AI ethics as a core business tenet. To help clients put trustworthy AI into practice, we also formed a team that uses human-centered design to build AI strategies that treat trust as a first-class citizen.

The five pillars of trustworthy AI

We ensure our technology is designed and used responsibly by making sure it adheres to five pillars of governed data and AI technology: transparency, explainability, fairness, robustness, and privacy. 

  • AI should be transparent, because transparency reinforces trust. The best way to promote transparency is through disclosure. People need to see how AI works, evaluate it, and understand its strengths and limitations. Transparency provides insight into the who, what, where, how, and why: who has access, what data is collected, and where and how it will be used. AI also needs to be justified, answering why it is being built in the first place.

  • AI should be explainable. Users should be able to understand how and why AI arrived at a decision, especially if that decision has implications to what a person values — whether it’s employment and creditworthiness, or health and wellness.

  • AI should be fair and help counter our human biases and promote equitable treatment. With AI increasingly used to inform decisions about people and their livelihood, it's essential that businesses work to mitigate bias. 

  • AI should also be robust. As AI becomes more a part of our daily lives, it is also at greater risk of attack. AI systems should be actively defended to minimize security risks so all stakeholders are confident in the outcomes. This means handling exceptional conditions and being built to withstand intentional and unintentional interference.

  • AI should preserve privacy. AI systems need to safeguard consumers’ privacy and data rights and provide explicit assurances to users about how their personal data will be used and protected.
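To make the fairness pillar concrete, one widely used starting point is the disparate impact ratio, which compares favorable-outcome rates between demographic groups. The sketch below is purely illustrative — the toy data, group labels, and the common 0.8 rule-of-thumb threshold are assumptions for the example, not a description of IBM's tooling:

```python
def disparate_impact(outcomes, groups, favorable=1, privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A common rule of thumb flags ratios below 0.8 as potential bias.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(o == favorable for o in priv) / len(priv)
    rate_unpriv = sum(o == favorable for o in unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Toy data: group "A" receives the favorable outcome 4 times out of 5,
# group "B" only 2 times out of 5.
outcomes = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A"] * 5 + ["B"] * 5
ratio = disparate_impact(outcomes, groups)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
```

A ratio of 0.50, well below the illustrative 0.8 threshold, would prompt a closer look at the data and model before deployment. Production fairness work involves many more metrics and mitigation steps, but simple checks like this show how the pillar can be operationalized rather than merely stated.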

We believe that for AI to be successful it must be built in an open and diverse ecosystem. Delivering on that means fostering a culture where diversity, inclusion, and shared responsibility are imperative. This includes a diversity of datasets, diversity in practitioners, and a diverse partner ecosystem to enable continuous feedback and improvement.

Trust is one essential ingredient of AI for business

Companies now have the opportunity to apply the tools, shaped from best practices, to help bring trustworthy AI into practice without sacrificing the speed and efficiency the digital shift has ushered in. This means they can work toward sustaining their digital advantage while at the same time using AI as a force for positive change, both for their companies and for society at large.

Learn more about how to grow your digital advantage built upon a trustworthy AI foundation. Visit: https://www.ibm.com/watson/trustworthy-ai

Dr. Seth Dobrin is Global Chief AI Officer at IBM and leads the corporate AI strategy. In his role, Seth is responsible for connecting AI development with the systemic creation of business value via a design-driven strategy that enables a transformation of the business's core operations. Through a methodology he created, he is bringing a human-centered approach to AI across every area of the company, from business operations to product development, ensuring continuous and responsible delivery of AI-based business outcomes. In 2021, Dr. Dobrin was recognized as The AIconics Solution Provider Innovator of the Year.

