An opinion piece by the co-head of AI at Virtualitics, an advanced AI analytics company.

In February 2021, real estate firm Zillow began using its AI-generated ‘Zestimate’ to predict what a home is worth and then make buyout offers based on that prediction. The idea behind Zillow Offers was to acquire homes for resale. But the business collapsed because it was unable to sell enough of those properties. One of the culprits was inaccurate prediction of prices three to six months into the future.

But doesn’t this conflict with the promise of AI: that automated predictive power can make business decisions airtight? The reality is that predictive AI models, even when developed by experts with the best of intentions, are vulnerable to everything from gross miscalculation and misuse to unforeseen consequences.

Zillow might have prevented the mishap — or better yet, achieved great success — with a platform based on Explainable AI. Here’s a primer on how AI transparency lets users better interpret what their data models are saying, and how it is a key component of responsible practices, especially as use of ‘no-code AI’ blows up and rapidly transforms aspects of society.

No-code AI aims to put the power of data science into the hands of everyday Joes and Jills. Tools are sprouting up that let people build their own predictive systems without knowing how to write code. As such, no-code AI is expected to accelerate adoption of, and reliance on, the technology.

But with this proliferation comes a greater need for responsible use of AI. Explainable AI is the way to meet that need.

Why Explainable AI matters

Would you trust a doctor who recommends a treatment but could not say why? If she tried to explain it, how would you feel if she used medical jargon? What if she could not point to clinical trials or examples of the treatment being effective? Trusting an AI system and feeling comfortable with its recommendations work the same way.

Explainable AI is a set of processes and methods that lets human users comprehend and question the results of machine learning algorithms. Ultimately, it enables humans to trust all the data crunching done by AI because people can understand what the results are based on.

Consider a bank making predictions about the probability of a customer defaulting on a credit card. If the probability of default gets too high, the customer management team may be alerted to review the case more closely or try to contact the customer.

Ideally, an AI platform would provide an explanation of these predictions, because the customer will want to know why he or she is being flagged. The company also needs to understand the data and calculations behind the prediction so it can take the right action. (To be sure, there is room to let human experts exercise judgment and even overrule the model at times.)
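To make this concrete, here is a minimal sketch in Python of what a per-customer explanation could look like for a default-risk model. Everything here, the feature names, the synthetic data, and the simple coefficient-based attribution, is an illustrative assumption rather than a description of any bank's actual system; in practice a team might layer a dedicated attribution library such as SHAP on top of a more complex model.

```python
# A minimal sketch of a per-customer explanation for a credit-default model.
# Feature names and data are made up for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["utilization", "late_payments", "income", "tenure_months"]

# Synthetic training data: 1,000 customers, 4 features.
X = rng.normal(size=(1000, 4))
# Synthetic labels: higher utilization and more late payments -> more defaults.
logits = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.8 * X[:, 2] - 0.3 * X[:, 3]
y = (logits + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain(customer):
    """Return the default probability plus each feature's pull on the score."""
    z = scaler.transform(customer.reshape(1, -1))[0]
    contributions = model.coef_[0] * z          # per-feature effect on the log-odds
    prob = model.predict_proba(z.reshape(1, -1))[0, 1]
    return prob, dict(zip(feature_names, contributions.round(2)))

prob, reasons = explain(X[0])
print(f"Default probability: {prob:.1%}")
for name, weight in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {weight:+.2f}")
```

The point is not the particular model but the output: alongside the probability, the reviewer sees which factors pushed the score up or down, which is exactly the context a customer management team needs before acting.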

It is important to know why a prediction has been made and whether the scenario is within the expected use of the AI model. These are key items to evaluate before you take action. It would be irresponsible — and financially risky — to omit these types of checks and blindly accept results.

Adaptive AI systems, which continuously update as new data comes in, are especially in need of sustained monitoring and review. They are susceptible to data drift that compromises the accuracy of predictions.
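Below is a minimal sketch of what that kind of monitoring could look like, assuming a stored reference sample from training and a window of recent production data. The feature names, distributions, and significance threshold are all illustrative; real deployments typically rely on purpose-built monitoring tools with per-feature tuning.

```python
# A minimal sketch of data-drift monitoring: compare each feature's recent
# distribution against the distribution the model was trained on.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
feature_names = ["list_price", "days_on_market", "sqft"]

# Reference data the model was trained on (synthetic).
reference = rng.normal(loc=[300, 30, 1800], scale=[50, 10, 400], size=(5000, 3))
# Recent production data where one feature (days_on_market) has shifted.
recent = rng.normal(loc=[305, 45, 1820], scale=[50, 12, 400], size=(500, 3))

for i, name in enumerate(feature_names):
    stat, p_value = ks_2samp(reference[:, i], recent[:, i])
    drifted = p_value < 0.01          # crude threshold; tune per feature in practice
    print(f"{name}: KS statistic={stat:.3f}, p={p_value:.4f}, drift={'YES' if drifted else 'no'}")
```

When a feature flags as drifted, that is the cue for a human to investigate before continuing to trust the model's predictions.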

Moreover, the data used to train AI models can unintentionally contain bias that then flows through into harmful predictions. For example, in 2017, Google apologized after its Natural Language Processing-based sentiment analysis tool gave negative scores to phrases containing words like ‘gay’ and ‘homosexual’ while scoring ‘straight’ as neutral. A tool originally meant to democratize NLP instead exposed how important it is to review training data for pre-existing biases.

Regulatory scrutiny

Explainable AI enables responsible data use, and that can help with burgeoning corporate responsibility and regulatory requirements around the technology. The European Union’s tough General Data Protection Regulation (GDPR) already imposes obligations around automated decision-making, including giving individuals meaningful information about the logic involved. Violations can mean hefty fines.

In the U.S., at least 17 states considered AI-related bills or resolutions in 2021, with some becoming law in Alabama, Colorado, Illinois and Mississippi. Expect more regulation requiring organizations to explain just what is in the black box behind their models as AI gets incorporated into more operations.

One solution is adding data visualizations and explanations to provide the context and rigor that make AI trustworthy and responsible. Data visualization depicts information graphically to make it easier to interpret; it also makes it simpler to spot patterns and trends in datasets. A practical AI platform delivers both and offers the capacity to scale as no-code AI becomes more popular.
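As a hypothetical illustration, a few lines of matplotlib can turn per-feature contributions, like those sketched earlier, into a chart a non-technical reviewer can read at a glance. The contribution values here are made up for the example.

```python
# A minimal sketch of visualizing model explanations with matplotlib.
# The contribution values are hypothetical placeholders for real attributions.
import matplotlib.pyplot as plt

contributions = {
    "utilization": +1.2,
    "late_payments": +0.7,
    "income": -0.5,
    "tenure_months": -0.1,
}

names = list(contributions)
values = [contributions[n] for n in names]
# Red bars push the predicted risk up; green bars pull it down.
colors = ["tab:red" if v > 0 else "tab:green" for v in values]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(names, values, color=colors)
ax.axvline(0, color="black", linewidth=0.8)
ax.set_xlabel("Contribution to predicted default risk")
ax.set_title("Why this customer was flagged")
fig.tight_layout()
plt.show()
```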

Returning to the Zillow Offers example, imagine if Explainable AI had been paired with visualizations and made available to the users of the Zillow Offers and Zestimate systems. These insights could have exposed bad assumptions or biased data. Stakeholders would have had a more complete picture of what was playing out in this predictive ecosystem, and that information could have served as a crucial aid for decisions about buying, selling, and flipping homes, improving the success rate of the business.

About the Author

Aakash Indurkhya is co-head of AI at Virtualitics, an advanced AI analytics company.
