An opinion piece by the president of Infosys, a major digital services and consulting company in India.


It is an open secret among data scientists that good models sometimes go to waste.

The best data and the most elegant AI models amount to nothing if humans do not believe the system is effective, fair, and adaptable. Unless base models are designed to actively avoid bias, they are likely to inherit the prejudices of their creators, both conscious and unconscious, and lead to damaging consequences. If an application domain is known to carry high levels of human bias, developers need to be aware that AI models trained on its data are likely to perpetuate it.

There are many instances where AI has scaled up bias rather than reduced it. In the United States, for example, several jurisdictions have experimented with tools that predict recidivism risk for criminal defendants. These tools have had a damaging impact on racial minorities, who are disadvantaged in the assessment because the models encode both human prejudice and historically skewed statistics.

Therefore, the most valuable and most used AI systems must be designed to actively combat unfairness. Quality AI systems need to instill trust as they operate; otherwise, they risk ending up highly developed but ultimately unused.

Higher satisfaction and trust

AI experts have long recognized this. Infosys’ Data+AI Radar 2022 survey, for example, found that companies that develop strong ethics and bias management capabilities report higher satisfaction and trust in their data and AI use cases.

This holds true for every measure of ethics and bias control the survey examined, underscoring the importance of this element. Notably, the report found a direct correlation between companies’ confidence in their AI ethics and bias management capabilities and their satisfaction levels.

Though many are aware of this, no one is getting it right across the board. While AI has been used to take human preference out of hiring pipelines, this approach overlooks the need for human interaction in those processes. When all personal interaction is removed from hiring, applicants risk being reduced to a set of characteristics, and AI could ignore many of the less tangible assets, such as good ‘people’ skills, that make somebody right for a professional position.

Many AI practitioners know their bias management practices need to improve, in every region. But the reality is that some types of bias are harder to mitigate because of inherent societal barriers. Demographic groups, for instance, are rarely represented in perfectly equal proportions in real-world populations, so the data collected from them is skewed from the start.

Participation bias

Our study found that data scientists in the U.S. are most challenged by participation bias, which arises when the collected data is not representative of the population being studied. Managing it means verifying that the sample actually reflects that population before a model is trained. If this is accounted for and AI systems are built to counter the skew, users will ultimately be more satisfied.
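As a concrete illustration (the survey itself does not prescribe any tooling), a minimal representativeness check might compare a training sample’s demographic mix against known population shares with a chi-square test. The group labels and reference shares below are hypothetical.

from collections import Counter

from scipy.stats import chisquare

# Hypothetical reference shares for the population being studied.
POPULATION_SHARES = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}

def looks_unrepresentative(sample_groups, alpha=0.05):
    """Flag a sample whose group mix diverges from the reference population."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    observed = [counts.get(g, 0) for g in POPULATION_SHARES]
    expected = [share * n for share in POPULATION_SHARES.values()]
    _, p_value = chisquare(observed, expected)
    return p_value < alpha  # True suggests participation bias

# Example: a sample that over-represents group_a.
sample = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
print(looks_unrepresentative(sample))  # True for this skewed sample

A check like this is only a first gate; it tells you the sample is skewed, not how to fix it, which is where deliberate system design comes in.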

U.S. data scientists did show the best performance globally in avoiding two other types of bias: placing more emphasis on one variable than the rest (overfitting the model, in statistics speak) and failing to properly manage outliers in their data. This shows that human partiality can be countered in AI when it is deliberately addressed.
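Again purely as an illustration, and not a practice drawn from the survey, a standard mitigation pairs routine outlier handling with regularization, which penalizes a model for leaning too heavily on any single variable. The data and features below are synthetic and hypothetical.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def clip_outliers_iqr(X, factor=1.5):
    """Clip each column to [Q1 - factor*IQR, Q3 + factor*IQR]."""
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    iqr = q3 - q1
    return np.clip(X, q1 - factor * iqr, q3 + factor * iqr)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # four synthetic features
y = X @ np.ones(4) + rng.normal(scale=0.1, size=500)
X[::50, 0] += 20                               # corrupt one feature with outliers

# Scaling plus an L2 penalty discourages over-weighting any one feature;
# clipping tames the injected outliers before the model sees them.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(clip_outliers_iqr(X), y)
print(model.named_steps["ridge"].coef_)

Neither step is exotic; the point is that guarding against over-emphasis and outliers is a design decision teams can make routinely, not an afterthought.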

Respondents to Infosys’ survey rated themselves as average at ethical AI overall, but some regional variation emerged across the seven measures of AI ethics. For example, AI practitioners in the U.S. and U.K. reported comparatively strong confidence in ethical practices, which appears to stem from incentive programs that encourage employees to report potential biases or issues that could contribute to inequity. This method, in conjunction with other practices, could be adopted more broadly to increase confidence in ethical AI.

Trust and scaling AI

Organizations face another challenge: nearly three out of every four AI practitioners surveyed by Infosys want to scale AI across their enterprise, yet only 7% say trust is a top challenge in scaling. If AI is to be harnessed to its full potential, that disconnect needs to change. There are many examples of how AI could help solve some of the biggest challenges facing us today, including climate change, disease, and the rising cost of living.

However, with economic uncertainty looming, companies focused on managing costs face an added challenge: not neglecting the trust factor as they engage more deeply with AI.

Ultimately, trust is the next big horizon in implementing AI systems, and it forms a crucial part of the non-financial governance that investors increasingly demand from companies. I certainly welcome this effort and look forward to what can be achieved.

About the Author(s)

Mohit Joshi, President of Infosys

Mohit Joshi is the president of Infosys, a major digital services and consulting company in India. Brand Finance's Global 500 report named Infosys the third most valuable IT services brand in 2023.

