by Dr. Iain Brown


LONDON – The future will be fashioned by artificial intelligence (AI). Governments are incorporating advanced analytics and AI into their digital transformation plans, and AI already plays a major role in many industries. AI-enabled technology is changing how people interact both with business and the state.

Yet while there’s enormous potential for greater efficiency and better outcomes, pitfalls lie ahead if we aren’t careful. AI has the power to do huge amounts of good for humanity, but without high-quality data and human oversight its decisions can become flawed. It’s important to follow best practice when building AI solutions to avoid inadvertently disadvantaging or excluding certain people and groups, or misusing personal information.

AI is a form of advanced analytics – as such, many of the ethical concerns surrounding it have their roots in data. Data is the fuel that feeds AI, providing the raw information that machines need to analyse to help make decisions. That means that the foundation of good ethical AI is good ethical data and data handling practices. It’s possible to avoid ethical shortcomings in AI so long as a common, shared code of ethics can be established with data at the forefront.




A FATEful encounter

In a digital age, data is increasingly our most valuable asset. Yet it’s not just data that needs to be handled correctly – we also need to control how decisions are made using that data. It’s right and proper to demand that data be treated ethically, but ethics is a question of personal values and, if not codified, it differs from person to person.

When agreeing a common approach or framework for AI, it’s best to use those values we agree on as the founding principles. Debate is welcome and consensus will help employees and decision makers support the guidelines in the future.

While it’s advisable for each organisation to agree its own code of AI ethics, it’s useful to have a set of core principles to work from. For example, the UK Government’s Data Ethics Framework provides general but strong guidelines to help public sector organisations use data responsibly. For AI and data science, the Fairness, Accountability, Transparency and Explainability (FATE) framework provides an ideal starting point.

At its core, the FATE framework encourages responsible and transparent data use at every stage of the AI process, from collection to analysis. It mandates that decisions made by machines be fair and unbiased, with a diverse range of inputs at design to avoid inbuilt discrimination.

The method by which data is turned into insight and used to make decisions must be transparent and explainable, with consumers always able to question and opt out if so desired. Finally, human oversight is a must to ensure no automated decision is made that betrays the company’s values, ethics and regulatory obligations.
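FATE’s fairness requirement can be made concrete with a simple check. The sketch below computes a disparate impact ratio – the lowest group’s rate of positive outcomes divided by the highest group’s. The group labels, outcomes and the “four-fifths” threshold are illustrative assumptions, not part of any specific framework implementation:

```python
def disparate_impact_ratio(outcomes, groups, positive=1):
    """Ratio of the lowest group's positive-outcome rate to the highest's.
    A common rule of thumb (the 'four-fifths rule') treats ratios below
    0.8 as a possible sign of adverse impact worth investigating."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(1 for o in group_outcomes if o == positive) / len(group_outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical approval decisions for two demographic groups:
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparate_impact_ratio(outcomes, groups))  # 0.25 / 0.75 ≈ 0.33
```

A ratio this far below 0.8 would prompt a human reviewer to examine the model and its training data before any automated decision is acted on – exactly the oversight FATE calls for.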




Garbage in, garbage out

If they’re allowed free rein, there’s a serious risk that bad data practices and poor communication will corrupt AI deployments, leading to poor decision-making. When data sets aren’t representative of actual populations or user sets, the decisions made by AI may appear biased or discriminatory. For example, when facial recognition technology is trained only with images of a single racial group it may fail to recognise, or may even misidentify, people from other groups.
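One practical safeguard against the unrepresentative-data problem above is to compare each group’s share of the training set with its share of the real population before training. The group names, population shares and tolerance below are hypothetical, purely to illustrate the idea:

```python
from collections import Counter

def representation_gaps(group_labels, population_shares, tolerance=0.05):
    """Flag groups whose share of the training set falls short of their
    share of the real population by more than `tolerance` (an
    illustrative threshold), returning the size of each shortfall."""
    counts = Counter(group_labels)
    total = len(group_labels)
    gaps = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / total
        if pop_share - sample_share > tolerance:
            gaps[group] = round(pop_share - sample_share, 3)
    return gaps

# Hypothetical face dataset heavily skewed towards one group:
labels = ["group_a"] * 900 + ["group_b"] * 80 + ["group_c"] * 20
population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
print(representation_gaps(labels, population))
# {'group_b': 0.17, 'group_c': 0.13}
```

Catching shortfalls like these before training is far cheaper than discovering discriminatory behaviour after deployment.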

Following the FATE approach, however, helps to ensure this will not happen. From the beginning, its guidelines encourage a diverse range of people and data to review and feed into the decision-making process. Businesses seem to be taking this lesson to heart – our research found that 63 per cent have an ethics committee that reviews their use of AI. In short, FATE enables good data practice and the collection of high-quality data, the building blocks of any ethical AI solution.

It’s right to be optimistic about AI’s future. Organisations that use it collaborate more, rely on evidence, and make more informed, accurate and successful decisions. It’s not hard to see why AI is becoming a dominant trend in many sectors. Yet we cannot be blind to the risks or the need to get the basics of data governance right. AI offers great power, and with great power comes even greater responsibility.

Join Dr. Brown, the SAS team, and over 20,000 other industry and technology leaders at The AI Summit London, June 12-13


Dr Iain Brown is Head of Data Science at SAS UK & Ireland