2020: The year of more dependable AI?
by Amy Hodler, Neo4j, 25 February 2020
Around the world, businesses and governments are expected to turn to artificial intelligence (AI), exploiting its potential to automate and improve their decision-making. The promise of AI is to transform complex systems to be more transparent, safer, smarter and better able to operate at scale.
And since these are the same qualities we insist on in ethical supply chains, why not use similar methods to improve our AI supply chains? To deliver on their promise, AI systems have to become more reliable and inspire more trust than we’ve seen so far.
These two factors are interrelated: accountability and appropriate data use will encourage investment in and adoption of AI, while progress with AI will require citizens to trust the technology, to believe in the fairness of AI-led decisions, and to be confident in how their data is used.
To create more responsible AI, we have to have the structures in place to understand our AI supply chain: all the connections and context between our data, how it was collected and processed, as well as assumptions and biases that may have been codified and/or amplified.
Decision making and liability
As the adoption of AI increases, it will become more difficult to apportion responsibility for decisions. If mistakes cause harm, who will be culpable? A system for tracking accountability is needed for the kind of high-stakes decisions made in hospitals, courtrooms and workplaces – deciding who gets insurance, who gets what sort of legal settlement, and who gets hired. We can start by aligning our success measures for AI with the desired outcomes, and by answering questions such as: how would we know if the AI system was wrong?
Transparency
If an opaque AI system has been used to make significant decisions, it may be difficult to unpick the causes behind a specific course of action. In supply chains, tracking is paramount to understanding. We should strive to use interpretable models whenever possible to provide a clear explanation of the reasoning process.
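To make this concrete, here is a minimal sketch (ours, not the article’s) of what choosing an interpretable model can look like in practice: a scikit-learn logistic regression whose coefficients can be read and audited directly. The bundled dataset and the specific model choice are illustrative assumptions.

```python
# A minimal sketch of preferring an interpretable model: a logistic
# regression whose weights can be inspected, rather than an opaque model.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()  # illustrative stand-in for a real decision dataset
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Each coefficient is a direct, auditable statement of how a feature
# pushes the prediction - the "clear explanation" an opaque model lacks.
coefs = model.named_steps["logisticregression"].coef_[0]
top = sorted(zip(data.feature_names, coefs), key=lambda t: -abs(t[1]))[:5]
for name, weight in top:
    print(f"{name}: {weight:+.2f}")
```

Inspecting the weights like this is no substitute for a full audit, but it gives reviewers a starting point that a black-box model simply cannot offer.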
Eradicating bias
Machine learning (ML) systems can entrench existing bias in decision-making systems. Care must be taken to ensure that AI evolves to be non-discriminatory. To mitigate these dangers, our AI supply chain needs to completely understand the data used for training and testing. For example, we must be able to answer questions about how, and by whom, the data was collected, as well as whether the data is representative of how the model will be applied.
Data lineage and protection against data manipulation are foundational for trustworthy AI. This means tracking not only the data itself but also the lineage of changes: for example, what was the impact of cleansing the data, and what was added or removed? And just as we would insist on tracking and alerting on any ingredient changes in a medical supplement, we should strive to track data changes and test for signs of manipulation.
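As a rough illustration (a sketch of ours, not a method the article prescribes), a pipeline might fingerprint each data snapshot so that unannounced changes can be detected later. The file-based registry below is a hypothetical stand-in for a real lineage store.

```python
# A minimal sketch: fingerprint dataset snapshots so later edits can be
# detected, analogous to alerting on ingredient changes in a supplement.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of a dataset file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Digests recorded when each snapshot first entered the pipeline
# (hypothetical in-memory registry; in practice this lives in a database).
registry: dict[str, str] = {}

def register(path: Path) -> None:
    registry[path.name] = fingerprint(path)

def verify(path: Path) -> bool:
    """Return False if the file no longer matches its registered digest."""
    expected = registry.get(path.name)
    return expected is not None and fingerprint(path) == expected
```

A failed `verify` doesn’t say who changed the data or why – that is what the lineage record is for – but it does give the alert that starts the investigation.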
Increased momentum in 2020
In 2019, as people and governments woke up to these issues, we saw a tipping point of public, private and government interest in creating guidelines for AI systems that better align with cultural values.
Last year the European Union (EU) published a set of proposals on the seven key requirements any AI system should meet in order to be deemed “trustworthy”.
The EU expert group on AI advised that AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights, and that AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Most commentators agree these are sound ideas and that the guidance is useful and timely. But it’s really just a start – so much more remains to be done.
And in 2020 that momentum will accelerate, with new and updated checklists set to be published. Furthermore, organizations are starting to use independent risk assessments and to track the human elements in their AI supply chain: for example, monitoring the diversity of the teams involved, including cultural backgrounds that may influence choices, and even considering their working conditions.
At the same time, in order to significantly improve our ability to explain how AI systems arrive at their conclusions, we anticipate that organizations will look to an increasingly promising approach for adding reliability and trust to AI: graph technology.
The logic here is that AI performs better and is more trackable when we add context. To extend their power, AI systems need to be supplied with related information to draw on when solving the problems we want them to address. This will give them greater capability and allow them to handle more complex, nuanced decisions.
Making AI predictions more reliable and trustworthy
Graphs offer a number of ways to add in that vital context layer. Let’s consider one instance of this in AI. Software is notorious for supplying answers that are incorrect or whose derivation is hard to trace. ML classifiers have made associations that lead to the miscategorization of items, such as classifying french fries as crab legs.
Algorithms aside, understanding what data was used to train our model, and why, is extremely important to validating classifications and predictions. Data lineage is the system of record for a data point’s origin: how it was collected and processed, what assumptions and biases may have been codified or amplified, and how the data moved over time. It’s information that is easy to encode in a graph representation.
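As a rough illustration of how naturally lineage fits a graph, the sketch below records a small lineage trail in Neo4j via its Python driver: a raw dataset, who collected it, the cleansing step that processed it, and the model trained on the result. The node labels, property names and connection details are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch, assuming a local Neo4j instance, of writing a data
# lineage trail as a graph of datasets, processing steps and models.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # hypothetical credentials

LINEAGE = """
MERGE (raw:Dataset {name: $raw_name, collected_by: $collector})
MERGE (step:ProcessingStep {name: 'deduplicate-and-cleanse'})
MERGE (clean:Dataset {name: $clean_name})
MERGE (model:Model {name: $model_name})
MERGE (raw)-[:INPUT_TO]->(step)
MERGE (step)-[:PRODUCED]->(clean)
MERGE (clean)-[:TRAINED]->(model)
"""

with driver.session() as session:
    session.run(LINEAGE,
                raw_name="survey_responses_raw",
                collector="field-team-A",
                clean_name="survey_responses_clean",
                model_name="eligibility_classifier_v1")
driver.close()
```

Once the lineage is in the graph, a single query can answer questions like “which models were trained on data collected by this team?” – exactly the kind of traceability a supply chain audit demands.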
Graphs are already powerful, proven tools for managing supply chains, helping to coordinate, track and make sense of complicated interdependencies. Today, thanks to good supply chain management and ethical commitment, you can buy a pair of eco-friendly jeans or fair-trade coffee – but you have no idea whether an AI system was ethically trained or whether the data it used is biased. This is unacceptable. We believe that in 2020, as the challenge of AI and ethics becomes an ever more pressing concern, we need to furnish AI with the right context and transparent decision-making by doing the simple groundwork of tracking the elements of our AI supply chain. It’s time we got started, and in so doing made trustworthy AI applications a reality.
Amy Hodler is Director of Analytics and AI Programs at Neo4j, the graph database company, and co-author of Graph Algorithms: Practical Examples in Apache Spark & Neo4j.