Building a Foundation for Trustworthy AI

An opinion piece from the chief AI officer of Schneider Electric

Philippe Rambach, Chief AI Officer, Schneider Electric

September 20, 2023


Years from now, someone will write a monumental book on the history of artificial intelligence (AI). I’m pretty sure that book will describe the early 2020s as a pivotal period. We are still far from Artificial General Intelligence (AGI), but we are already very close to applying AI in every field of human activity, at unprecedented scale and speed.

It may now feel like we’re living in an ‘endless summer’ of AI breakthroughs, but with amazing capabilities comes great responsibility. And discussion is heating up around ethical, responsible, and trustworthy AI.

The epic failures of AI, like the inability of image recognition software to reliably distinguish a chihuahua from a muffin, illustrate its persistent shortcomings. Likewise, more serious examples, such as biased hiring recommendations, do little to burnish AI’s image as a trusted advisor. How can we trust AI under these circumstances?

The foundation of trust

On one hand, creating AI solutions follows the same process as creating other digital products: the foundation is managing risk, ensuring cybersecurity, and assuring legal compliance and data protection.

In this sense, three dimensions influence the way that we develop and use AI at Schneider Electric:

  1. Compliance with laws and standards, like our Vulnerability Handling & Coordinated Disclosure Policy, which addresses cybersecurity vulnerabilities and targets compliance with ISO/IEC 29147 and ISO/IEC 30111. At the same time, as new responsible AI standards are still under development, we actively contribute to their definition, and we commit to comply fully with them.

  2. Our ethical code of conduct, expressed in our Trust Charter. We want trust to power all our relationships in a meaningful, inclusive, and positive way. Our strong focus and commitment to sustainability translates into AI-enabled solutions accelerating decarbonization and optimizing energy usage. We also adopt frugal AI – we strive to lower the carbon footprint of machine learning by designing AI models that require less energy.

  3. Our internal governance policies and processes. For instance, we have appointed a digital risk leader and data officer, dedicated to our AI projects. We also launched a Responsible AI (RAI) work group focused on frameworks and legislation in the field, such as the European Commission’s AI Act or the American Algorithmic Accountability Act, and we deliberately choose not to launch projects raising the highest ethical concerns.

How hard is it to trust AI?

On the other hand, the changing nature of the application context, possible imbalances in the available data that cause bias, and the need to back up results with explanations all add complexity to trusting AI.

Let’s consider some pitfalls around machine learning (ML). Even though the risks can be similar to those of other digital initiatives, they usually operate at far greater scale and are more difficult to mitigate due to the increased complexity of the systems involved. They require additional traceability and can be more difficult to explain.

There are two crucial elements to overcome these challenges and build trustworthy AI:

1. Domain knowledge combined with AI expertise

AI experts and data scientists are often at the forefront of ethical decision-making – detecting bias, building feedback loops, running anomaly detection to avoid data poisoning – in applications that may have far-reaching consequences for humans. They should not be left alone in this critical endeavor.

To select a valuable use case, choose and clean the data, test the model, and control its behavior, you will need both data scientists and domain experts.

For example, take the task of predicting the weekly HVAC (Heating, Ventilation, and Air Conditioning) energy consumption of an office building. The combined expertise of data scientists and field experts enables the selection of key features in designing relevant algorithms, such as the impact of outside temperatures on different days of the week (a cold Sunday has a different effect than a cold Monday). This approach ensures a more accurate forecasting model and provides explanations for consumption patterns.
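As a rough illustration of this interplay, here is a minimal sketch on synthetic data – not Schneider Electric’s actual model – showing how the domain insight that occupancy drives the temperature response can be encoded as an explicit day-of-week × temperature feature:

```python
# Illustrative sketch only: a toy daily forecaster for office HVAC energy use.
# The synthetic data, feature encoding, and model choice are assumptions for
# this example, not Schneider Electric's production approach.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Two years of synthetic daily data: outside temperature (°C) and day of week.
days = pd.date_range("2022-01-01", periods=730, freq="D")
temp = 12 + 10 * np.sin(2 * np.pi * days.dayofyear / 365) + rng.normal(0, 3, len(days))
dow = days.dayofweek  # 0 = Monday ... 6 = Sunday

# Domain insight: a cold Sunday is not a cold Monday, because the building is
# mostly empty on weekends, so heating demand barely reacts to temperature.
occupancy = np.where(dow < 5, 1.0, 0.15)
energy = 200 + occupancy * np.maximum(18 - temp, 0) * 25 + rng.normal(0, 20, len(days))

df = pd.DataFrame({"temp": temp, "dow": dow, "energy": energy})

# Encode the day-of-week x temperature interaction explicitly, so the model
# can learn a different temperature response for each day of the week.
X = pd.get_dummies(df["dow"], prefix="dow").astype(float).mul(df["temp"], axis=0)
X["temp"] = df["temp"]

X_train, X_test, y_train, y_test = train_test_split(
    X, df["energy"], test_size=0.2, shuffle=False  # keep time order
)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"MAE: {mean_absolute_error(y_test, model.predict(X_test)):.1f} kWh/day")
```

Encoding the interaction explicitly, rather than hoping the model stumbles upon it, is exactly where the domain expert earns their keep: the resulting model is both more accurate and easier to explain.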

When unusual conditions do occur, user-validated suggestions for relearning can be incorporated to improve system behavior and avoid models biased by overrepresented data. Domain experts’ input is key to explainability and bias avoidance.

2. Risk anticipation

Most current AI regulation applies a risk-based approach, for good reason. AI projects need strong risk management, and anticipating risk must start at the design phase. This involves predicting the different issues that can occur due to erroneous or unusual data, cyberattacks, and so on, and theorizing their potential consequences. Practitioners can then implement additional actions to mitigate such risks, like improving the data sets used for training the AI model, detecting data drift (unusual data evolutions at run time), implementing guardrails for the AI, and, crucially, ensuring a human user is in the loop whenever confidence in the result falls below a given threshold.
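To make the last two mitigations concrete, here is a minimal sketch of a drift guardrail plus a human-in-the-loop gate on low-confidence predictions. The data, names, and thresholds are illustrative assumptions, not a production design:

```python
# Minimal sketch of two mitigations named above: a data-drift check on a
# run-time input, and a human-in-the-loop gate when confidence is low.
# All names, data, and thresholds here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy training data: one feature, binary label.
X_train = rng.normal(0, 1, (500, 1))
y_train = (X_train[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)
train_mean, train_std = X_train.mean(), X_train.std()

def decide(x, drift_z=4.0, min_confidence=0.8):
    """Return an automated prediction, or escalate to a human reviewer."""
    # Guardrail 1: flag inputs far outside the training distribution (drift).
    if abs(x - train_mean) / train_std > drift_z:
        return {"action": "escalate", "reason": "input drift detected"}
    # Guardrail 2: keep a human in the loop when confidence is below threshold.
    proba = model.predict_proba([[x]])[0]
    if proba.max() < min_confidence:
        return {"action": "escalate", "reason": f"low confidence ({proba.max():.2f})"}
    return {"action": "auto", "prediction": int(proba.argmax())}

print(decide(1.5))   # confident and in-distribution -> automated
print(decide(0.05))  # near the decision boundary -> escalated to a human
print(decide(9.0))   # far outside the training data -> escalated to a human
```

The point of such a design is that the model never silently answers outside its comfort zone: out-of-distribution inputs and borderline predictions are routed to a person instead.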

Be trustworthy

So, is responsible AI lagging behind the pace of technological breakthroughs? In answering this, I would echo recent research from MIT Sloan Management Review, which concluded: "To be a responsible AI leader, focus on being responsible."

We cannot trust AI blindly. Instead, companies can choose to work with trustworthy AI providers that have domain knowledge and deliver reliable AI solutions while upholding the highest ethical, data-privacy, and cybersecurity standards.

As a company that has been developing solutions for clients in critical infrastructure, national electrical grids, nuclear plants, hospitals, water treatment utilities, and more, we know how important trust is. We see no other way than to develop AI in the same responsible manner, ensuring security, efficacy, reliability, fairness (the flip side of bias), explainability, and privacy for our customers.

In the end, only trustworthy people and companies can develop trustworthy AI.


About the Author

Philippe Rambach

Chief AI Officer, Schneider Electric

Philippe Rambach is senior vice president and chief artificial intelligence officer of Schneider Electric. His mission is to drive AI innovation at scale, both internally and for customers, to provide greater overall efficiency and sustainability through data-based insights.
