What Are AI Ethics? Definition and Recommendations for AI in the Workplace

What does the term AI ethics mean and how will ethical considerations affect the future of AI?

Barney Dixon, Senior Editor - Omdia

January 19, 2024

14 Min Read

Understanding AI Ethics

AI Systems and Ethics

Artificial intelligence (AI) and machine learning (ML) have been revolutionizing businesses in the last few years, providing unprecedented opportunities for innovation, data analysis and efficiency. However, with their application comes several ethical questions and concerns that do not necessarily hold simple answers. 

AI is fast-evolving, and as it makes its way into a wide range of use cases across industries, its broad nature will make it harder to regulate and to protect end users from harm. It also poses challenges for regulators, who will have to anticipate when a particular outcome of an AI service could be harmful.

As businesses come to grips with this new technology, and governments around the world discuss how best to regulate it, individuals and organizations using AI and ML should consider the ethical implications of their use and develop an understanding of AI ethics moving forward.

However, there is a persistent knowledge gap among organizations, stopping stakeholders from fully understanding the power of these technologies, how to strategically invest in them and how to use them ethically. 

For example, many business leaders assume that AI and ML should sit siloed in specialized IT and technology departments. They might struggle to connect the dots when leveraging this technology to solve entrenched business problems. 

For AI and ML to reach their full potential, and to do so in an ethical way, committed team collaboration is needed. With clear and consistent collaboration, it will become easier to identify what opportunities AI and ML can offer, while maintaining a trajectory in line with a clear AI ethics strategy that has been agreed upon across the business. Having a dedicated AI/ML champion within each business unit can be an effective way to achieve this and bridge the gap between business units and data scientists.

Leaders can be drivers of change and showcase examples of successful AI implementation. Nearly every business department will have a use case for AI/ML. A cohesive strategy and clear direction will boost return on investment, while maintaining an ethical outlook.

What Is the Definition of AI Ethics?

AI ethics is the study and application of ethical principles, values and guidelines in the development, deployment, and use of AI technology. This involves addressing the moral and societal implications that may arise when developing and using AI systems. The goal of AI ethics should be to ensure AI is developed and used responsibly and in a socially beneficial way.

Several countries and regions have already developed their own sets of ethical guidelines and principles for AI. For example, the EU has introduced a framework for AI ethics based on seven principles:

  • Human agency and oversight

  • Technical robustness and safety

  • Privacy and data governance

  • Transparency

  • Diversity, non-discrimination and fairness

  • Societal and environmental well-being

  • Accountability 

Another example is Japan, whose principles of human-centric AI focus on:

  • Education and literacy

  • Privacy protection

  • Ensuring security

  • Fair competition

  • Fairness

  • Accountability

  • Transparency

To facilitate the development of further ethical guidelines for AI, countries have been creating AI ethics forums and councils responsible for monitoring the use and development of AI technologies across sectors and for providing recommendations on ethical issues. In most cases, these guidelines are non-binding for developers and service providers; countries should therefore consider introducing a reward and labeling mechanism to drive adoption. Under such a mechanism, companies that meet the requirements for trustworthy AI could be awarded a seal or label, which they could use to advertise their compliance with IT security and ethics standards.

Ethics will play a pivotal role in shaping the regulatory environment for AI, and several governments around the world are incorporating an AI ethics code into their national strategies. The development and training of an AI system is a continuous process, involving the development of algorithms, training on new data, and ongoing monitoring and updates. Because of this, policymakers should take particular care when defining ethical guidelines. Guidelines should help avoid unfair bias, prevent inappropriate data from being used to train AI systems, and maintain respect for privacy.

Developing a trustworthy and robust AI system will require the input of multiple stakeholders, including developers, statisticians, academics and data cleansers. Policymakers and governments should work together to invest in and train professionals to incorporate the best ethical practices into their AI systems. This is not a new avenue either: many governments already allocate considerable funding to digital skills training.

Meanwhile, companies should introduce an AI ethics officer role or board to monitor and safeguard ethical values incorporated into AI systems, similar to the role of a data protection officer.

Tackling the Issue of Ethics

Exploring the Moral Implications of AI Predictions in Everyday Scenarios

Imagine this scenario: Your bank tells you that an AI calculated you have a 75% chance of getting divorced within the next two years, and therefore you and your spouse are not eligible for a mortgage. Would you consider this prediction to be ethically or morally adequate?

Making a prediction like this, and acting on it, could cause significant reputational damage for a bank. If customers, society, politicians and prosecutors perceive an AI project as unethical, the AI could become a reputational, or even legal, burden. Managers and AI project leaders should be prepared to prevent this from happening.

Ethical concerns, particularly around data privacy and bias, are integral to AI and ML discussions. Addressing bias requires a holistic approach, including regular monitoring and human intervention.

Ethical Dilemmas in AI Models

The discourse around ethics and AI models has two aspects: the properties of the model itself, and its decisions in ethically challenging situations, or ethical dilemmas, for which no indisputable choice exists.

The classic example is the trolley problem, which also applies to autonomous cars and driver-assistance systems. A runaway trolley is heading toward three people on the track. They cannot move, and the trolley cannot stop. The only way to save them is to divert the trolley onto another track. The three people would be spared, but the trolley would kill another person who would otherwise be unharmed. What is the most ethical choice?

The trolley problem pattern is behind many ethical dilemmas, including autonomous cars deciding whether to hit pedestrians or crash into another vehicle.

Ethical Decision Making in AI: Approaches

Comparing Top-Down and Bottom-Up Approaches in AI Ethics

When AI directly impacts people’s lives, it is mandatory to understand how the AI component chooses an action in challenging situations. In general, there are two approaches: top-down and bottom-up. In a bottom-up approach, the AI system learns by observation.

However, ethical dilemmas are seldom encountered: it can take years or decades until a driver faces one, if ever. Observing the crowd instead of working with selected training drivers is one way to gather training data more quickly. The disadvantage is that the crowd also teaches bad habits such as speeding.
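
To make the bottom-up idea concrete, here is a minimal behavioral-cloning sketch in Python, assuming scikit-learn is available; the sensor features, observed decisions and action codes are entirely hypothetical:

```python
# Bottom-up sketch: imitate observed human driving decisions
# (behavioral cloning). All features and actions are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Each observed state: [speed_kmh, distance_to_obstacle_m, obstacle_is_person]
observed_states = [
    [50, 80, 0],
    [50, 15, 1],
    [30, 40, 0],
    [60, 10, 1],
]
# The action the human driver actually took: 0 = keep course, 1 = brake hard
observed_actions = [0, 1, 0, 1]

policy = DecisionTreeClassifier().fit(observed_states, observed_actions)
print(policy.predict([[55, 12, 1]]))  # imitates whatever the drivers did
```

A policy learned this way simply reproduces the observed behavior, good and bad alike, which is exactly why crowd-sourced training data can smuggle in habits such as speeding.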

The alternative is the top-down approach with ethical guidelines helping in critical situations. Philosopher Immanuel Kant’s categorical imperative (doing the right thing because it is the right thing to do) or science fiction writer Isaac Asimov’s Three Laws of Robotics are examples. Asimov’s rules or laws are as follows: 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence if such protection does not conflict with the First or Second Law.

Kant and Asimov are intriguing but abstract. They require a translation to be applicable for deciding whether a car hits a pedestrian, a cyclist, or a truck when an accident is inevitable. 

In practice, laws and regulations clearly define acceptable and unacceptable actions for many circumstances. Where they do not, things get tricky.
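
In code, a top-down approach typically becomes a rule layer that can veto whatever a learned component proposes. The sketch below is illustrative only; the rule names, context fields and fallback action are assumptions, and a real system would codify actual laws and guidelines:

```python
# Top-down sketch: hard-coded ethical/legal rules filter the actions
# a learned model proposes. Rules, actions and context are hypothetical.
FORBIDDEN_ACTIONS = {"cross_solid_line", "exceed_speed_limit"}

def permitted(action: str, context: dict) -> bool:
    if action in FORBIDDEN_ACTIONS:
        return False
    # A codified guideline: never pick an action predicted to harm a person.
    if context.get("predicted_human_harm", 0) > 0:
        return False
    return True

def choose_action(ranked_actions: list[str], context: dict) -> str:
    # Take the model's best-ranked action that passes every rule.
    for action in ranked_actions:
        if permitted(action, context):
            return action
    return "emergency_stop"  # safe fallback when nothing is permitted
```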

Following a utilitarian ethic – looking at the impact on all affected people together – requires quantifying suffering and harm. Is the life of a 5-year-old boy more important or the life of a 40-year-old mother with three teenage daughters?

An AI model making an explicit decision about whom to kill is not a driver in a stressful situation with an impulse to save their own life. It is an emotionless, calculating, rational AI component that decides and kills – and here, society's expectations are higher than for a human driver.
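
A utilitarian decision rule can likewise be sketched in a few lines: score every option by its expected total harm and pick the minimum. The probabilities and severity weights below are placeholders; deciding what those numbers should be is precisely the ethical problem described above, not something the code solves:

```python
# Utilitarian sketch: choose the action with the lowest expected total
# harm. All probabilities and severity weights are illustrative only.
def expected_harm(option: dict) -> float:
    # Sum over affected people: probability of injury x severity weight
    return sum(p["injury_prob"] * p["severity"] for p in option["affected"])

options = [
    {"name": "swerve_left", "affected": [{"injury_prob": 0.9, "severity": 10}]},
    {"name": "brake_only",  "affected": [{"injury_prob": 0.4, "severity": 10},
                                         {"injury_prob": 0.4, "severity": 10}]},
]
best = min(options, key=expected_harm)
print(best["name"])  # here: brake_only, expected harm 8.0 vs. 9.0
```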

Societal Expectations and Autonomous Cars

Balancing Societal Expectations and Owner Protection in Autonomous Vehicles

What complicates matters are contradictory expectations. In general, society expects autonomous cars to minimize overall harm to humans. There is just one exception: vehicle owners want to be protected by their own vehicle, however much they endorse utilitarian ethics when discussing cars in the abstract.

The implementation of top-down ethics requires close collaboration of data scientists or engineers with ethics specialists.

Data scientists build frameworks that capture the world, control devices, or combine ethical judgments with the AI component’s original aim. It is up to the ethics specialists to decide how to quantify the severity of harm and make it comparable – or how to choose an option if there are only bad ones.

Role of Employees in Ethical AI/ML Use

Empowering Employees to Ensure Ethical Use of AI and ML Models

Employees play a pivotal role in ensuring AI/ML models are used ethically and for the right reasons. Quality control involves monitoring models for potential biases and adapting them to unforeseen circumstances. Leaders should invest in developing their workforce to address the evolving needs of AI and ML integration.

They will also need to ensure the model is continually monitored to avoid 'drift'. AI drift occurs when the AI model encounters new data that differs from the data it was trained on. The result is less accurate output, which can be dangerous when dealing with critical business decisions. Model retraining then becomes necessary.

For example, a model that predicts credit risk and was trained on a particular range of salaries could become inaccurate over time, as average salaries change.
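
One common way to watch for this kind of drift is to compare the distribution of incoming data against the training distribution, for instance with a two-sample Kolmogorov-Smirnov test. A minimal sketch, assuming SciPy and a single numeric feature such as salary (the numbers are made up):

```python
# Drift-monitoring sketch: flag when live data stops resembling the
# training data. Feature, numbers and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_salaries = rng.normal(50_000, 8_000, 5_000)  # data the model saw
live_salaries = rng.normal(58_000, 8_000, 1_000)      # salaries have risen

statistic, p_value = ks_2samp(training_salaries, live_salaries)
if p_value < 0.01:  # the distributions differ: consider retraining
    print(f"Drift detected (KS statistic {statistic:.3f}); schedule retraining.")
```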

Businesses will need to become AI/ML-capable to remain competitive in the market. AI/ML will soon become crucial for every successful business practice, so understanding the technology and its potential is vital and should be a priority.

Public and Academic Interest in AI Ethics

Examining the Growing Public and Academic Interest in AI Ethics

Ethical dilemmas in AI attract broad public and academic interest. Companies and organizations, however, take a more hands-on approach: they want to know how to create ethically and morally adequate AI models that decide, for example, which passengers are subject to more thorough airport security checks.

The Importance of Fairness in AI Models

Addressing Fairness and Bias in AI Model Development

Well known in this area is the early work of Google's former AI ethics star Timnit Gebru. She pointed out that IBM's and Microsoft's facial analysis models work well for white men but are much less accurate for women of color.

Missing fairness or biased data can be the root cause of such inadequate models. Suppose a widely used training set for face-detection algorithms contains primarily images of men and boys of all ages and only a few images of women and girls.

If training data does not reflect the overall population, it is biased. When data scientists use this training data to create models, the resulting models work better for males than females.

Fairness has a different twist. It demands that a model has a similar quality level for all relevant subgroups independent of their actual share of the population. Suppose a company has 95% female employees. In that case, a face-matching algorithm working well only for females can have outstanding overall results and be the best possible solution.

However, suppose it performs poorly for the small male minority among the employees and fails to detect anyone in that group. In that case, society considers the model ethically questionable. So fairness means balancing model quality across subgroups, even at the cost of overall accuracy.
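
Measuring this does not require anything exotic: compute the model's quality per subgroup and compare the numbers, rather than trusting a single overall score. A minimal sketch with made-up labels and predictions:

```python
# Fairness sketch: report accuracy per subgroup instead of one overall
# score. The group labels and predictions below are made up.
from collections import defaultdict

groups = ["f", "f", "f", "f", "f", "f", "f", "f", "m", "m"]
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]  # perfect for "f", wrong for "m"

correct, total = defaultdict(int), defaultdict(int)
for g, yt, yp in zip(groups, y_true, y_pred):
    total[g] += 1
    correct[g] += int(yt == yp)

for g in total:
    print(f"group {g}: accuracy {correct[g] / total[g]:.2f}")
# Overall accuracy is 0.80, yet group "m" scores 0.00. The per-group
# gap, not the overall number, is what a fairness review must surface.
```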

Recommendations for Implementing AI Ethics in the Workplace

Organizations must not only navigate the ethical landscape of AI, but also create a responsible and accountable AI culture that aligns with organizational goals and societal expectations. Here are some recommendations for implementing AI ethics in the workplace:

  1. Establish AI Principles: When developing and using AI, organizations should clearly define guiding principles that align with widely accepted ethical standards and organizational values.

  2. Develop a Governance Mechanism: Organizations should create a robust governance structure, including a designated AI ethics committee or officer who is responsible for the ethical implementation of AI throughout the organization.

  3. Embrace Transparency: Transparency should be prioritized. Organizations should openly communicate about AI systems, their capabilities and their limitations. Employees and stakeholders should be provided with clear information on how AI is being used in the workplace.

  4. Consider Human Rights: Organizations should ensure that the development and deployment of AI aligns with fundamental human rights. This can include assessing potential impacts on individuals and society.

  5. Create a Framework for Data Protection: Organizations should develop comprehensive data protection policies that safeguard privacy and ensure data accuracy. They should establish protocols for the secure handling of data in AI applications.

  6. Develop a Code of Ethics for AI: Organizations should create a specific ethics code tailored to AI technologies within the workplace. The code should articulate the ethical responsibilities of all employees and other stakeholders involved with AI initiatives.

  7. Train the AI: Organizations should retrain and update AI systems on an ongoing basis so that they reflect current ethical considerations, legal requirements and organizational policies. AI algorithms should be regularly reviewed and refined to keep up with evolving ethical standards.

  8. Get Feedback: Backchannels for employees and other stakeholders should be established to allow feedback on AI applications. Organizations should actively seek input to identify potential biases, ethical concerns and areas for improvement.

  9. Incorporate Human Oversight: Human oversight should be integrated into AI systems to ensure that critical decisions involve human judgment. Protocols should be established for intervention in cases that may have significant ethical implications (see the sketch after this list).

  10. Develop a Risk-Mitigation Plan: A comprehensive risk-mitigation plan, one which addresses the potential ethical challenges associated with AI implementation, should be developed. Organizations should include strategies for minimizing risks, responding to ethical concerns, and adapting to changing ethical standards.
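
As one concrete pattern for recommendation 9, decisions can be routed so that only confident, low-stakes model outputs are automated, while everything else is escalated to a person. A minimal sketch; the threshold, labels and stakes flag are hypothetical:

```python
# Human-oversight sketch: automate only confident, low-stakes decisions;
# escalate the rest to a human reviewer. Threshold is hypothetical.
CONFIDENCE_FLOOR = 0.95

def route_decision(prediction: str, confidence: float, high_stakes: bool) -> str:
    if high_stakes or confidence < CONFIDENCE_FLOOR:
        return f"ESCALATE to human review (model suggested: {prediction})"
    return f"AUTO-APPROVE: {prediction}"

print(route_decision("approve_loan", 0.99, high_stakes=False))
print(route_decision("deny_mortgage", 0.75, high_stakes=True))
```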

Now more than ever, businesses should take the necessary steps to ensure they have an AI- and ML-competent team with the right skills and knowledge to bring this potential to life. The first step is to start small, making sure all business units and employees are on board. From there, AI and ML can be extended gradually to minimize disruption. Those already taking these steps are reaping the rewards.

About the Author(s)

Barney Dixon

Senior Editor - Omdia

Barney Dixon is a Senior Editor at Omdia, with expertise in several areas, including artificial intelligence. Barney has a wealth of experience across a number of publications, covering topics from intellectual property law to financial services and politics.
