AI ethics: If data is the new oil, who owns the mineral rights?

AI Business

January 16, 2020

7 Min Read

This is the first installment of a series on AI ethics

by Mark Beccue, Tractica

As 2019 drew to a close, artificial intelligence (AI) technology continued to steadily permeate our world.

While promoters wax lyrical about how AI is helping to track and save snow leopards, accurately predict which patients are at high risk of atrial fibrillation (A-fib), and seamlessly resolve customer service issues, there is concern about how the use of AI technology can be controlled.

In Tractica’s view, the primary ethical issues for AI fall into four categories:

  • Privacy: Managing the use and ownership of personal data.

  • Accuracy/accountability: Trusting the results, accuracy, and decision-making of AI systems.

  • Bias: Recognizing AI systems’ ability/inability to eliminate bias, given the limits of data accuracy and completeness.

  • Propaganda generation: Managing the potential for AI systems to generate false information.

In this blog post, we examine how data privacy presents an ethics challenge for AI stakeholders.

A growing market

Tractica publishes a regularly updated database of market forecasts for more than 315 AI use cases. In the latest version, the cumulative software revenue associated with the top 30 use cases is forecast at $272.4 billion between 2018 and 2025. Data privacy significantly affects 18 of those 30 use cases, representing an estimated $171.2 billion, or roughly 63% of that total.

[Chart: Top 30 AI use cases by cumulative software revenue, 2018-2025 (Source: Tractica)]

As a backdrop, the depth and breadth of personal data being generated, stored, and used are growing exponentially. Entities that gain permission to use personal data through complex acceptance agreements sometimes share that data with other entities. Thus, it is difficult to know who has access to personal data – and when.

Dangers of the new oil

Data is famously the new oil. For AI, data is the lifeblood; the technology essentially does not work without it. In our connected, digital world, enterprises need data to survive and thrive, but consumers are increasingly wary of how their personal data is used. There is certainly a debate about who owns and controls data, and when it can be used.

In a quick sampling of the use cases listed, consider the following:

  • Voice/speech recognition: When is it proper for smart speakers to listen in and collect data? What can or should they collect?

  • Video surveillance: Where does the data collected from your home security system go? Does the government have the right to record your movements?

  • Customer service & marketing virtual digital assistants (VDAs): To deliver the best virtual agent customer service, companies need personal context data, such as your account information, customer history, and location. How much do you want virtual agents in customer service to know?

  • Localization and mapping: Your smartphone delivers accurate location information. What companies/institutions are you willing to share your location with and when?

  • Sentiment and human emotion analysis: AI systems can, to a great extent, understand your mood. What companies/institutions are you willing to share that with and when?

AI best practices and regulation

As with most technological advances, regulations and laws lag market advancement. Best practices and self-regulation typically materialize first, and that is true in the case of data privacy for AI. Market leaders like Microsoft and Google have published best practices.

Microsoft calls its guidance the Microsoft AI principles: six principles intended to “guide the development and use of artificial intelligence.” With regard to privacy, two of the principles are relevant: Transparency (AI systems should be understandable) and Privacy and Security (AI systems should be secure and respect privacy).

Google appears to be stumbling down the road to best practices. The company formed an external AI ethics panel in late March 2019, only to disband it barely a week later amid protests over the appointment of a Heritage Foundation executive to the panel. In addition, Google employees pushed back on the company’s AI work for the U.S. military (Project Maven). Its published AI principles, meanwhile, offer little concrete guidance on data privacy.

There are leading-edge regulations that will help with AI data privacy, such as Europe’s General Data Protection Regulation (GDPR). California’s Consumer Privacy Act (CCPA), which took effect in January 2020, extends similar protections to roughly 40 million Americans. But these are exceptions: a federal privacy bill was introduced in the U.S. Senate in 2017, but it has gone nowhere to date.

A siloed approach: Five elements of privacy legislation

How these regulations will help sort out the control of private data varies by regulatory body, but they are foundations that can be refined. In an article exploring AI privacy concerns, Forbes spoke with Bernhard Debatin, Ohio University professor and director of the Institute for Applied and Professional Ethics. Debatin said good privacy legislation in the age of AI should cover five elements (sketched in code after the list):

  1. AI systems must be transparent.

  2. An AI must have a “deeply rooted” right to the information it is collecting.

  3. Consumers must be able to opt out of the system.

  4. The data collected and the purpose of the AI must be limited by design.

  5. Data must be deleted upon consumer request.
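
To make the siloed approach concrete, here is a minimal sketch, in Python, of a data store built around four of Debatin’s five elements (element 2, the legal basis for collection, is a policy question rather than a design one). Every name in it is hypothetical, invented for illustration; a real GDPR- or CCPA-grade implementation would involve far more, such as audit trails, retention schedules, and verified deletion.

```python
# Hypothetical illustration only -- not a real privacy framework or library.
from dataclasses import dataclass
from typing import Any


@dataclass
class Record:
    owner: str    # the consumer the data describes
    purpose: str  # the single declared purpose it was collected for (element 4)
    value: Any


class PurposeLimitedStore:
    """A data silo: collection is tied to one purpose, with opt-out and deletion."""

    def __init__(self) -> None:
        self._records: list[Record] = []
        self._opted_out: set[str] = set()

    def collect(self, owner: str, purpose: str, value: Any) -> bool:
        # Element 3: an opted-out consumer's data is never collected.
        if owner in self._opted_out:
            return False
        self._records.append(Record(owner, purpose, value))
        return True

    def read(self, purpose: str) -> list[Any]:
        # Element 4: data is readable only for the purpose it was collected
        # under -- the "silo" that blocks daisy-chained downstream access.
        return [r.value for r in self._records if r.purpose == purpose]

    def report(self, owner: str) -> list[tuple[str, Any]]:
        # Element 1: transparency -- a consumer can see what is held, and why.
        return [(r.purpose, r.value) for r in self._records if r.owner == owner]

    def opt_out(self, owner: str) -> None:
        # Elements 3 and 5: stop future collection and delete existing data.
        self._opted_out.add(owner)
        self._records = [r for r in self._records if r.owner != owner]


store = PurposeLimitedStore()
store.collect("alice", "customer_service", {"ticket": 1138})
print(store.report("alice"))    # [('customer_service', {'ticket': 1138})]
print(store.read("marketing"))  # [] -- invisible outside its declared purpose
store.opt_out("alice")
print(store.report("alice"))    # [] -- deleted on request
```

Note how the read method enforces the silo: data collected for customer service simply does not exist for marketing purposes, which is precisely the property discussed below.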

This type of rationale makes sense: it decouples the original use and intent from the confusion of daisy-chained data access. It is a siloed approach that, in many ways, flies in the face of AI’s strength, which is connecting the dots across disparate data. If regulations were to include all of these elements, there would likely be a great deal of pushback from enterprises, particularly those in the advertising and marketing businesses. Platforms that hold vast amounts of personal data, such as Google, Apple, Microsoft, Baidu, Tencent, and telecom operators, as well as brands, marketers, and advertising agencies, typically depend on either providing or consuming downstream uses of data.

There will also be the question of cost. Who pays for consumer opt-out systems? Who pays for deleting consumer data? Will the business cases for this type of siloed approach show a positive ROI? Should consumers bear some of the cost of protecting their data, or conversely, should consumers be compensated for the use of their data?

This struggle to manage and control personal data in the AI era will likely open the door to market disruptors: companies that seize the opportunity to provide these kinds of middleman services to enterprises and consumers.

This opinion was originally published on the Tractica research blog.

Mark Beccue is a principal analyst contributing to Tractica’s Artificial Intelligence and User Interface Technologies practices, with a focus on intelligent interfaces and collaboration tools, as well as key application markets for AI.
