AI Business is part of the Informa Tech Division of Informa PLC



US security commission on AI considers the ethics of using citizen data to fight COVID-19

by Sebastian Moss

Corporate-military partnership asks Congress and Trump to pass contact-tracing privacy legislation

An independent commission formed by the US government to examine American competitiveness in artificial intelligence has released a whitepaper debating the privacy implications and ethical issues surrounding the use of AI to mitigate the impact of the novel coronavirus, COVID-19.

The National Security Commission on Artificial Intelligence (NSCAI) was established under the FY2019 John S. McCain National Defense Authorization Act. It is headed by former Google CEO Eric Schmidt, and former Deputy Secretary of Defense, Robert Work.

Members of the commission include Oracle CEO Safra Catz, Amazon Web Services CEO Andy Jassy, Google Cloud AI head Andrew Moore, and the CEO of the CIA's venture capital arm, In-Q-Tel, Chris Darby.

Avoiding unwanted side effects

“Given the magnitude of the crisis, the national security implications, and the promise offered by AI and related technologies to fight this virus, I believe the NSCAI has a responsibility to offer recommendations that can accelerate the response to COVID-19, prepare the United States to thwart a future pandemic, and leave the nation in a stronger position after the crisis,” the NSCAI’s executive director said in a foreword to the paper.

Ylli Bajraktari continued: "We have initiated temporary special projects to issue whitepapers that address AI-related aspects of pandemic response and implications of the crisis for America’s security and strategic competitiveness. 

“Each paper is a collaboration of participating Commissioners and select staff, and only reflects the views of the Commissioners and staff who have contributed to the special project."

This first paper, focused on ethics and privacy, was written by Microsoft's chief scientific officer Dr. Eric Horvitz, former FCC Commissioner Mignon Clyburn, Dakota State University president Dr. José-Marie Griffiths, and Dr. Jason Matheny, former director of the Intelligence Advanced Research Projects Activity (IARPA), the US intelligence community's cutting-edge research arm.

The paper describes several ways in which data and IT technologies could be used to mitigate COVID-19, including "high-performance computing for simulations and other analyses, in support of the design of therapeutics and vaccines, and computational modeling for tracking contagious diseases, monitoring the spread among individuals, predicting future outbreaks, and allocating healthcare resources."

Pattern recognition, classification, and recommendations generated via machine learning could all prove vital in the ongoing struggle, but the paper warns that key questions remain over how the technologies are developed and deployed, who controls the applications and underlying data, and how the data is used.

"Missteps could undermine core civil liberties, put inappropriate information and power in the hands of government or private corporations, and deepen inequalities in our healthcare and society," the paper states.

It also notes that the technologies could be applied unfairly: for example, people with broadband access and smartphones are more likely to benefit from proposals such as working from home or contact tracing.

Data collected from such approaches would provide a skewed picture of usage scenarios, with more data on the more fortunate, something that could lead to biases and an unfair allocation of resources.

To minimize the risk of such errors occurring, the paper's authors recommend leveraging “technology, policy, and law to put civil liberties considerations at the center of contact tracing methods and tools."

The dangers of AI-based contact tracing

Take contact tracing, for example, where users’ phones are equipped with software that keeps records of any other phones that pass nearby. If a user experiences COVID-19 symptoms, they inform the app, and it notifies every one of those logged phones that their owners should self-isolate. AI-based applications and the use of geo-location could help track and predict the spread of the virus, and potentially identify people who should remain at home.
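The mechanism described above — phones logging nearby phones, then alerting those contacts after a positive report — could be sketched roughly as follows. This is an illustrative toy model, not the design of any actual app; the `Phone` class, token exchange, and matching logic here are assumptions for demonstration only (real systems such as decentralized exposure-notification protocols use rotating Bluetooth identifiers and far stronger privacy protections).

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class Phone:
    """Hypothetical device: broadcasts random tokens, logs tokens heard nearby."""
    broadcast_tokens: list = field(default_factory=list)  # tokens this phone has sent
    heard_tokens: set = field(default_factory=set)        # tokens heard from others

    def new_token(self) -> str:
        # A fresh random identifier, revealing nothing about the owner by itself
        token = secrets.token_hex(16)
        self.broadcast_tokens.append(token)
        return token

def encounter(a: Phone, b: Phone) -> None:
    """When two phones pass near each other, each logs the other's current token."""
    a.heard_tokens.add(b.new_token())
    b.heard_tokens.add(a.new_token())

def report_positive(phone: Phone) -> set:
    """On a positive report, the reporter's own broadcast tokens are published."""
    return set(phone.broadcast_tokens)

def is_exposed(phone: Phone, published: set) -> bool:
    """Each device checks its local log against the published tokens."""
    return bool(phone.heard_tokens & published)
```

In this sketch, the matching happens on each device against its own local log, which is one way such a system could keep the central authority from learning the full contact graph — one of the privacy concerns the paper raises.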

Proponents claim this would present an effective way of limiting the impact of the virus while allowing some level of work and life to continue uninterrupted. But questions remain about the actual effectiveness of the approach.

“Additional unknowns are the legal and ethical implications of a private company, non-profit organization, or the government agency amassing and exploiting users’ data, even if for a public good, and the ultimate vulnerabilities of such a system to adversarial attacks,” the paper states. 

This is where the legislative branch should step in, the NSCAI argues.

"Congress should pass, and the President should sign legislation that mandates the consistent use of best practices and standards," the authors suggest. Congress should also require the Federal Trade Commission (FTC) to regulate the fielding of contact tracing applications, in coordination with entities such as the National Institute of Standards and Technology and the Centers for Disease Control and Prevention (CDC).

Such best practices, the NSCAI believes, can be reduced to 12 steps, including making mobile-based contact tracing applications strictly voluntary.

This is the opposite of what is happening in the Indian city of Noida, where citizens are being forced to download a controversial government-designed app, or face fines or jail time.

Other best practices include full disclosure on how the data will be used, explicit consent, automated data deletion once it has served its purposes, use of the latest standards in data encryption, and as much data anonymization as possible.
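One of the practices listed above, automated deletion of data once it has served its purpose, could be sketched as a simple retention policy over a local contact log. The function, the log format, and the 14-day window below are illustrative assumptions, not anything specified in the whitepaper.

```python
import time

# Illustrative retention window; the paper does not prescribe a specific duration
RETENTION_SECONDS = 14 * 24 * 3600

def prune_contact_log(log, now=None):
    """Drop (timestamp, token) entries older than the retention window.

    `log` is a list of (unix_timestamp, token) pairs; entries outside the
    window are discarded rather than archived, so stale contact data is
    deleted automatically instead of accumulating.
    """
    now = time.time() if now is None else now
    return [(ts, tok) for ts, tok in log if now - ts <= RETENTION_SECONDS]
```

Running such a prune on every app start (or on a timer) is one plausible way to make "automated data deletion" a structural property of the system rather than a manual promise.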

The organization also calls for expert panels consisting of privacy and security experts to review proposals and apps, adding that proposals should invite red-teaming, including adversarial attacks as part of the design process.

Despite the group's belief in the long-term power of AI, it states that "proposed systems should be used as a tool to complement, not replace, human efforts such as manual contact tracing."

So far, individual states have mostly opted to rely on human contact tracers, passing on automated systems, a Wired analysis found.

Avoiding bias

The NSCAI members’ other major recommendation is to “ensure that federally funded computing tools created and fielded to mitigate the COVID-19 pandemic are developed with a sensitivity to and account for potential bias and, at a minimum, do not introduce additional unfairness into healthcare delivery and outcomes.”

Currently, AI methods, including machine learning, are being applied to understand molecular interactions for designing therapeutics and vaccines, and to perform triage at multiple phases of care, including identifying at-risk patients for guiding hospital admission and predicting risk of physiologic decline and of mortality to guide therapy.

More broadly, AI is being used to conduct modeling and decision support for the allocation of scarce resources. Here, researchers have to be careful: AI is only as good as the data it is trained on.

“While great value can be provided, statistical analyses, including uses of machine learning procedures, can introduce unintended biases. The consequence of bias is pronounced in healthcare where social determinants of health significantly impact one’s access to information and technology, vulnerability to disease, and where many underlying health disparities are correlated to race and poverty.”

Analyses need to be properly designed, and take into consideration studies of existing disparities exposed by COVID-19. "An essential starting point is developing a baseline understanding of how socioeconomic factors and social determinants of health are shaping COVID-19 health outcomes, which requires important surveying," the authors write. 

"Despite inconsistent data reporting policies, growing evidence indicates that African American, Hispanic, and Native American populations in particular are disproportionately impacted by COVID-19."

Failing to consider factors such as race, gender, and region could skew predictions, diagnoses, risk scores, and decisions about where, or to whom, finite resources and care should be prioritized, NSCAI warns.

The committee members recommend collecting and analyzing data with this in mind, urging the CDC and related bodies to segment COVID-19 data by ethnicity, employment, and other factors.

The next steps

The whitepaper has been formally submitted to Congress, which will have the opportunity to review the document, and decide whether to follow the recommendations.

Several more papers addressing the potential of AI in dealing with pandemics are planned.
