BSA calls on governments to pass laws requiring private sector firms to carry out impact assessments for AI systems

Ben Wodecki, Jr. Editor

June 14, 2021

2 Min Read


The Software Alliance (BSA) trade group has called on world leaders to pass legislation requiring private sector companies to perform impact assessments on high-risk applications of AI technologies.

As part of its call to action, the BSA unveiled a framework that it says would ensure AI is accountable by design.

Entitled Confronting Bias: BSA’s Framework to Build Trust in AI, the 32-page document details how companies can perform impact assessments and outlines over 50 diagnostic statements specifying actions for companies to take.

“AI has the potential to reshape industries and improve quality of life around the globe. But, in the absence of key safeguards, AI can also create feedback loops that may entrench and exacerbate historical inequities,” Christian Troncoso, BSA’s senior policy director, said.

Regulation, regulation, regulation

The trade group, which was established by Microsoft in 1988 to represent commercial software makers, described an “urgent need” for policymakers to align around best practices for mitigating the potential risks of AI bias.

Claiming its framework is the "first of its kind", the BSA attempted to leave no stone unturned, outlining actions to take across the design, development, and deployment of AI systems.

It also set out corporate governance structures, processes, and safeguards which the organization said are needed to “implement and support an effective AI risk management program.”

BSA president and CEO Victoria Espinel referred to the EU’s draft regulation on AI, saying the organization would look to work with the bloc in order to “build the right approach and pass it into law.”

The proposed EU legislation, recently published by the European Commission, would require AI systems to be categorized according to their trustworthiness and potential impact on citizens' rights. Systems found to infringe human rights would be banned from sale.

"Now is the time for [the] industry to step forward and work with policymakers to pass legislation to address risks of AI bias, and BSA will help lead this effort," Espinel added.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

