Software Alliance unveils framework for tackling bias in AI
BSA says governments should pass laws forcing private sector firms to carry out impact assessments for AI systems
The Software Alliance (BSA) trade group has called on world leaders to pass legislation requiring private sector companies to perform impact assessments on high-risk applications of AI technologies.
As part of its call to action, the BSA unveiled a framework which it says would ensure AI is accountable by design.
Entitled Confronting Bias: BSA’s Framework to Build Trust in AI, the 32-page document details how companies can perform impact assessments and outlines over 50 diagnostic statements specifying actions for companies to take.
“AI has the potential to reshape industries and improve quality of life around the globe. But, in the absence of key safeguards, AI can also create feedback loops that may entrench and exacerbate historical inequities,” Christian Troncoso, BSA’s senior policy director, said.
Regulation, regulation, regulation
The trade group, which was established by Microsoft in 1988 to represent commercial software makers, described an “urgent need” for policymakers to align around best practices for mitigating the potential risks of AI bias.
Claiming its framework is the "first of its kind", the BSA attempted to leave no stone unturned, outlining actions to take across the design, development, and deployment stages.
It also set out corporate governance structures, processes, and safeguards which the organization said are needed to “implement and support an effective AI risk management program.”
BSA president and CEO Victoria Espinel referred to the EU’s draft regulation on AI, saying the organization would look to work with the bloc in order to “build the right approach and pass it into law.”
The proposed EU legislation, recently published by the European Commission, would require AI systems to be categorized according to their trustworthiness and potential impact on citizens' rights. Systems found to infringe human rights would be banned from sale.
"Now is the time for [the] industry to step forward and work with policymakers to pass legislation to address risks of AI bias, and BSA will help lead this effort," Espinel added.