July 29, 2022
Former Barclays Bank chief data officer serves as director
A new independent AI ethics board has been launched to help organizations develop and deploy AI responsibly.
The Institute for Experiential AI Ethics Board (AIEB), spearheaded by Northeastern University, will field organizations' ethical AI queries through a small multidisciplinary team of experts tasked with determining whether a system may pose ethical issues.
The Institute will also work to develop human-centric AI solutions that leverage machine technology to extend human intelligence.
The independent group aims to "serve the growing demand for ethics guidance in a professional and efficient manner," an announcement reads.
The core of AIEB's expert group is composed of Northeastern University faculty members, though it also includes experts from Slim.ai, Mayo Clinic, Harvard University and MIT.
Former Barclays Bank chief data officer Usama Fayyad serves as the Institute's inaugural director, while Northeastern University professors Ricardo Baeza-Yates and Cansu Canca serve as co-chairs.
“The use of AI-enabled tools in healthcare and beyond requires a deep understanding of the potential consequences,” said Tamiko Eto, AIEB expert group member and research compliance manager at Kaiser Permanente.
“Any implementation must be evaluated in the context of bias, privacy, fairness, diversity and a variety of other factors, with input from multiple groups with context-specific expertise.”
The AIEB launch comes after the University of Cambridge unveiled a dedicated AI research center designed to promote ethically sound AI technologies. The Centre for Human-Inspired Artificial Intelligence (CHIA) will investigate how human and machine intelligence can be combined in technologies that best contribute to social and global progress.
Implementing the ethical use of AI is an important industry topic. In one recent example, Microsoft decided to bar users of its facial recognition tech from using it to infer attributes such as age and gender.
But such actions are uncommon. Because AI deployments remain largely self-regulated, some companies have been reluctant to adopt transparent AI practices when they are not legally obligated to do so.