November 27, 2023
This year’s AI craze has raised no shortage of questions and debates: How will we resolve the IP issues related to generative AI? Which professions are most likely to be impacted by AI? Could AI end humanity? And, most recently, what is going on with OpenAI?
There seems to be consensus that AI presents both great opportunity and great risk to societies around the world, requiring some form of rules and oversight to be established to ensure organizations and governments strike the right balance between innovation and risk. We see laws, regulations or other forms of government intervention for AI emerging in China, the European Union, the U.S. and Canada. Add to this the proliferation of codes of conduct, standards and frameworks emerging from international bodies and standards associations.
Taking all of these things into account, it is clear that society is going to need an army of professionals to establish, interpret and apply the emerging rules for AI and ensure that uses of AI technologies not only comply with law, but sustainably serve humanity. There has been much discussion as to who will take on these AI governance roles: technologists, ethicists, lawyers, security professionals, privacy professionals, etc.
I would suggest that the answer is ‘all of the above.’ AI has the potential to affect such a vast array of society that the people helping guide its use will need to be drawn from a multitude of disciplines. Nonetheless, I would (selfishly) argue that privacy professionals have a unique head start in kickstarting the profession of AI governance.
Which is why I jumped at the chance to recently attend the inaugural AI Governance Global conference in Boston, organized by the International Association of Privacy Professionals (IAPP), as well as the two days of AI governance training prior to the conference. I will provide my thoughts on each part of this event below, but spoiler alert: This was a clear milestone not just for privacy professionals, but for the professionalization of AI governance going forward.
The week’s events began at the end of October with two full days of AI governance training. The training content was developed by the IAPP and delivered by senior professionals with practical experience implementing and advising on AI governance programs. The primary aim of the training was to help individuals develop sufficient knowledge to architect, implement and lead AI governance within organizations. Additionally, the IAPP has announced a forthcoming certification for AI governance that will align with the content of the training. The certification is expected to be launched in early 2024.
The training itself was divided into eight different modules that covered a range of AI governance topics including technical discussions of different forms of AI and their respective risks, key operational elements of AI governance and risk management, existing and emerging AI-related laws, regulations and standards, and key ethical and societal implications of AI technologies.
The training tracks with the AI Governance Professional certification Body of Knowledge that the IAPP published earlier this year. It emphasized the importance of a multidisciplinary and diverse team to lead and contribute to AI governance within organizations; formalized policies, standards and accountability processes; and training for the individuals involved in the development, sourcing and use of AI technologies. Much of the content related to implementing AI governance appeared well-aligned with NIST’s AI Risk Management Framework.
With around 300 attendees of the two-day training session, many of whom came from disciplines other than privacy, it was clear that there is a keen appetite for learning and for the professionalization of AI governance through a certification scheme. See more on this professionalization push in an op-ed from IAPP President and CEO J. Trevor Hughes.
The two-day conference following the training brought more than 1,000 attendees, a handful of AI governance technology vendors and a range of expert-led sessions. The keynote presentations were excellent, featuring perspectives from leaders such as former Prime Minister of New Zealand the Right Honourable Dame Jacinda Ardern, journalists Jane Friedman and Kevin Roose, and leading technology and policy academic Jonathan Zittrain.
A keynote panel included privacy and AI governance leaders from big tech firms Microsoft, Google, IBM and Meta. Google Chief Privacy Officer Keith Enright shared an inspiring observation: his enthusiasm for living in a time of such great technological innovation, and his gratitude for the opportunity to contribute to policy discussions around a technology that may have societal repercussions for years to come.
From left: Moderator Jennifer Strong, Microsoft Chief Privacy Officer Julie Brill, Google Global Chief Privacy Officer Keith Enright, IBM Chief Privacy and Trust Officer Christina Montgomery, and Meta Deputy Chief Privacy Officer, Policy Rob Sherman. The panelists shared their approaches to developing safe, secure and trustworthy AI systems that protect privacy. Credit: Stephen Bolinger
The breakout sessions focused on specific aspects of AI governance. Examples of the topics include emerging AI laws and regulations, the use of AI in hiring and marketing, implementing AI governance programs, mechanisms for measuring fairness of AI systems, and consideration of GDPR compliance issues for generative AI models. Many of the breakout session presentations are available for download for free from the IAPP’s conference website.
I came away from the sessions with three key observations. First, there seems to be a growing consensus that the NIST AI Risk Management Framework is the most developed and tangible approach to implementing AI governance at scale. Second, there is no certainty that a ‘Brussels Effect’ will follow the EU’s enactment of the AI Act, setting a de facto global regulatory standard for AI the way the GDPR did for data protection.
Finally, taking journalist Roose’s keynote presentation to heart, I must strive to maintain and develop the professional attributes that will be more challenging for AI to replicate: handling surprising situations, being social, and possessing a scarce combination of rare skills.
As is often the case with professional conferences, as good as the training, keynotes and breakout sessions were, the greatest value for me was the ability to connect with other leaders who face similar challenges. The content of the sessions gave us the prompts to get into the details and share what is working, what is not, and the many questions yet to be answered as we confront a global technological shift and the governance requirements that come with it.
AI governance will not happen if it is treated as an afterthought; it will happen through the dedication of individuals in academia, civil society, government and industry working to establish and apply clear rules to ensure we are using AI responsibly and in service to a broad set of beneficiaries. Echoing the sentiment of Google’s Enright, I am grateful to be a part of this developing community as it comes together to chart a course for the future of AI governance.