The EU AI Act Is Law. Now What?

A C-suite guide to the European Union’s Artificial Intelligence Act

Seth Dobrin

April 8, 2024


The European Union's (EU) Artificial Intelligence (AI) Act, approved by the European Parliament in March 2024, is set to revolutionize the landscape of AI development and deployment in the EU and across the globe.

With its focus on promoting "trustworthy" and "human-centric" AI while safeguarding fundamental rights, health, safety and the environment, the act introduces significant new requirements and restrictions for companies operating in the EU and for companies elsewhere whose AI systems affect people in the EU. Although most of the act's obligations won't apply until two years after it enters into force (prohibitions and general-purpose AI rules phase in sooner), C-suite executives at AI companies must start preparing now to ensure compliance and mitigate potential risks. This article serves as an actionable guide for the C-suite, breaking down the act's key provisions and outlining the necessary steps.

At its core, the EU AI Act adopts a risk-based approach, subjecting "high-risk" AI systems — those with the potential to cause significant harm — to the most stringent requirements. The act's scope is broad, encompassing providers placing AI systems on the EU market or putting them into service, regardless of where those providers are established; deployers using AI systems within the EU; and importers, distributors and product manufacturers incorporating high-risk AI systems into their offerings.


The act explicitly prohibits certain AI practices deemed unacceptable, such as systems that manipulate human behavior to cause harm, exploit vulnerabilities of specific groups, provide "social scoring," or enable "real-time" remote biometric identification in public spaces by law enforcement (with limited exceptions). Additionally, retrospective ("post") remote biometric identification by law enforcement is prohibited without judicial authorization.

The act mandates a range of requirements for high-risk AI systems, covering risk management and data governance, technical documentation, record-keeping and logging, transparency and user information, human oversight, accuracy, robustness and cybersecurity. Providers must ensure that high-risk systems undergo the required conformity assessments and registration procedures before placing them on the market. Non-compliance can result in severe penalties, reaching up to €35 million or 7% of global annual turnover (revenue), whichever is higher.

Beyond high-risk systems, the act imposes transparency obligations on certain other AI systems, such as chatbots and deepfake generators. Furthermore, general-purpose AI models, including foundation models, are subject to their own tier of requirements.

To navigate this complex regulatory landscape, C-suite executives must proactively prepare their organizations for the EU AI Act. The first crucial step is to conduct a comprehensive assessment of the company's current and planned AI usage, identifying which systems may fall within the scope of the act. Particular attention should be paid to potentially high-risk AI systems, such as those used for biometric identification/categorization, management of critical infrastructure, education and vocational training, employment and HR decisions, creditworthiness assessments, eligibility determinations for public services and benefits, law enforcement and criminal justice, as well as migration and border control. This assessment should consider both AI systems provided by the company and third-party AI systems deployed, including AI embedded in products. Companies can identify gaps between their current practices and the act's requirements by analyzing data practices, human oversight measures, transparency and testing.
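
For a concrete sense of what that initial screening might look like, here is a minimal Python sketch of an AI inventory check. It is an illustration, not a legal test: the category labels loosely paraphrase the act's Annex III, and the system names and fields are hypothetical.

```python
from dataclasses import dataclass

# Paraphrased Annex III high-risk categories (illustrative, not a legal checklist).
HIGH_RISK_CATEGORIES = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_training",
    "employment_and_hr",
    "creditworthiness",
    "public_services_eligibility",
    "law_enforcement",
    "migration_and_border_control",
}

@dataclass
class AISystem:
    name: str
    use_case: str           # label for the system's primary use case
    third_party: bool       # vendor-supplied or embedded in a purchased product?
    serves_eu_market: bool  # placed on the EU market or affecting people in the EU?

def flag_high_risk(inventory: list[AISystem]) -> list[AISystem]:
    """Return systems that likely warrant a full conformity assessment."""
    return [s for s in inventory
            if s.serves_eu_market and s.use_case in HIGH_RISK_CATEGORIES]

inventory = [
    AISystem("resume-ranker", "employment_and_hr", third_party=True, serves_eu_market=True),
    AISystem("support-chatbot", "customer_support", third_party=False, serves_eu_market=True),
]
for system in flag_high_risk(inventory):
    print(f"Needs high-risk review: {system.name}")
```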


To ensure compliance, the C-suite must implement robust governance structures to oversee AI development and deployment. This may involve designating a lead executive, such as a chief AI ethics officer, who is accountable for oversight, and establishing a cross-functional AI governance committee with stakeholders from legal, ethics, engineering, product and risk departments. Clear policies and procedures aligned with the act should be defined, and AI oversight should be integrated into the company's risk management and compliance frameworks. The board must have visibility into high-risk AI systems and readiness plans.


Strengthening risk management practices is another critical priority under the AI Act. Providers must establish risk management systems to identify, analyze, estimate and evaluate risks associated with high-risk AI systems throughout their life cycle. The C-suite should ensure the implementation of best practices, including conducting thorough risk and impact assessments (especially on protected groups); implementing controls to eliminate or mitigate identified risks; putting in place processes for continuous monitoring of AI systems post-deployment; establishing protocols for swift corrective actions if risks materialize; and securing sufficient liability insurance coverage for potential AI harms.
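
The risk register at the heart of such a system can start simply. The sketch below is a minimal illustration; the scoring method, levels and example entries are assumptions, and a real program would follow the estimation methodology defined in its own risk policy.

```python
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Risk:
    description: str
    likelihood: Level
    severity: Level
    mitigations: list[str] = field(default_factory=list)

    def score(self) -> int:
        # Simple likelihood-by-severity product; a real program would use
        # the estimation methodology defined in its risk management policy.
        return self.likelihood.value * self.severity.value

register = [
    Risk("Disparate error rates across protected groups",
         Level.MEDIUM, Level.HIGH,
         mitigations=["stratified evaluation", "pre-release bias audit"]),
    Risk("Model drift after deployment",
         Level.HIGH, Level.MEDIUM,
         mitigations=["continuous performance monitoring"]),
]
for risk in sorted(register, key=lambda r: r.score(), reverse=True):
    print(f"[score {risk.score()}] {risk.description} -> {risk.mitigations}")
```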

The act's emphasis on transparency and accountability necessitates enhanced documentation and record-keeping practices. High-risk AI systems require detailed technical documentation on their development and functionality to demonstrate conformity with the act's requirements. The C-suite should oversee initiatives to maintain comprehensive documentation of datasets, models, testing and human oversight measures, and to automatically log events during the system life cycle to ensure traceability.
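
One way to satisfy the logging expectation is an append-only audit trail of lifecycle events. The following sketch uses Python's standard logging module; the event names, fields and call sites are hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit log; a production system would ship these records
# to tamper-evident, access-controlled storage with retention policies.
audit = logging.getLogger("ai_audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("ai_audit.log"))

def log_event(system_id: str, event: str, **details) -> None:
    """Record one lifecycle event (prediction, override, retraining, ...)."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,
        **details,
    }))

# Hypothetical call sites:
log_event("resume-ranker-v3", "prediction",
          model_version="3.1.4", input_hash="sha256:...", output="shortlisted")
log_event("resume-ranker-v3", "human_override",
          reviewer="hr-oversight-team", new_output="rejected")
```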

Secure, resilient IT systems must be implemented to organize and retain the required documentation, and data governance frameworks should be established to ensure data quality and integrity. Streamlining documentation procedures can help efficiently generate the required deliverables for conformity assessments.

Meaningful human oversight is another cornerstone of the EU AI Act. High-risk AI systems must enable "effective" human oversight, and the C-suite must institute governance to ensure that this is not merely a box-ticking exercise but a substantive safeguard. This involves carefully defining oversight roles and responsibilities; empowering human overseers with the authority to override AI system outputs; providing oversight personnel with necessary training, resources and system knowledge; implementing checks to verify that human oversight is consistently performed; and fostering an organizational culture that values and listens to human input over AI.
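
In code, one common pattern for substantive oversight is a gate that routes low-confidence AI outputs to a human whose decision is final. The sketch below illustrates the idea; the confidence threshold and the callables are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str
    confidence: float

def with_human_oversight(
    ai_decide: Callable[[dict], Decision],
    human_review: Callable[[dict, Decision], Decision],
    confidence_floor: float = 0.85,
) -> Callable[[dict], Decision]:
    """Wrap an AI decision function so low-confidence outputs go to a human."""
    def decide(case: dict) -> Decision:
        proposal = ai_decide(case)
        if proposal.confidence < confidence_floor:
            # The reviewer sees the case and the AI's proposal, and their
            # decision is final -- real override authority, not a rubber stamp.
            return human_review(case, proposal)
        return proposal
    return decide

# Hypothetical wiring:
# decide = with_human_oversight(ai_decide=model_predict, human_review=send_to_reviewer)
```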

Investing in transparency and explainability is crucial for compliance with the act's disclosure requirements for high-risk AI systems. The C-suite should champion organization-wide initiatives to implement transparency by design in AI system interfaces; use best-of-breed techniques to improve AI explainability while protecting intellectual property; validate that AI systems provide the required information to deployers and affected persons; train personnel to communicate transparently about AI in conformance with the act; and publish voluntary transparency reports demonstrating responsible AI practices.
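
As one illustration of such an explainability technique, the sketch below computes permutation importance with scikit-learn on synthetic data. The act does not mandate any particular method, and the model and data here are placeholders.

```python
# Permutation importance is one common, model-agnostic way to show which
# inputs drive a model's predictions (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(range(X.shape[1]), key=lambda i: result.importances_mean[i],
                reverse=True)
for i in ranked:
    print(f"feature_{i}: mean importance {result.importances_mean[i]:.3f}")
```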

Robust testing and post-market monitoring are essential for high-risk AI systems under the act. The C-suite must ensure that the organization invests in state-of-the-art testing methodologies and tooling, conducts testing in real-world conditions through regulatory sandboxes where possible, institutes KPIs and metrics for continuous monitoring of real-world AI performance, participates in the development of harmonized standards for testing and performs ongoing research to identify AI system limitations and failure modes proactively.
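
A minimal version of such continuous monitoring is a rolling-window KPI with an alert hook, as sketched below. The window size, accuracy threshold and alert action are illustrative assumptions, not prescribed values.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window accuracy check for a deployed model.

    Window size and threshold are illustrative placeholders; a real
    program would tie them to the KPIs in its post-market monitoring plan.
    """

    def __init__(self, window: int = 500, min_accuracy: float = 0.90):
        self.outcomes = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, prediction, ground_truth) -> None:
        self.outcomes.append(prediction == ground_truth)
        if (len(self.outcomes) == self.outcomes.maxlen
                and self.accuracy() < self.min_accuracy):
            self.alert()

    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes)

    def alert(self) -> None:
        # Hook into incident response here: page the oversight team,
        # open a corrective-action ticket, consider rollback.
        print(f"ALERT: rolling accuracy {self.accuracy():.2%} below threshold")
```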

Given the act's requirements down the AI value chain, securing the AI supply chain is another critical task for the C-suite. This involves conducting due diligence on AI vendors' conformance with the act; including compliance provisions in contracts with AI providers and deployers; obtaining attestations and artifacts from suppliers proving their systems' conformity; coordinating human oversight plans with vendors for third-party AI systems; and educating procurement teams on how to evaluate AI systems against the act's standards.
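
A lightweight way to track supplier attestations is a simple checklist record per vendor system, as in the hypothetical sketch below; the fields and vendor names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class VendorAttestation:
    vendor: str
    system: str
    conformity_assessment: bool  # conformity documentation / CE marking provided?
    technical_docs: bool         # instructions for use and technical documentation?
    oversight_plan: bool         # human-oversight responsibilities agreed?
    contract_clauses: bool       # act compliance written into the contract?

    def gaps(self) -> list[str]:
        """Return the checklist items still outstanding."""
        return [name for name, value in vars(self).items()
                if isinstance(value, bool) and not value]

attestation = VendorAttestation(
    "Acme AI", "screening-api",
    conformity_assessment=True, technical_docs=True,
    oversight_plan=False, contract_clauses=False,
)
print(f"{attestation.vendor}: outstanding items -> {attestation.gaps()}")
```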

Engaging constructively with policymakers is essential in this new era of AI regulation. To help shape policy in a pro-innovation direction, the C-suite should participate in multi-stakeholder forums defining guidelines and best practices under the act, respond to regulatory consultations with evidence-based input on technical and economic feasibility, proactively engage regulators to address questions and demonstrate compliance plans, contribute to the development of harmonized AI standards that become de facto requirements and publicly support baseline rules for responsible AI development.

Finally, complying with the EU AI Act's detailed requirements demands fostering a culture of responsible, human-centric AI development throughout the organization. The C-suite must lead by example in promoting systematic consideration of AI's societal impacts (not just business benefits); proactive identification and mitigation of AI risks and harms; openness, transparency and accountability around AI practices; diversity, non-discrimination and fairness as core AI principles; and ongoing workforce training and awareness-building on trustworthy AI.

By taking these comprehensive steps, C-suite leaders can position their organizations to meet the EU AI Act's demanding requirements and seize the opportunity for responsible innovation in the age of AI. Compliance should be viewed not as a mere box-checking exercise but as an opportunity to adopt a fundamentally new paradigm for AI development that places ethics, responsibility and societal impact at the center. Businesses that embrace this shift will be well-positioned to thrive under the act, building robust, transparent and accountable AI systems that earn the trust of citizens and customers alike. The C-suite holds the key to writing this new chapter of artificial intelligence, creating value for both business and society.

Key Takeaways

Key Takeaway 1: Assess AI Usage and Risks

  • Action: Map current and planned AI systems, identifying potentially high-risk applications.

  • Implementation: Conduct a comprehensive audit of AI systems, analyzing data practices, human oversight, transparency and testing.

  • Action: Identify gaps between current practices and the EU AI Act's requirements.

  • Implementation: Create a gap analysis report and prioritize areas for improvement.

Key Takeaway 2: Implement Governance Structures

  • Action: Designate a lead executive accountable for AI oversight.

  • Implementation: Appoint a chief AI ethics officer or assign responsibility to an existing C-suite member.

  • Action: Establish a cross-functional AI governance committee.

  • Implementation: Include stakeholders from legal, ethics, engineering, product and risk departments in the committee.

  • Action: Define clear policies and procedures aligned with the EU AI Act.

  • Implementation: Develop and communicate AI governance policies, integrating them into existing risk management and compliance frameworks.

Key Takeaway 3: Strengthen Risk Management

  • Action: Implement best practices for AI risk management.

  • Implementation: Conduct thorough risk and impact assessments, implement controls to mitigate risks and establish continuous monitoring processes.

  • Action: Secure sufficient liability insurance coverage for potential AI harms.

  • Implementation: Review existing insurance policies and acquire additional coverage as needed.

Key Takeaway 4: Enhance Documentation and Record-Keeping

  • Action: Maintain comprehensive documentation of AI systems.

  • Implementation: Establish processes to document datasets, models, testing and human oversight measures.

  • Action: Implement secure, resilient IT systems for documentation management.

  • Implementation: Invest in tools and infrastructure to organize, retain and retrieve required documentation efficiently.

Key Takeaway 5: Ensure Meaningful Human Oversight

  • Action: Define clear roles and responsibilities for human oversight.

  • Implementation: Assign and train personnel for AI oversight roles, empowering them to override AI system outputs when necessary.

  • Action: Foster an organizational culture that values human input over AI.

  • Implementation: Communicate the importance of human oversight and create channels for personnel to raise concerns about AI systems.

Key Takeaway 6: Invest in Transparency and Explainability

  • Action: Implement transparency by design in AI system interfaces.

  • Implementation: Work with AI development teams to incorporate transparency features and information disclosure into AI systems.

  • Action: Train personnel to communicate transparently about AI.

  • Implementation: Develop training programs on AI transparency and the EU AI Act's requirements for relevant personnel.

Key Takeaway 7: Robust Testing and Monitoring

  • Action: Invest in state-of-the-art testing methodologies and tooling.

  • Implementation: Allocate resources for acquiring and developing advanced AI testing tools and methodologies.

  • Action: Participate in the development of harmonized standards for testing.

  • Implementation: Engage with industry consortia and standards bodies to contribute to the development of AI testing standards.

Key Takeaway 8: Secure the AI Supply Chain

  • Action: Conduct due diligence on AI vendors' conformance with the EU AI Act.

  • Implementation: Develop vendor assessment criteria and processes to evaluate AI suppliers' compliance with the act.

  • Action: Include compliance provisions in contracts with AI providers and deployers.

  • Implementation: Work with legal teams to incorporate EU AI Act compliance requirements into vendor contracts and SLAs.

Key Takeaway 9: Foster an AI Governance Culture

  • Action: Train key personnel on AI governance within the EU AI Act's legal framework.

  • Implementation: Appoint an AI governance lead analogous to the GDPR's Data Protection Officer and secure appropriate training for them, such as the IAPP's AI Governance Professional (AIGP) training and certification.

  • Action: Lead by example in promoting responsible, human-centric AI development.

  • Implementation: Incorporate AI ethics and responsibility into corporate values and leadership communications.

  • Action: Provide ongoing workforce training on trustworthy AI.

  • Implementation: Develop and deliver comprehensive training programs on AI ethics, transparency and the EU AI Act for all relevant personnel.

About the Author

Seth Dobrin

Seth Dobrin is the founder and CEO of Qantm AI and the former chief AI officer at IBM.
