AI Legislation and Governance
The emerging AI legal web brings significant complexity for businesses looking to leverage this powerful new technology
September 16, 2024
As AI technology becomes more widely adopted, additional legislation and guidelines are emerging to protect personal data and ensure models are developed responsibly, with a focus on risk management and bias prevention.
Nearly 40 countries are currently working on AI-related legal frameworks, addressing new use cases for AI and expanding on existing laws such as the General Data Protection Regulation (GDPR). This emerging legal web brings significant complexity for businesses looking to leverage this powerful new technology, whether to make internal decision-making more efficient or to bring new and better solutions to market.
Key Considerations for Compliance
Existing Laws
How well AI applications function depends in large part on the data used to train the large language models (LLMs) powering them. Businesses and regulators share concerns about protecting personal privacy and ensuring that proprietary data is secure. As a result, multiple areas of existing law already govern how user data can be used. In the United States and other countries, unfair and deceptive trade practices laws may apply to the use of AI even if the law doesn't reference the technology explicitly. Regulators enforcing existing data protection laws like the GDPR and the California Consumer Privacy Act (CCPA) are taking a fresh look at AI and how personal data is used to train models, laying out security and data protection requirements and requiring transparency around how AI may use personal data.
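As a small illustration of the data hygiene these laws push toward, the sketch below shows a naive pass that masks obvious personal identifiers before text enters a training corpus. The regular expressions are deliberately simplistic assumptions for illustration; they are no substitute for a real anonymization pipeline.

```python
import re

# Deliberately simple patterns, for illustration only; production-grade
# anonymization needs far more robust detection (names, addresses, IDs).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def mask_pii(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before the
    text is added to a training corpus."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(mask_pii("Reach Jane at jane.doe@example.com or 555-867-5309."))
# -> Reach Jane at [EMAIL] or [PHONE].
```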
“Non-Binding” Guidelines
In March 2024, the United Nations General Assembly adopted a resolution promoting "safe, secure and trustworthy" AI. The resolution encourages countries to safeguard human rights, protect personal data and monitor AI for risks. While non-binding, it provides principles to guide AI's development and use.
In October 2023, President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Like the UN resolution, this executive order provides direction and guidance without imposing penalties for non-compliance. An executive order is often viewed as a precursor to future legislation and to standards that organizations may eventually need to follow. At a more granular level, individual federal agencies are releasing their own guidelines, from the Office of Management and Budget's guidance on agency use of AI to the Securities and Exchange Commission's (SEC) scrutiny of AI-related conflicts of interest in finance.
Legislation with Teeth
As with data privacy and the GDPR, Europe is far ahead of the U.S. on legislation, this time with the Artificial Intelligence Act (EU AI Act), the world's first standalone law focused solely on AI development and use. The law went into effect on August 1, 2024, and phases in compliance obligations over the following two years.
The EU AI Act classifies AI systems into four levels of risk: minimal, limited, high and unacceptable. Unacceptable uses include manipulating human behavior to circumvent free will through subliminal techniques, biometric categorization based on sensitive characteristics, social scoring by governments and untargeted scraping of facial images from CCTV footage to create facial recognition databases. These uses are prohibited, with limited exceptions for law enforcement. High-risk systems, such as those used in critical infrastructure, finance, healthcare, justice and democratic processes, must adhere to strict safety, transparency and data governance requirements. Violations can result in fines of up to 7% of a company's annual global revenue (compared to the GDPR's maximum penalty of 4%).
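To make the tiered structure concrete, here is a minimal sketch of how an organization might tag its AI systems by EU AI Act risk level. The four tier names come from the Act itself, but the example use cases and the `classify` helper are illustrative assumptions, not an official mapping.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"            # e.g., spam filters: no new obligations
    LIMITED = "limited"            # e.g., chatbots: transparency duties
    HIGH = "high"                  # e.g., credit scoring: strict governance
    UNACCEPTABLE = "unacceptable"  # e.g., social scoring: prohibited

# Illustrative mapping only -- a real classification requires legal review
# against the Act's annexes, not a keyword lookup.
EXAMPLE_USE_CASES = {
    "email spam filtering": AIActRiskTier.MINIMAL,
    "customer service chatbot": AIActRiskTier.LIMITED,
    "loan applicant credit scoring": AIActRiskTier.HIGH,
    "government social scoring": AIActRiskTier.UNACCEPTABLE,
}

def classify(use_case: str) -> AIActRiskTier:
    """Look up a use case in the illustrative table, defaulting to HIGH so
    unknown systems get the strictest review rather than the laxest."""
    return EXAMPLE_USE_CASES.get(use_case, AIActRiskTier.HIGH)

for case in EXAMPLE_USE_CASES:
    print(f"{case}: {classify(case).value}")
```

Defaulting unknown systems to the high-risk tier is a deliberately conservative design choice: under a phased compliance regime, it is safer to over-review than to under-review.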
In the U.S., the federal government is far from passing comprehensive AI legislation. Instead, action is being taken at the state level, adding to the complexity for businesses.
Strategies for Compliance with Shifting Laws
Fifty states, fifty AI laws? What can you do to prepare for a regulatory environment that is playing catch-up with new technology? Every agency and regulator wants to stake out a position and be seen as on top of the problem. So, what commonalities run through these emerging initiatives to protect end users?
Privacy. Data protection. Testing and transparency. Risk mitigation.
1. Stay informed about AI regulations: Regularly monitor changes in AI legislation and guidelines at local, regional, national, and international levels.
2. Implement strong data privacy and security measures: Develop and implement robust data privacy and security measures to protect personal data.
3. Promote transparency and explainability: Ensure AI systems are transparent and explainable, providing clear information about data collection, usage and processing.
4. Conduct risk assessments: Identify the risks associated with AI systems, assess compliance with applicable regulations and adopt a risk mitigation framework (see the sketch after this list for one way to structure such an assessment).
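As a concrete starting point for step 4, the sketch below shows one way to structure a lightweight AI system inventory and gap check in Python. The record fields and the specific checks are hypothetical, intended only to illustrate the kind of record-keeping these assessments require.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one AI system under review."""
    name: str
    purpose: str
    processes_personal_data: bool
    jurisdictions: list[str] = field(default_factory=list)
    risk_tier: str = "unassessed"
    mitigations: list[str] = field(default_factory=list)

def open_questions(record: AISystemRecord) -> list[str]:
    """Flag gaps a compliance review would need to close; these specific
    checks are illustrative, not a legal checklist."""
    issues = []
    if record.processes_personal_data and "impact assessment" not in record.mitigations:
        issues.append("Personal data is processed without a documented impact assessment.")
    if record.risk_tier == "unassessed":
        issues.append("No risk tier has been assigned yet.")
    if "EU" in record.jurisdictions and record.risk_tier == "high" and not record.mitigations:
        issues.append("High-risk system operating in the EU with no recorded mitigations.")
    return issues

chatbot = AISystemRecord(
    name="support-assistant",
    purpose="customer service chatbot",
    processes_personal_data=True,
    jurisdictions=["EU", "US"],
)
for issue in open_questions(chatbot):
    print("TODO:", issue)
```

Even a simple inventory like this gives legal and engineering teams a shared artifact to review as the rules shift, which is the point of the strategies above.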
By focusing on these strategies, organizations can better navigate the evolving regulatory landscape and manage compliance risks while using AI ethically and responsibly.