Navigating Big Tech’s Influence on the AI Regulatory Landscape in 2025

An intersectional regulatory approach to AI is needed to balance innovation with profitability

Phil Lim, Director of product management, Diligent

December 20, 2024


Tech giants have spent over $30 billion acquiring artificial intelligence (AI) startups while facing antitrust concerns. In response, states rushed to fill the federal regulatory void, introducing nearly 700 AI-related bills in 2024, up from 191 in 2023. This has fueled growing concern that Big Tech is shaping regulations in its favor to limit competition and discourage new market entrants.

Additionally, scrutiny over AI's return on investment (ROI) increased, with many companies shifting focus to high-impact projects and cutting less productive initiatives. This scrutiny reflects an industry trend toward balancing innovation with profitability, as economic conditions cause investors and stakeholders to demand clearer value propositions from AI-related spending.

To address this, a more intersectional regulatory approach to AI is needed: one that balances economic growth with cyber resilience, national security, and equitable outcomes, and that promotes fairness and transparency through regulatory clarity.

Adopting an Intersectional Approach

As the AI landscape evolves, companies face the challenge of balancing rapid innovation with regulatory compliance and ethical responsibility.

With generative AI, it can be tempting to quickly draft an “acceptable AI use policy” modeled on what Big Tech organizations deploy, paste it into a policy management system, check the box, and call it done. This inevitably leads to a policy that employees leave unread or misunderstand. It is more effective to establish a core set of principles that lays the foundation for meaningful AI policies.


When considering policy objectives, taking an intersectional approach to AI governance is crucial. Companies should equip themselves with solutions to identify and map regulatory obligations, implement best-practice controls, and responsibly manage AI. This approach supports compliance as global regulations evolve, accelerates AI adoption, and delivers tangible value.

Boards should also consider how AI will impact cybersecurity, IT security, and enterprise risks. Given that 36% of board directors identify generative AI as the most challenging issue to oversee, boards must invest in specialized training and education to understand the risks involved.

Good Governance Is Clarity

Knowing what questions to ask and collecting the right data is the first step. Boards then need to turn that information into actionable insight, fill in the gaps, respond to emerging issues, and determine a strategy. To establish an approach to AI governance, it’s crucial to conduct an internal assessment and adopt a robust risk management framework. Companies can apply practices and standards from the EU AI Act, since it is risk-based and balances governance with innovation. Appointing a chief AI officer is one way to oversee AI governance and bridge the gap between early AI adopters and leadership.


With Big Tech’s increasing impact on the regulatory landscape, steering an organization toward sustainable, trustworthy practices and shoring up the internal knowledge base for ongoing risk management and oversight is critical. This means factoring AI into IT risk management, as well as the broader enterprise risk monitoring and strategy. The right technology will be invaluable for helping boards stay on top of the details for timely, transparent insight.

About the Author

Phil Lim

Director of product management, Diligent

Phil Lim is the director of product management at Diligent. He brings over 15 years of experience in client advisory and product management roles, with expertise in data analytics as well as audit, risk and compliance management. Phil is a frequent speaker at IIA, ISACA, and ACFE conferences.
