An intersectional regulatory approach to AI is needed to balance innovation with profitability
Tech giants have spent over $30 billion acquiring artificial intelligence (AI) startups while facing antitrust concerns. In response, states rushed to fill the federal regulatory void, introducing nearly 700 AI-related bills in 2024, up from 191 in 2023. This has fueled growing concern that Big Tech is shaping regulations in its favor to limit competition and discourage new market entrants.
Additionally, scrutiny of AI's return on investment (ROI) has increased, with many companies shifting focus to high-impact projects and cutting less productive initiatives. This scrutiny reflects an industry-wide push to balance innovation with profitability, as economic conditions lead investors and stakeholders to demand clearer value propositions for AI-related spending.
To address this, a more intersectional regulatory approach to AI is needed, one that balances economic growth with cyber resilience, national security, and equitable outcomes. Such an approach, paired with regulatory clarity, is essential to promoting fairness and transparency.
As the AI landscape evolves, companies face the challenge of balancing rapid innovation with regulatory compliance and ethical responsibility.
With generative AI, it can be tempting to quickly create an “acceptable AI use policy” based on what Big Tech organizations deploy, paste it into a policy management system, check the box, and call it done. This almost inevitably leaves the policy unread or misunderstood by employees. It is more effective to establish a core set of principles that lays the foundation for meaningful AI policies.
When considering policy objectives, taking an intersectional approach to AI governance is crucial. Companies should equip themselves with solutions to identify and map regulatory obligations, implement best-practice controls, and responsibly manage AI. This approach supports compliance as global regulations evolve, accelerates AI adoption, and delivers tangible value.
Boards should also consider how AI will affect cybersecurity, IT security, and enterprise risk. Given that 36% of board directors identified generative AI as the most challenging issue to oversee, boards must invest in specialized training and education to understand the risks involved.
Knowing what questions to ask and collecting the right data is the first step. Boards then need to turn that information into actionable insight, fill in the gaps, respond to emerging issues, and determine a strategy. To establish an approach to AI governance, it’s crucial to conduct an internal assessment and adopt a robust risk management framework. Companies can draw on practices and standards from the EU AI Act, which is risk-based and balances governance with innovation. Appointing a chief AI officer is one way to oversee AI governance and help bridge the gap between early adopters of AI and leadership.
With Big Tech’s increasing impact on the regulatory landscape, steering an organization toward sustainable, trustworthy practices and shoring up the internal knowledge base for ongoing risk management and oversight is critical. This means factoring AI into IT risk management, as well as the broader enterprise risk monitoring and strategy. The right technology will be invaluable for helping boards stay on top of the details for timely, transparent insight.