Local Governments Take the Lead in Regulating AI

U.S. cities are taking the lead in regulating AI, ahead of the federal government's efforts. But what are the implications?

Sascha Brodsky, Contributor

October 24, 2023


At a Glance

  • New York, Connecticut and California are among at least 10 states working to regulate AI ahead of the federal government.

The federal government is lagging in efforts to regulate AI, so cities nationwide are stepping in to fill the gap.

New York City recently introduced an AI strategy aimed at assessing AI tools and their associated risks, enhancing the proficiency of city staff in AI, and promoting the responsible adoption of these technologies. It is part of a growing effort to devise local laws for AI and better integrate the technology into government.

“How this plays out could have meaningful implications for AI policy both federally and in other municipalities,” Sarah Shugars, an assistant professor of communication at Rutgers University, said in an interview. “Can governments merely say they will use AI responsibly, or will citizens demand actions that back up those words?"

Putting AI on a local leash

In its 51-page AI strategy, New York City outlined a set of actions it will undertake to gain a deeper understanding of the technology and responsibly integrate it into city government in the coming years.

The initial phase outlined in the city's AI strategy involves the formation of an 'AI Steering Committee,' composed of stakeholders from various city agencies. The document delves into nearly 40 specific initiatives, with 29 slated for initiation or completion in the coming year. The city also has committed to publishing an annual report on AI progress to inform the public about the plan's advancements and implementation.


“While artificial intelligence presents a once-in-a-generation opportunity to more effectively deliver for New Yorkers, we must be clear-eyed about the potential pitfalls and associated risks these technologies present,” said Mayor Eric Adams in a news release. “I am proud to introduce a plan that will strike a critical balance in the global AI conversation — one that will empower city agencies to deploy technologies that can improve lives while protecting against those that can do harm.”

AI is already facing regulation in the Big Apple. Under a law that came into effect this year, employers in New York City may not use AI-powered tools for hiring and promotion decisions unless those tools have undergone an independent bias audit and candidates are notified. The law covers "automated employment decision tools," meaning computer programs that use AI, statistics, or data analysis to substantially assist decisions about hiring and employment.

A nationwide push

New York is not alone in trying to regulate AI. There is no comprehensive federal AI law yet, but the White House released an AI bill of rights last year, focusing on protecting consumers when designing and using automated systems. The Biden administration plans to issue an executive order addressing AI soon, looking at existing laws and the need for new legislation. Meanwhile, the European Union nears final passage of its wide-reaching AI Act.


In the U.S., 10 states added AI rules to privacy laws for 2023, and others are considering doing the same, according to the Electronic Privacy Information Center. Some states are also examining AI's impact on areas like healthcare, insurance, and jobs. Connecticut is creating an AI bill of rights, according to StateScoop. Delaware is implementing the Personal Data Privacy Act, giving consumers control over automated decisions, and Washington, D.C., is working to prevent algorithms from making biased decisions.

New Jersey State Senator Doug Steinhardt has proposed a bill to revise the state’s identity theft law, incorporating provisions to address fraudulent impersonation through AI or deepfake technology. Voice-cloning scams, where fraudsters use AI to mimic a person's voice and contact relatives for money, are on the rise.

“As artificial intelligence tools have grown increasingly more powerful and available to the general public, they’ve opened the door for scammers to commit shockingly disturbing new crimes involving identity theft,” said Steinhardt in a news release.


California appeared ready to take action on AI at the start of the year, but any significant AI policy effort will likely need to wait until at least 2024. Lawmakers had suggested plans to combat algorithmic bias and provide privacy safeguards for residents, including the ability to opt out of AI systems.

Many national-level discussions center on how AI is used, and deployments in communities across the country often raise concerns, David Dunmoyer, the campaign director of Better Tech For Tomorrow at the Texas Public Policy Foundation, noted in an interview.

Take the self-driving car company Cruise, for example, which operates a fleet of autonomous vehicles in Austin.

“After numerous complaints from residents who felt imperiled by the underdeveloped technology, it triggered a probe by the National Highway Traffic Safety Administration,” he said. “Cities are becoming petri dishes for fledgling AI technologies, and the federal government is using these pilot programs to gauge their regulatory approach.”

However, some AI proponents worry that local regulations could stifle the industry.

"It feels as though regulation is inevitable,” Raj Kaur Khaira, chief operating officer at AutoGenAI, said in an interview. “But it remains to be seen what will be regulated. Will it be AI itself or just the use of it? Any regulation in this area needs to be proportionate to the risk they’re trying to protect against. As we saw during (Meta CEO) Mark Zuckerberg’s congressional hearing when a senator asked Zuckerberg how Facebook makes money; unfortunately, regulators and lawmakers don’t always understand the thing they are trying to regulate.”

About the Author(s)

Sascha Brodsky

Contributor

Sascha Brodsky is a freelance technology writer based in New York City. His work has been published in The Atlantic, The Guardian, The Los Angeles Times, Reuters, and many other outlets. He graduated from Columbia University's Graduate School of Journalism and its School of International and Public Affairs. 
