The State of AI Regulation

By Ashley Durkin-Rixey and Andrea O’Neal

At this point, we’re all accustomed to AI dominating both national and international headlines. It’s not uncommon to hear about the latest industry player rolling out a new form of generative AI, or to see think pieces about its ethical considerations. It’s quite clear that we’re in a global AI frenzy. So much so that even President Biden, in his State of the Union Address, briefly mentioned his Administration’s commitment to the safe, secure and trustworthy development and use of AI in the United States.

Given the president’s emphasis on the promise of AI, we’re reflecting on the state of AI regulation in the US. How much progress are we making toward robust AI regulation and adoption?

The Executive Level

Since the White House issued its Executive Order (EO) on AI in October 2023, the Biden Administration and federal agencies at large have been at the forefront of recent AI regulatory developments.

In response to the EO, the U.S. Office of Management and Budget (OMB) released a draft memo addressing federal agencies’ use of AI and the government’s procurement of the technology. Both the Federal Trade Commission (FTC) and the Department of Justice (DOJ) have publicly stated that mergers and acquisitions, data security, privacy and the protection of civil liberties are areas for future guidance, and they are working with other agencies to provide a comprehensive overview. Moreover, at a recent FTC AI Summit, Chair Lina Khan announced an inquiry into generative AI investments and partnerships involving Amazon, Anthropic, Google, Microsoft and OpenAI.

Spurred by the federal government’s momentum, we’re seeing many companies make voluntary commitments and join government-led groups like the Consortium formed to support the U.S. AI Safety Institute (USAISI) with research and expertise.

The National Institute of Standards and Technology (NIST) is expanding its AI Risk Management Framework (AI RMF) to cover generative AI. Per the EO, the expanded framework will be used throughout the federal government, and policymakers increasingly cite it as a foundation for proposed AI regulation.

Congress

Last year, the Senate’s “AI Insight Forum” series created a media splash and drew an influx of AI stakeholders to the Beltway. Billed as a way to help Senators learn about and prepare for potential AI legislation, the forums created a lot of, dare we say, conversation.

Several key actions have happened so far this year:

  • As a result of the AI Insight Forums, just last week Sen. Mike Rounds (R-SD) – one of four lawmakers on the Senate’s bipartisan artificial intelligence working group – said the group plans to issue a report on possible AI legislation by the “end of March.”
  • In January 2024, Senate Commerce Committee Chair Maria Cantwell (D-WA) indicated she would soon introduce a series of bipartisan bills to address AI risks and spur innovation in the industry.
  • In late February, the House of Representatives announced the formation of its own AI Task Force, chaired by Reps. Jay Obernolte (R-CA-23) and Ted Lieu (D-CA-36).
    • The Task Force’s first priority is passing the CREATE AI Act, which would make the National Science Foundation’s National AI Research Resource (NAIRR) pilot a fully funded program.
  • Rep. Anna Eshoo, a member of the Task Force, brought Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), as her guest at the State of the Union address. Dr. Li worked with Rep. Eshoo to establish the NAIRR pilot program.

States

More than 91 AI-related bills were introduced in state houses in 2023, and 2024 may surpass that number: lawmakers in 20 states have already introduced 89 AI-related bills or resolutions since the year began. State lawmakers are in a rare position to advance legislation quickly thanks to a record number of state governments under single-party control, which lets them avoid the partisan gridlock plaguing their federal counterparts.

A Look Ahead

While there is AI regulatory movement at all levels of government, there is a lack of coordination among key policymakers and government agencies.

Why does this matter? Without a national standard for AI, consumers and businesses alike could face problems if states continue to create a patchwork of laws regulating AI.

While there are murmurs and whispers about adopting a national federal privacy law to help address AI’s privacy implications and protect civil rights, there’s been no momentum on a bipartisan bill since the American Data Privacy and Protection Act (ADPPA) was introduced two years ago.

While policymakers build their knowledge, and startups and larger tech companies continue to push the envelope on AI technology and platforms, there needs to be a balance between regulation and innovation. The tug-of-war among enacting regulation, consumer adoption and technological innovation creates a plethora of issues touching on privacy, data security, copyright, intellectual property (IP) and more.

During these consequential times in the digital world, we’re helping our clients stay abreast of the changing landscape. For more of our thoughts on the state of AI, check out our CEO Maura Corbett’s piece for Tech Policy Press and our Summer of AI series archive.

To learn how the Glen Echo Group can help support your organization’s AI public affairs efforts, contact us.