Governing Disruption: 2026 AI Policy Outlook


Two months in, it's clear that Glen Echo's forecasters, like so many, were correct in predicting that AI would be a core focus for policymakers at all levels in 2026. It's difficult to say exactly where we're headed, given the pace of technological advancement and the rapid introduction of new legislation, executive orders and litigation both inside and outside the Beltway. What we do know is that there is overwhelming bipartisan support (nearly seven in ten) for further regulation around AI. Voters are clearly sending a message that the technology needs guardrails, and as midterm elections approach, policymakers can't afford to ignore it.

Against this backdrop, Glen Echo hosted a timely virtual discussion, “Governing Disruption: 2026 AI Policy Outlook,” moderated by Brad Williamson, Senior Vice President at Glen Echo Group. The webinar featured Republican pollster Alex Lundry, President of Red Bud Consulting and Director at Centerline, and Heather West, Senior Director of Cybersecurity at Venable, where she leads the Alliance for Trust in AI. The trio explored public sentiment around AI, legislative priorities and the unresolved questions that will shape AI governance in 2026 and beyond.

A Rare Point of Consensus on Regulation

A major takeaway from the discussion was that support for AI regulation is strong and bipartisan, with recent Centerline polling showing that 59 percent of respondents believe regulation should be implemented. Lundry emphasized that Democrats, Republicans and Independents all agree that some form of AI guardrails is necessary. 

But that support for regulation is rooted in uncertainty. Voters are concerned about AI’s potential consequences, including job loss, environmental impact and other social disruptions, but many lack an understanding of what AI regulation would entail. As Lundry noted, today’s debate is often driven more by anxiety than specificity.

Data Center Debates Aren’t Going Anywhere 

As AI deployment accelerates, questions have emerged around the expansion of data centers and their impact on energy prices and the environment. We’ve seen policymakers quickly shift from touting major AI lab and developer investments to pausing data center tax incentives and, in some cases, halting development completely.  

Lundry explained that across the country, politicians are hearing pushback from constituents on energy affordability: “a lot of consumers feel helpless when it comes to rising utility costs,” though many tie higher bills to inflation rather than to AI data centers specifically.

Companies have recognized the political sensitivity of this issue and are beginning to proactively declare that data centers will “pay their fair share of costs,” which could impact the next phase of the massive AI infrastructure buildout we’re seeing.


The Road to a National AI Framework

The panelists addressed one looming question for AI policy: How realistic is the development of a national AI framework in the United States? West argued that the U.S. is already developing a national AI framework via the states, which continue to pursue their own regulatory paths despite recent federal action ranging from an Executive Order to congressional proposals on safety, transparency and whistleblower protections. Lundry highlighted that the outcome is uncertain because of this tension between state and federal authority. 

Kids’ Online Safety Is the Top Priority

When it comes to policymaker priorities, protecting children online remains the issue that stands above the rest. Lundry described AI’s effects on children as the top concern, pointing to recent high-profile cases involving AI chatbots. West characterized child safety as “a visceral issue” that will consistently draw legislative attention at the federal, state, local and international levels.

Panelists framed the concerns in two categories: exposure to harmful or inappropriate content and broader cognitive impacts. For now, the former is the primary concern.

The Data Privacy & Compliance Puzzle

While the United States still lacks a federal privacy law, recent privacy debates have necessarily shifted to include AI, given that these models require massive amounts of data. West noted that concerns about ownership and intellectual property are also significant, but are currently taking a backseat to data privacy. 

Lundry reflected that historically, people have been opposed to sharing personal data; now, “there's a clear exchange of value, that [the consumer] is getting something valuable in return” from AI tools. He also pointed out that many AI companies are speaking more proactively about privacy to enter highly regulated industries such as finance and healthcare. 

From a compliance standpoint, West emphasized that existing regulations still apply to AI, particularly in highly regulated sectors, though there remains ambiguity in translating older regulatory frameworks to new AI tools. Lundry also pointed out that enterprise adoption is accelerating faster than regulatory and legislative clarity.

Who Do We Trust on AI?

During the Q&A, an attendee’s question about message credibility spurred conversation around the trusted voices in AI debates. Lundry shared polling insights on who the public trusts to speak credibly about data centers and AI infrastructure. The most trusted voices were independent, university-appointed experts and federal government scientists and researchers. Members of Congress ranked at the bottom due to “an astonishing lack of trust in our government officials,” while CEOs and private sector leaders fell in the middle.

The takeaway is clear: neutrality and perceived independence carry significant weight in shaping public opinion on AI. As West summarized, “it’s time to bring in the nerds.”

This was the first in a series of virtual discussions Glen Echo is hosting exploring how regulation is shaping AI. If you have burning AI questions you’d like to hear experts address in our next conversation, reach out to us.

To stay up to date and be the first to know when our next webinar is scheduled, sign up here.