The Administration Is Already Governing AI Development. It Just Doesn’t Have a Strategy.

Last week the White House released its National Policy Framework for Artificial Intelligence, a four-page set of legislative recommendations for Congress. The framework addresses real issues, including protecting children online, preventing AI-enabled fraud, managing energy costs from data centers, clarifying copyright questions, and preempting the patchwork of state AI laws.

But the framework is silent on whether and how to govern the trajectory of AI development itself. It says nothing about frontier model safety evaluation, responsible scaling, international coordination on development risks, or verification mechanisms. It explicitly recommends against creating a new federal AI rulemaking body and shields developers from liability for third-party misuse of their models.

On its own, the document suggests that the administration’s position is that AI applications need some guardrails while AI development should be left to the market and accelerated as fast as possible. If the administration actually adhered to that stance, it would at least represent a coherent strategy. 

But the administration’s own actions tell a different story. Across multiple policy domains, the government is already making consequential decisions about the direction of AI development. It just isn’t calling them governance.

Export controls are development governance. Deciding which countries and companies can access frontier compute is an attempt to influence who can develop advanced AI and at what pace. The escalating chip restrictions since 2022, followed by partial reversals and revenue-sharing arrangements, represent an active, ongoing attempt to shape the global development landscape.

Dismantling the AI Safety Institute was a development governance decision. Renaming it the Center for AI Standards and Innovation and reorienting its mission from safety evaluation to deregulation was a judgment that pre-deployment evaluation of frontier models is not a government function.

Government procurement decisions, particularly from the Department of Defense, shape what AI developers build and what safety commitments they maintain. The DOD is the single largest customer for many advanced technologies. And when it uses its procurement power to punish or reward specific AI development choices, that shapes development norms industry-wide, not just for the companies holding the contracts. The recent designation of Anthropic as a supply chain risk after the company maintained restrictions on mass domestic surveillance and fully autonomous weapons illustrates the point. Whatever one thinks about the merits of Anthropic’s position, the episode sent a signal to every AI developer about which safety commitments will survive contact with political pressure and which won’t. That is development governance.

Infrastructure permitting for AI data centers, CHIPS Act funding allocations, and energy policy decisions all shape the scale, location, and pace of frontier development. Each involves tradeoffs that affect the trajectory of the technology.

None of this is to say these decisions are individually wrong. Some of them may be exactly right. But each decision sits on its own. Export controls are made by one set of officials responding to one set of pressures. The AI Safety Institute was dismantled by another. The Anthropic dispute played out through procurement and social media. Energy permitting follows its own logic. No framework connects these decisions or makes it possible to evaluate whether they add up to a coherent development trajectory or pull in contradictory directions. The administration does not have a strategy for AI development. It has a collection of ad hoc responses to immediate pressures.

Evaluating them together requires analytical frameworks, and for AI development those frameworks don’t yet exist. We can’t answer even basic questions with any rigor. Do safety frameworks actually constrain developer behavior, or do they just provide cover? Under what conditions does international coordination improve outcomes, and when does it just add transaction costs? What even is the desired outcome? Which risks are better managed by the competitive process itself? When do governance interventions backfire? These decisions will keep being made whether the tools exist or not. The question is whether anyone starts building them.

In a new paper, I propose a structural solution to this analytical gap. The AI governance conversation has spent years debating whether development should be constrained, accelerated, or left alone. But that debate cannot be resolved without the intellectual infrastructure to test the competing positions. I argue we need a new, independent research institution designed to do what government agencies and corporate labs structurally cannot, which is to build a rigorous, empirical foundation for governing frontier AI.

The White House framework addresses the questions we already have the tools to think about. The harder questions, which the administration is already answering through its uncoordinated actions, are the ones where we are flying blind. Building the tools to answer them is an investment nobody is making, but it is the only way to turn ad hoc reactions into actual strategy.

Scott Wallsten is President and Senior Fellow at the Technology Policy Institute and also a senior fellow at the Georgetown Center for Business and Public Policy. He is an economist with expertise in industrial organization and public policy, and his research focuses on competition, regulation, telecommunications, the economics of digitization, and technology policy. He was the economics director for the FCC's National Broadband Plan and has been a lecturer in Stanford University’s public policy program, director of communications policy studies and senior fellow at the Progress & Freedom Foundation, a senior fellow at the AEI – Brookings Joint Center for Regulatory Studies and a resident scholar at the American Enterprise Institute, an economist at The World Bank, a scholar at the Stanford Institute for Economic Policy Research, and a staff economist at the U.S. President’s Council of Economic Advisers. He holds a PhD in economics from Stanford University.
