Building the Analytical Infrastructure for Governing Frontier AI Development

Artificial intelligence presents genuinely novel governance challenges. Its combination of rapid capability growth, omni-use applications, and nearly free proliferation is inconsistent with templates designed for other technologies that may have dangerous applications. Recent attempts to govern AI development have had mixed results, with many proposals vetoed, revoked, weakened, or still searching for enforcement mechanisms. The deeper problem is not political will. It is that the analytical foundations for thinking rigorously about AI governance do not yet exist. We lack even a shared vocabulary. The phrase “AI safety” means at least four different things across different communities, and as a result, summits, reports, and legislative proposals talk past each other.

This paper proposes building the missing analytical infrastructure through a dedicated, independent research institution with a long time horizon. The RAND Corporation’s role in nuclear strategy is a useful analogy, not because AI is comparable to nuclear weapons, but because AI, like nuclear technology in the late 1940s, presents strategic questions that existing intellectual frameworks cannot answer. Over roughly fifteen years, RAND and affiliated researchers produced the conceptual vocabulary (deterrence, crisis stability, second-strike capability) that made nuclear governance possible. AI governance has no comparable vocabulary and no institution dedicated to building one.

The proposed institution would be lean, with roughly fourteen resident scholars and an annual budget of approximately $12 million, closer in spirit to the Institute for Advanced Study, though much smaller, than to the “CERN for AI” proposals that envision billions in shared infrastructure. It would not need access to proprietary models or training data. Its questions are about actors, incentives, and institutions, not about what any particular frontier AI model can do. Its outputs would include assessments of whether safety frameworks actually constrain behavior, formal models of AI competition dynamics, structured scenario analyses for regulators, and post-incident analyses that produce shared facts.

The paper examines why no existing institution fills this role. Government-housed bodies are politically fragile (the U.S. AI Safety Institute was dismantled and repurposed in less than two years). University centers are vulnerable to institutional politics (Oxford closed the Future of Humanity Institute in 2024). Industry-funded bodies gravitate toward consensus. Each reason for failure informs the proposed design, which involves freestanding governance, an irrevocable endowment severing funder from research, geographic separation from political capitals, and scholar-driven rather than program-directed organization.

The paper tries to stress-test the idea. A “Why This Might Be Wrong” section considers whether governance risks increasing concentration, whether the object of control is too entangled with general-purpose infrastructure, whether existing institutions could simply be scaled, whether a distributed regime complex is preferable, and whether AI might be the kind of problem that resists institutional solutions entirely. What the objections cannot answer is the cost of inaction. The longer the wait, the more governance builds on foundations no one has tested, and the harder those foundations become to replace.

Scott Wallsten is President and Senior Fellow at the Technology Policy Institute and also a senior fellow at the Georgetown Center for Business and Public Policy. He is an economist with expertise in industrial organization and public policy, and his research focuses on competition, regulation, telecommunications, the economics of digitization, and technology policy. He was the economics director for the FCC's National Broadband Plan and has been a lecturer in Stanford University’s public policy program, director of communications policy studies and senior fellow at the Progress & Freedom Foundation, a senior fellow at the AEI-Brookings Joint Center for Regulatory Studies and a resident scholar at the American Enterprise Institute, an economist at The World Bank, a scholar at the Stanford Institute for Economic Policy Research, and a staff economist at the U.S. President’s Council of Economic Advisers. He holds a PhD in economics from Stanford University.
