AI Isn’t Flooding FCC Comments (At Least Not Yet)

We ❤️ irony: This post was written entirely by Claude Opus 4.6 Extended based on Scott Wallsten’s prompts and Nathaniel Lovin’s work downloading all FCC proceedings and running them through Pangram.

There’s been plenty of worry that AI-generated comments will overwhelm regulatory proceedings, drowning out genuine public participation with machine-produced noise. To test whether that concern holds up in practice, we ran all the comments from two recent FCC proceedings through Pangram, an AI detection tool, to see how many were actually written by AI.

The answer: very few.

We analyzed comments from two proceedings: the IP Transition (WC Docket No. 25-304, 61 comments) and Space Modernization (SB Docket Nos. 25-305, 25-306, 83 comments). These are substantive regulatory proceedings that attracted comments primarily from industry, trade groups, and advocacy organizations rather than mass public comment campaigns.

What We Found

Of 144 total comments across both proceedings, 131 (91 percent) were classified by Pangram as “fully human written.” Only two comments, one in each proceeding, were flagged as “fully AI generated.” The rest fell somewhere in between, with small traces of AI detected but nothing suggesting wholesale AI authorship.

At a glance: 144 total comments; 131 (91 percent) fully human written; 13 with any AI signal; 2 fully AI generated.

Pangram analyzes text in overlapping “windows” and classifies each independently. This more granular analysis tells a similar story. In the IP Transition proceeding, just 45 of 972 text windows (4.6 percent) were flagged as AI-generated. In the Space proceeding, it was 11 of 919 windows (1.2 percent).
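Pangram’s exact window size and overlap are not something we configure, but the general technique is easy to illustrate. Here is a minimal sketch in Python; window_size and stride are illustrative placeholders, not Pangram’s actual parameters:

```python
def overlapping_windows(text: str, window_size: int = 600, stride: int = 300) -> list[str]:
    """Split text into fixed-size windows that overlap by
    (window_size - stride) characters, so each passage can be
    classified independently and seen more than once."""
    if len(text) <= window_size:
        return [text]
    windows = [text[i:i + window_size]
               for i in range(0, len(text) - window_size + 1, stride)]
    # Add a final window if the stride left the document's tail uncovered.
    last_end = (len(windows) - 1) * stride + window_size
    if last_end < len(text):
        windows.append(text[-window_size:])
    return windows
```

Scoring each window separately is what lets a detector localize AI assistance within a long filing rather than issuing a single document-level verdict.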

Even among the handful of filings with any AI signal, the pattern was concentrated. In the IP Transition docket, only two filings scored above 0.6 on Pangram’s 0-to-1 AI assistance scale, one of them a document titled “Advancing the All-IP Transition” that scored a perfect 1.0. After those outliers, scores dropped off quickly: the remaining filings with any detectable AI scored below 0.15. In the Space proceeding, a single individual’s comment scored 1.0, one other scored 0.69, and the rest with any AI signal were below 0.35.

A Closer Look: The Embratel Filing

The aggregate numbers tell one story, but the more interesting question is what AI-assisted commenting actually looks like in practice. The Embratel TVSAT filing in the Space Modernization proceeding offers a useful case study. Pangram classified it as “AI Detected” with an overall AI assistance score of 0.35, placing it in the middle range of flagged filings.

The Embratel filing produced 17 Pangram text windows, and the pattern across them is striking. The first seven windows, covering the cover page, summary, company background, and early substantive sections on foreign ownership, milestone requirements, and surety bonds, all scored 4 percent or below on the AI scale. These sections read like what they appear to be: substantive regulatory arguments written by lawyers with deep knowledge of satellite licensing.

Then, starting around the eighth window, the scores jump. Sections addressing two-degree spacing rules, coordination agreements, grandfathering provisions, and ephemeris data reporting requirements mostly scored above 50 percent, with the highest-scoring window (a structured five-point proposal for reforming ephemeris reporting) hitting 79 percent.

The pattern suggests something that will probably become increasingly common: a human-drafted core, where the filer’s unique expertise and specific regulatory positions are laid out, supplemented by AI-assisted drafting for the more formulaic sections that require restating standard regulatory arguments or structuring numbered proposals. The substantive heart of the filing appears to be human; the AI assistance shows up in the more routine argumentation.

This is worth noting because it illustrates that “AI-assisted” is not the same as “AI-generated.” The Embratel filing is not a case of someone asking ChatGPT to write FCC comments from scratch. It looks more like a legal team using AI as a drafting tool for portions of a lengthy regulatory submission, in the same way lawyers have long used templates, prior filings, and associates.

Embratel TVSAT: AI Score by Document Position

Each line below is one Pangram text window, in reading order, with its AI score; windows above the 50 percent threshold are the ones discussed above.

Cover page / TOC: 0%
TOC continued: 0%
Summary / background: 4%
Foreign ownership: 2%
Foreign ownership cont.: 1%
Milestone requirements: 3%
Surety bond requirements: 3%
Bond transition / terms: 26%
Two-degree spacing: 54%
Reporting / EOL disposal: 56%
U.S. contact requirements: 67%
Default service rules: 25%
Coordination / GSO FSS: 59%
Grandfathering provisions: 56%
Ephemeris data reporting: 42%
Ephemeris proposals (list): 79%
Final considerations: 53%
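For readers who want to reproduce this kind of per-window breakdown, a minimal sketch follows. The labels and scores are transcribed from the list above, and the 0.5 cutoff mirrors the 50 percent threshold used there rather than any official Pangram setting:

```python
# Window-level AI scores for the Embratel TVSAT filing,
# transcribed from the breakdown above.
embratel_windows = [
    ("Cover page / TOC", 0.00), ("TOC continued", 0.00),
    ("Summary / background", 0.04), ("Foreign ownership", 0.02),
    ("Foreign ownership cont.", 0.01), ("Milestone requirements", 0.03),
    ("Surety bond requirements", 0.03), ("Bond transition / terms", 0.26),
    ("Two-degree spacing", 0.54), ("Reporting / EOL disposal", 0.56),
    ("U.S. contact requirements", 0.67), ("Default service rules", 0.25),
    ("Coordination / GSO FSS", 0.59), ("Grandfathering provisions", 0.56),
    ("Ephemeris data reporting", 0.42), ("Ephemeris proposals (list)", 0.79),
    ("Final considerations", 0.53),
]

THRESHOLD = 0.5  # mirrors the 50% line in the breakdown above

flagged = [(label, score) for label, score in embratel_windows if score > THRESHOLD]
print(f"{len(flagged)} of {len(embratel_windows)} windows above {THRESHOLD:.0%}:")
for label, score in flagged:
    print(f"  {label}: {score:.0%}")
```

Run on these numbers, the script flags 7 of 17 windows, all in the back half of the document: exactly the human-core, AI-assisted-tail pattern described above.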

What to Make of This

These results should be interpreted with a few caveats. First, these proceedings attracted relatively few comments compared to high-profile rulemakings like net neutrality, which drew millions. AI flooding is more likely to appear in mass-comment proceedings where volume can be used to simulate grassroots support. Second, AI detection tools are imperfect. Pangram may miss some AI-generated text, and it may also flag text that was merely edited with AI assistance rather than generated wholesale. Third, these are just two proceedings. The picture could look different elsewhere.

That said, the results suggest that, at least in these substantive regulatory proceedings, AI-generated comments are not yet a significant problem. The vast majority of filers appear to be doing it the old-fashioned way: writing their own comments. Whether that remains true as AI tools become more capable and widely adopted is worth watching.

Methods

We used Pangram to analyze all comments filed in FCC proceedings WC Docket No. 25-304 (IP Transition) and SB Docket Nos. 25-305/25-306 (Space Modernization). Pangram assigns both a headline classification (e.g., “Fully Human Written,” “AI Detected,” “Fully AI Generated”) and a granular, window-level AI assistance score ranging from 0 (fully human) to 1 (fully AI-generated). We report both the headline classifications and the window-level analysis.
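For those who want to replicate the pipeline, a rough sketch follows. The ECFS endpoint is the FCC’s real public API, but the query parameters reflect its documentation as we understand it, and the Pangram call is a hypothetical placeholder, not Pangram’s actual API; check both services’ current documentation before relying on this:

```python
import requests

ECFS_URL = "https://publicapi.fcc.gov/ecfs/filings"   # FCC's public ECFS API
PANGRAM_URL = "https://api.pangram.example/classify"  # hypothetical placeholder

def fetch_filings(docket: str, api_key: str, limit: int = 25) -> dict:
    """Fetch filing metadata for one docket from ECFS.

    Parameter names follow the public ECFS documentation as we
    understand it; verify against the current docs.
    """
    params = {"proceedings.name": docket, "api_key": api_key, "limit": limit}
    resp = requests.get(ECFS_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()  # filings are nested in the response; inspect its structure

def score_text(text: str, pangram_key: str) -> dict:
    """Send one comment's text to an AI-detection endpoint.

    The URL and payload here are placeholders, not Pangram's
    actual API; substitute the real endpoint from their docs.
    """
    resp = requests.post(
        PANGRAM_URL,
        headers={"Authorization": f"Bearer {pangram_key}"},
        json={"text": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()  # e.g., a headline label plus window-level scores
```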


Scott Wallsten is President and Senior Fellow at the Technology Policy Institute and also a senior fellow at the Georgetown Center for Business and Public Policy. He is an economist with expertise in industrial organization and public policy, and his research focuses on competition, regulation, telecommunications, the economics of digitization, and technology policy. He was the economics director for the FCC's National Broadband Plan and has been a lecturer in Stanford University’s public policy program, director of communications policy studies and senior fellow at the Progress & Freedom Foundation, a senior fellow at the AEI – Brookings Joint Center for Regulatory Studies and a resident scholar at the American Enterprise Institute, an economist at The World Bank, a scholar at the Stanford Institute for Economic Policy Research, and a staff economist at the U.S. President’s Council of Economic Advisers. He holds a PhD in economics from Stanford University.
