Last summer, I asked Anthropic’s AI assistant Claude to clean some data and answer a few questions. When its results looked off, I queried, “Did you remember to trim the leading and trailing spaces?” Claude admitted the oversight, re-ran the analysis, and nailed it. I knew what to ask because I’ve spent decades making similar mistakes myself.
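For readers who have never hit this particular bug, here’s a minimal sketch of what stray whitespace does to an analysis. It uses Python and pandas purely for illustration; the data, column names, and tooling are my hypotheticals, not details from the exchange with Claude.

```python
import pandas as pd

# Hypothetical data: the "same" state appears three times, but stray
# leading/trailing spaces make each entry a distinct string.
df = pd.DataFrame({"state": ["CA", " CA", "CA "],
                   "sales": [10, 20, 30]})

# A naive groupby treats them as three different states.
print(df.groupby("state")["sales"].sum())

# Trimming the whitespace collapses them back into one group.
df["state"] = df["state"].str.strip()
print(df.groupby("state")["sales"].sum())  # CA: 60
```

Errors like this don’t crash anything; they just quietly produce wrong numbers, which is exactly why you have to know to look for them.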
But life in those trenches is vanishing: AI is replacing the training grounds that turn novices into critical thinkers.

That erosion is what LinkedIn executive Aneesh Raman dubs “breaking the bottom rung” in a recent New York Times op-ed.
As someone who suffered through those bottom-rung roles myself and later hired many others to fill them, I worry deeply about what comes next. In this post, I’ll show how I’m complicit in breaking that rung, why it matters, and what we can and cannot do about it.
I’m an empirical researcher, and much of my work over the past 30 years has involved data analysis. As a PhD student, I spent months at a time wrangling raw data by hand. Later, I hired research assistants (RAs) to do the same. It’s tedious work, but it trained them to be researchers. Many went on to earn PhDs, become professors, run companies, or become venture capitalists, and to mentor the next cohort.
That’s the way it has always been. Until now. And it’s this cycle of learning-by-doing that my own embrace of AI is helping to unravel.
I no longer need RAs. Today’s AI can clean data, write code, test hypotheses, and even write a literature review in minutes instead of months.
“But wait,” you might be thinking. “Don’t hallucinations and errors mean you can’t trust AI’s answers?” You would be correct to think that. And here’s the rub: I know where to look for errors only because I’ve made so many.
And now, because of me and people like me, fewer people are learning the foundations of this kind of work.
It almost feels perverse to reap these AI gains when I know they hollow out the very apprenticeships that shaped me.
But how could I not adopt AI tools? They make me more productive and save precious resources at my small, cash-strapped think tank. As an economist, I see it as a simple cost-benefit analysis: by using AI together with the knowledge I gained over decades, I reap enormous benefits in the short run, while the costs come later and, moreover, will be borne by society as a whole, not by me. It would be irrational for me to behave otherwise.
If everyone follows this path, we risk a world in which people increasingly accept AI-generated answers uncritically, particularly in fields where “right” and “wrong” answers may not exist in the first place.
But such extrapolation is rarely correct, not just because predicting the future is hard, but because we can act to change it.
The response cannot be to slow the development and adoption of AI. It is helping to accelerate research and promises all kinds of new benefits to society. Reducing the quality of work now because we’re concerned about the quality of work in the future is nonsensical. AI gives us amazing new tools, and we should exploit them.
Discussions about AI and the future of work center on the kinds of skills that will be necessary and how to help people obtain them. To a large degree, the market pushes people to develop skills on their own as entirely new professions emerge. Even a few years ago, very few people would have had any idea what a “prompt engineer” was, let alone how important that role would become.
I don’t want to minimize the new opportunities that may emerge, the difficulties many will face in an AI transition, or the challenges society must overcome in revamping education for a new era.
But I’m talking about a different problem: how will people gain the real-world experience they need to critically examine AI output? The question, then, is how to ensure that humans can oversee AI and ask the right questions if they never get the learning-by-doing experience of thinking through problems themselves.
I don’t know the answer, and I don’t think anyone truly does yet. Still, we can venture some guesses and entertain ideas.
To some extent, we should remember that the nature of work itself can change. Just as “prompt engineer” is a new job, perhaps junior positions will evolve to embrace AI, and new tools will emerge that create different ways of assembling and checking data. Maybe our concept of what constitutes “data” will expand to include kinds of observations we don’t think of as useful information today. For that matter, maybe my entire line of thought is upside-down. Younger generations of researchers may invent entirely new kinds of analyses that fully exploit new tools and processing power, while those of us trained in the analog age are stuck thinking about trailing spaces in the data.
Still, we must preserve the ability to question outputs, whether from humans or machines. This isn’t just about updating educational curricula to include more logic and critical thinking, though that might be part of the solution. For younger generations, it’s about designing new learning experiences that provide the insights we once gained through tedious work. For older generations, it means recognizing that our paths don’t have to be the only ones. I don’t know how we create new “digital trenches,” but I know we need them. New apprenticeships, digital or otherwise, should teach the art of skepticism. Otherwise, we risk a future where people don’t challenge AI’s answers simply because nobody remembers how.
Scott Wallsten is President and Senior Fellow at the Technology Policy Institute and a senior fellow at the Georgetown Center for Business and Public Policy. He is an economist with expertise in industrial organization and public policy, and his research focuses on competition, regulation, telecommunications, the economics of digitization, and technology policy. He was the economics director for the FCC’s National Broadband Plan and has been a lecturer in Stanford University’s public policy program, director of communications policy studies and senior fellow at the Progress & Freedom Foundation, a senior fellow at the AEI-Brookings Joint Center for Regulatory Studies and a resident scholar at the American Enterprise Institute, an economist at The World Bank, a scholar at the Stanford Institute for Economic Policy Research, and a staff economist at the U.S. President’s Council of Economic Advisers. He holds a PhD in economics from Stanford University.