Scott Wallsten
Hello and welcome back to Two Think Minimum, the podcast of the Technology Policy Institute. I’m Scott Wallsten, President of TPI, and today is Wednesday, February 5th, 2025. I’m here with my colleagues, Tom Lenard, President Emeritus and Senior Fellow, and Sarah Oh Lam, TPI Senior Fellow. While artificial intelligence continues to dominate tech policy discussions, today we’re switching gears a little to explore how regulation and policy interact with America’s startups, so-called “little tech.” This perspective is a little different from the conventional focus on big tech and provides insights into how innovation is unfolding at a smaller scale. To help us dive into this angle, we’re thrilled to have Matt Perault. Matt is the head of AI policy at Andreessen Horowitz.
He also currently serves as a senior fellow at the Center on Technology Policy at New York University, a board member of the North Carolina Institute of Technology Policy, a contributing editor at Lawfare, a fellow at the Abundance Institute, and a fellow at the National Security Institute at George Mason University’s Scalia Law School.
Prior to joining a16z, he served as Director of the Center on Technology Policy at UNC Chapel Hill, a professor of the practice at the UNC School of Information and Library Science, and so much more. We have to stop listing them, or we’d never get to the conversation. Matt, thanks for joining us.
Matt Perault
Thanks for having me on.
Scott Wallsten
So, to start off the discussion: you had a piece yesterday, or maybe it was the day before (we’ll include a link to it when we post this), about AI regulation and “little tech,” and so on. I’d like you to talk about that and your arguments. But before you do that, define “little tech,” because that’s not a phrase everybody knows.
Matt Perault
So this is what’s so exciting for me about this job: getting proximity to “little tech” companies, which are the startups that drive the innovation ecosystem. I worked in tech for a while. I’ve known you guys for a while, and we’ve had lots of discussions about lots of policy issues over time. And what I realized when I was thinking about this role is that, despite working in technology for a long time, I don’t think I’d really had proximity to the startup community. When I joined Facebook, I forget exactly how big it was, but it was maybe 1,500 or 2,000 people. That’s smaller than it is today, but it was a long way from the Harvard dorm room days.
And when I directed a tech policy center, first at Duke and then at UNC, we wrote very frequently about the potential costs of various regulatory initiatives for startups. That was always a theory that we had, but not something that I really knew intimately. In the Andreessen Horowitz portfolio, there are companies that might not have a general counsel, let alone a policy team, let alone the kind of operation I was part of at Meta, where we worked on policy briefs on various issues and had a sizable enough staff that we could go deep, or at least aspire to go deep, on each of them. These companies are trying to build products and get them to market, and the existential issues they’re focused on are really the initial ones: “Do we have a business that’s going to survive and potentially succeed?”
And so, the focus for us is on those companies, and there can be a range of different types. It could be, like I was just describing, a founder and an engineer. We also provide investments to later-stage companies, which might have more mature legal and policy teams. But they’re still distinct, I think, from the kinds of places I used to work. At Facebook, by the time I left, the policy team was hundreds of people, and the company had a large number of lawyers on staff. That’s true for other companies like Microsoft or Google as well.
Scott Wallsten
So, your piece focuses on what’s really the heart of policy: how policy will affect these companies, what policy should and should not focus on, and where it should come from. Tell us about your piece and the arguments you’re making.
Matt Perault
Yeah, so our focus in thinking about the core components of AI policy is trying to orient policymakers toward regulation that addresses harmful uses of AI, as opposed to the development of AI itself. The way that others at our firm have talked about this is that when you’re focused on model development, what you’re doing is regulating math. AI is essentially algebra. It’s complicated algebra, algebra at scale, but it’s essentially math. When you put regulatory burdens on that process, you say, “When you’re in the process of developing a model, you need to jump through XYZ regulatory hoops,” or “You need to do impact assessments that look at the potential impact of your model downstream.”
What you’re really doing is burdening that process. The obvious way to say it is that when you’re regulating models, that’s all you’re doing: you’re regulating models. In our view, that’s kind of a bank shot if what you’re aiming to get at is consumer protection. If you want to protect consumers, and you’re concerned about harmful uses of technologies for consumers, then what you should do is protect consumers. And you can do that in a few different ways.
The first is to focus on existing law and how you enforce it, and we can talk in more detail about what that might mean in practice. The second is, to the extent existing law is insufficient, to pass new law. And I should be clear: it’s not evident, I think, at this point that existing law is insufficient. But if it were, if we were to identify holes in existing law and harmful uses of AI that need to be addressed through new law, you’d want to do it narrowly, focused on addressing those potential harms, not by taxing the actual innovation of developing models.
Scott Wallsten
So, what sorts of things should we be looking for if we’re thinking about what sensible regulation would mean? What are some of the harms that, in principle, we should be concerned about?
Matt Perault
Yeah. Well, let me just initially emphasize the importance of this framework from a “little tech” perspective. Again, I think it just looks different depending on where you are in the tech ecosystem. For small companies that are trying to compete with larger ones in developing models, complex regulatory regimes with significant compliance requirements impose significant compliance costs, and those costs won’t be borne equally by every player in the ecosystem. If you have more resources to deploy to compliance, you’re pulling fewer engineers away from product development, or salespeople away from helping to get your product to market and monetize it, relative to the overall size of your business, than a “little tech” company is. So a “little tech” company that’s trying to compete with an established platform…
I think that if we want to get the kind of innovation we want in this industry, you want those companies to be primarily focused on product development and monetization, not on trying to figure out how to comply with complex burdens in one market versus another. In terms of thinking about harmful uses and the existing law that can be used to cover them, I think that is almost coextensive with the body of existing law. So, if AI is used to commit fraud, there’s no exception in fraud statutes for AI, and you would hope those statutes would be enforced. If there are violations of antitrust law where AI is used in some way, there are no exceptions in current antitrust law for AI either. The same goes for civil rights law. Those laws exist at the federal level as well as at the state level, so state enforcers and federal enforcers could both enforce existing laws when AI is used to perpetrate harm.
Scott Wallsten
So are policymakers jumping too quickly to passing, or even thinking about, AI-specific rules rather than thinking about where existing laws could apply instead?
Matt Perault
Yeah, I think it is moving too quickly, in terms of looking to regulate before we really understand what those new harms are and before we understand how existing law could be used to address them. And I think it’s also moving in a direction that’s unproductive. The theory of trying to protect consumers by regulating model development is that if you slow innovation, products that you have concerns about are not released into the world, and then you’re avoiding potential harm to consumers. The problem is that you’re also avoiding creating the companies that could compete with DeepSeek and other companies globally. That mentality works if what you think you’re preventing is something that is going to be entirely problematic, or something where the social costs of putting it into the world far outstrip the benefits. And I think what we have with AI is likely the opposite: there will, of course, be harms, as there are with any tool, any technology, but the harms will be outweighed by significant benefits. So when you’re focusing regulation on slowing model development, you’re taking the benefits off the table.
And you’re not protecting consumers in a particularly efficient way. You’re sort of taking your eye off the ball of protecting consumers and focusing the regulation on the innovation. I do think there are a whole bunch of active steps you could take to protect consumers that, because of what we see as the distraction of model development-focused regulation, we aren’t taking on the consumer protection side. And you guys, given your deep expertise in policy, I bet you can list out five things that I’ll miss in talking this through. But cases don’t just come down from on high such that you automatically understand how to prosecute a violation of existing law. You need to figure out how to make a case, right? You need to figure out how to put together the building blocks that allow you to show a judge that a violation of the law has occurred. And in order to do that, you need to have the resources to put the case together.
So you need to have the actual time and the ability to allocate staff resources to it. You need to have a technical understanding. There’s an enormous amount of work that goes into enforcement. And so, our view of the right way to think about the policy agenda is to put the focus on that, and to have lawmakers help make it possible for enforcers to enforce existing law.
Scott Wallsten
You also talk in your article about the better roles for states versus the federal government. And I think there’s kind of an interesting tension that you essentially raise, which is that, on the one hand, having states involved creates a patchwork of laws that is really difficult, in particular for smaller firms, to deal with. But, on the other hand, you would like to see them act as, as the phrase goes, “laboratories of democracy,” right? If you’re thinking about this from the point of view of encouraging small firms to continue to innovate and grow, how do you put those things together? What’s the productive role of the states here?
Matt Perault
Yeah. So I don’t actually see those as in tension, but you should tell me if I’m missing something. I think states are laboratories, but they should be laboratories in the areas that are traditional state domains. I don’t think the idea of states as “laboratories of democracy” is that states should burden interstate commerce by initiating experimental approaches to how national markets should be regulated. I think the idea is that states would experiment in all the different things that are within traditional state domains, and that’s really focused on policing activity within their jurisdictions. You guys know this, I think, because it was something I really enjoyed doing and focusing on as an academic.
The absence of experimentation in policymaking has led to some outcomes that I think are suboptimal. The main one is that we don’t actually get governance of the tech sector, because it’s hard to move forward when you don’t know what the outcomes of various policy proposals are going to be. And then, when we do act, we leap with a sense of certainty that I think we probably shouldn’t have, and put laws on the books that will exist in perpetuity and might or might not be the right answers. Experimentation gives us the ability to learn as we go, to figure out what works and what doesn’t, and to tweak over time. And so, I think experimental frameworks in public policy are really important, and I think states have an important role to play in that.
But we are seeing increased competition in AI from China. I think there is much more unity where we sit today than there was even two weeks ago around the desire to create very competitive American AI products. And if that’s our focus, it’s really hard to do when you’re orienting toward one legal standard in Texas, another in New York, another in Florida, and another in California. From a constitutional perspective, that’s not the way to create products that traverse state boundaries. And I think that role, creating a national market in AI, really should be led by Congress.
Tom Lenard
I don’t quite understand how experimentation would work, how it would be implemented. Could you describe maybe a hypothetical experiment?
Matt Perault
Yeah, there are a lot of different models for it, so I think it really depends, and it can vary. The one model that has been most consistently deployed is the regulatory sandbox, and that’s been used in a variety of different places. It’s been used in Singapore and the UK, and several states have used them as well. It’s popular in fintech. It wouldn’t surprise me if there are multiple of them in AI, but one that I know of is in Utah. Basically, the idea is you can apply to participate, you can use that testing ground to experiment with products, and you get limited regulatory forbearance for your experimentation as long as it’s within the terms of the sandbox. Typically, sandboxes are thought of as ways to test products, but you can envision, and Scott Babwah Brennen and I wrote about this at UNC when we were working in that capacity, ways to trial not just products but also policy regimes, to figure out what works with a particular policy regime.
And I think that makes a lot of sense, because, you know, we all have written extensively about unintended consequences. Experimentation gives you the ability to learn the consequences and then adjust policy regimes as you go. So I think it’s an appealing framework. The sandbox model is an interesting one to test with AI, and it gives us the opportunity to learn a lot.
Tom Lenard
So, who determines the regulations that are being tested?
Matt Perault
Well, again, it depends on the framework. I think they usually sit in state departments of commerce, and so those agencies are responsible for granting access to the sandbox. It’s not that just anybody can participate; there are typically standards for acceptance and for how sandbox admissions work in practice. And then those agencies, I think, lead in determining the scope of regulatory forbearance. There are different ways you can structure it: any participant could get an exemption from certain laws, though in the Utah one, I think the applicant actually asks for specific regulatory forbearance as part of the application process.
Scott Wallsten
It sounds like we sort of have some of these experiments already going, not because they were set up that way, but because we have these differences across states. What should we be looking at to try to determine the different effects of these various approaches?
Matt Perault
You know, that’s a good question, and I don’t have a good answer. I don’t remember if this is part of the Utah program or not. The ideal thing, I think, is that any regulation that is passed has some mechanism built in for tracking its effect. And that could be lots of different things: it could have certain data production requirements as part of it, it could set up an advisory board that reports on progress every six months, it could take evidence from companies about the costs and benefits of a particular regulatory framework.
I don’t remember if the Utah one does or does not; I don’t think it has any kind of formal process for surfacing evidence of impact after the fact. But that’s part of what I find frustrating about public policy overall: there really aren’t mechanisms, aside from the work that you guys do. Obviously, think tanks and academics put a lot of effort into trying to figure that out, but typically the regulation doesn’t have an internal mechanism for doing it. And I think that is a weakness. That’s a frustration. Because part of the point of your question is that experimentation is great and seems like a way you can learn, but are we actually going to learn? And sometimes I don’t think the mechanisms are set up to do that as effectively as we’d hope.
Scott Wallsten
Yeah, I agree with that. That’s an eternal frustration. Hopefully, someday, that will be different.
Matt Perault
Do you find… sorry, am I allowed to ask you a question?
Scott Wallsten
Of course.
Matt Perault
Do you find… I mean, when I was trying to do some of this work as an academic, I realized how difficult it was, for certain things, or maybe many things, to actually get the information you’d need to evaluate impacts. Even when you felt like, okay, “I’m going to actually devote energy, resources, time, research, the efforts of an organization, to trying to evaluate costs and benefits to some extent,” it was often harder to do than I would have thought. I would have assumed that once you’ve decided what the issue is, the main challenge is finding people to study the things that should be studied. But I found it was often hard, for reasons that were not obvious and not really discussed very much publicly, to track the information that would enable you to do it effectively.
Scott Wallsten
Yeah, I mean, I found that there’s often little incentive to collect the kind of information that would allow for a real experiment or to set up the program in a way that allows for experimentation to begin with.
Matt Perault
Yes.
Scott Wallsten
So it’s usually about finding, in my case, proxies for…
Matt Perault
Right?
Scott Wallsten
So I don’t know. Do you think there’s a reason to believe that the incentives might be different here, maybe because everything is so new? No, you’re shaking your head “no.”
Matt Perault
No, I think the point is, I don’t understand why policymakers seem as uninterested in this as they are. I assume it’s because all the costs and benefits accrue at the moment of passage, and whatever happens down the road doesn’t really matter as much. So the long-term effects are just not that interesting, and they increase the cost of the bill and make it less desirable. So I’m not sure the incentives shift enough to make it matter. But if you’re talking about need and value, that’s a different thing. We should understand, 6, 12, 18, 24 months into the implementation of the EU AI Act, its impact on AI development in Europe, and I don’t have a sense that we will. You guys are probably closer to the literature on GDPR impact than I am, for instance. There have been some studies; Liad Wagman has done several reports on the impact of GDPR. But there’s been a relatively limited amount, I think, despite the fact that at this point we’re several years out, and that was a fairly big policy initiative.
Scott Wallsten
You know, I wonder, though. It seems like some private organizations, like your employer, for example, would have a real interest in understanding what the effects of different rules are, and I assume that’s part of the reason why you’re there. But is there interest? Of course, it would be proprietary, but is there interest in finding empirical ways of studying the effects of different rules?
Matt Perault
Yeah, to your point, I think it’s an important part of my job. Everyone argues for evidence-based policy, and I think the question is whether you can marshal evidence to support the policy ideas that you have. I think we’ll be much more effective if we’re marshaling evidence to support the ideas we have.
Scott Wallsten
We’re sort of moving off policy, just for a second. We know that people like to say that one type of firm is more innovative than another, but in reality, different types of firms innovate in different ways: small firms innovate better at some things, and large firms innovate better at others. With AI, where do you see smaller firms, or “little tech,” having a comparative advantage?
Matt Perault
So I’m probably not the right person to answer that from a future…
Scott Wallsten
And I’m not the right person to understand it either. So…
Matt Perault
From a future-of-AI-innovation perspective. But our general point is that our firm’s not asking for a handout for small companies. We’re not asking for disproportionate public benefits to be placed behind “little tech” to ensure that they can defeat big tech. That’s not really the approach. The approach is, we want “little tech” companies to be able to compete, despite how the odds are stacked against them in many ways. Most of the companies we invest in fail. That’s the nature of venture capital; the way I’ve seen some people write about it is that the batting average is low. Most companies don’t work out. And our view is that what matters is the possibility that some might work out, and the opportunity for them to work out, meaning you don’t essentially have a regulated monopoly, where a small number of companies have the exclusive right to build.
We don’t exactly know where the innovation will come from or what it will look like, but that possibility will drive a lot of benefits in the ecosystem. That was a very abstract answer to your specific question, but I think we have a very important data point on it from the last couple of weeks. Two weeks ago, we thought the most important thing was getting compute behind very expensive proprietary models, American models. And now there’s an enormous amount of emphasis being placed on the limited potential business models of proprietary models, and a focus on open source with significantly less cost, significantly less compute power required, and probably lower energy needs. And that happened in a week, really over the course of a weekend, between a Friday and a Monday. It felt like the fundamental assumptions about where the forefront of AI development was were radically shifted. And that, I think, is a very compelling statement about the kind of perspective we have about where innovation is going to come from, and the importance of enabling innovation to come from the places it comes from.
Tom Lenard
So.
Scott Wallsten
Nope, go ahead, Tom.
Tom Lenard
I mean, when you talk about “little tech,” you talk about fostering an environment where “little tech” can compete against big tech. But aren’t they all, in reality, for the most part, part of the same ecosystem? Obviously, one of the major exit strategies for startups, for venture-financed firms, is to be acquired by a big firm. And in many areas that seems like an efficient way to do it: the little firms can develop ideas and experiment, but they don’t always have the wherewithal to put in the investment to develop them more completely. And so they’re acquired by a big company that has those resources.
Matt Perault
So I think M&A is incredibly important for “little tech.” Obviously, “little tech” companies get investments based on the possibility of exit. If exit isn’t a possibility, it’s going to be hard for them to get the capital they need to build their businesses. So I think M&A is critical, as you’re suggesting, and there’s a lot of alignment there with larger tech platforms. Different people talk about these concepts in different ways. One of the things that I was curious about in the career shifts I’ve made in the last six or seven years was how much my views of the policy landscape would shift depending on my employer. Obviously, they’re going to shift, in part because they have to shift to represent the interests of an employer, and also because your perspective changes as you move from one place to another. But my point of view didn’t change dramatically when I left Meta and went into academia.
The ideas that felt to me like the core of what the right policy approach might be were, I think, fairly consistent whether I was looking at it through an academic lens or from my perspective at Meta. And I was curious about how much things might shift when I moved from my academic work to working at Andreessen Horowitz. I’m sure that will evolve over time, I’ll learn more about different things, and our perspective as a firm might shift, but I don’t think it’s really dramatically changed. So when I think of “little tech,” I don’t necessarily think of it as adversarial to big tech, though I think some people will see it that way. I have a three- and a five-year-old at home, so we have books about opposites: in and out, up and down, big and little. So I understand that when you hear “little tech,” you think of it as oppositional to big tech. But there are lots of areas of alignment.
Our co-founders recently published a blog post with Microsoft leadership about AI for startups. So the question was, where is there alignment between our companies on the right policy agenda for startups? And I think there are lots of areas of alignment. So, when I use the term, I don’t necessarily mean it as one that’s adversarial.
Tom Lenard
So, I assume, and I’ll ask it as a question, that your company is hoping for a more permissive merger policy than was the case in the last four years, one that would make it easier for the firms you invest in to be acquired?
Matt Perault
I think the answer there is the one that I hope would always be the answer, which is that there shouldn’t be an issue with pro-competitive mergers. Not all mergers are pro-competitive, but there shouldn’t be an issue with the ones that are.
Scott Wallsten
So let me ask a pretty basic question. When you were at UNC, I really understood your job, what you did.
Matt Perault
Yeah.
Scott Wallsten
And I read all your work. What do you do now? What’s your job?
Matt Perault
Yeah, so there are a couple of different components to it. One, and this is the one that we’re talking about now, is trying my best to represent the interests of our portfolio overall.
Scott Wallsten
So present them to whom?
Matt Perault
Well, to you guys.
To thought leaders, to organizations like yours that are leading thinkers in the sector, to academics, to policymakers at the state level, policymakers internationally, and policymakers in D.C. Andreessen Horowitz has also put a lot of effort into developing frameworks and ideas around lots of different things the firm is interested in. Anyone who is interested in learning about crypto policy should look at the editorial content we have on crypto. It’s not lightweight puff pieces that are just press releases; the firm goes very deep on thinking about the technology and the analytical frameworks, and it tries to help shape the thinking, informed by a deep tech perspective. So I think that is clearly the component of my job that we’re focused on right now.
The thing that drew me to this work was the goal of developing a more proximate relationship to the garage stage, which, as I said at the beginning of the conversation, was not a part of the ecosystem that I knew intimately. The way that I’ve been thinking about it is that I was sort of like a third-year medical school student who has talked all the time about what patient care is like but has never actually cared for a patient. That’s how my relationship to the startup world felt: I had a sense of what it was like, and I sort of lusted after it. It’s obviously very compelling…
The idea of building a successful company from the ground up is very compelling, I think, but it’s not something that I had been very close to. And I like situations where I have a lot to learn. A lot of my exposure at Meta, and then in my consulting practice when I was in academia, was focused on larger companies. Meta, obviously, is a large company, and my consulting practice was also midsize companies, companies that had a policy team and were looking for policy support of one form or another. Now it’s companies without a general counsel. I talked to a company this morning, and I asked the general counsel, “How big is your legal team?” And he said, “You’re looking at the legal team.” One of the exciting things for me has been trying to develop a skill set to be helpful to those companies. I don’t think that’s the skill set I had on day one of the job; I think it’s a different one. These aren’t companies that are looking for a 2,500-word analysis of something, or a 20-page memo or a 20-page policy brief. The thing to deliver that’s of value is something else, and my hope is to be able to do that.
Sarah Oh Lam
I think what’s interesting is how “little tech” has come into D.C. In the last 20 years, we haven’t really seen or heard a ton from that constituency of small…
Matt Perault
Yeah.
Sarah Oh Lam
So I think that’s a little bit new, even opening D.C. offices. We’ve seen venture capital associations, but no real big names coming into the D.C. policy discussion. Do you think that, because there’s more tech regulation, there’s a need to represent all these small companies that don’t have the resources to get into policy? Why now, compared to 20 years ago?
Matt Perault
So I think our firm specifically had a very strong reaction to seeing the development of an AI policy agenda that it thought would make it very hard for “little tech” companies to get off the ground, one that would essentially close the door to competition in AI. In some ways you could say we’re a strange voice for “little tech”: we’re not a founder and an engineer; that’s not what our firm is. But the challenge, and I think this is really viscerally internalized by the firm, is that if we don’t speak on behalf of “little tech,” who will?
The same things that present challenges for compliance (if you’re a one-person legal team, how are you going to navigate a complicated patchwork of state laws?) are the same things that make it hard for “little tech” to have the resources to go to D.C. once a month and do meetings with staffers, or even to spend the time to testify once, or to know everyone who works in the White House and spend time talking to them about how your company thinks about issues. That’s not a thing that happens. When your question about tomorrow is “Are we going to survive?”, your agenda for tomorrow looks different. And so, I think the reason they created the function that I’m in now is to try to give voice to a different perspective.
So, one thing that was striking to me as I was coming into the firm was that I thought I understood what venture capital was, and I really didn’t. I thought venture capital was more like flipping houses: you’re looking for short-term benefits, short-term bumps, and the focus of the firm is whether you can get a return quickly. That’s not at all the case. The life cycle of most venture capital funds is 10 years, so you’re looking to build companies over a long period of time. And for a lot of companies, even the successful ones, several years in there’s still a lot of uncertainty associated with them. So you really need a longer time horizon, because you’re doing the deep work of actually building companies from the ground up.
And our firm has put a lot of emphasis on competing in the ecosystem by offering comprehensive services that enable companies to grow, supporting the companies we invest in and helping to build them over time. That can mean advising them on how to go to market, how to build an HR practice, or helping them find the talent they need for those different components of their business as they start to build them. To me, it was fascinating to get a better understanding of that. But it also changes the nature of the policy work, because the policy work isn’t focused on trying to get returns tomorrow. This isn’t how I experienced Meta at all, but for a public company, in theory, if you bump the stock price tomorrow, that’s a good thing.
And, as you guys know, when you’re running a nonprofit, like when we were running our center, we financed it ourselves. We had to go out and raise the money for our center; we weren’t getting university money for it. So we were focused on trying to ensure that in the next year we would be able to keep the lights on and have the staff that we needed to have. That creates a certain kind of incentive structure, with lots of good things associated with it, but it is not focused on the long run. The view is not, “Where are we going to be 10 years from now?” So when we are thinking about a policy agenda that works for “little tech,” it’s not, “How can we ensure that the next 18 months are really great for startups?” The idea is really that we want a competitive ecosystem where American companies can compete aggressively, can be competitive with foreign companies, and where that is not just the domain of large incumbent platforms.
Scott Wallsten
In some ways, it sounds a little bit like some of the things a trade association might do, except that, of course, you’ve got specific investments, too. When you’re thinking about the policy aspect of it, do you have to just sort of set aside what your specific investments are and say, “This is what we think is good for this whole set of companies,” and go with that?
Matt Perault
Yeah, I think that’s right. I mean, obviously the work, done well, is informed significantly by the companies and by what you see in them, but I don’t think it’s the same. Different trade associations have different ways of deciding how they move forward: I think some are consensus-driven, and some are a little bit more member-informed but not consensus-oriented. I would say we’re probably closer to the latter, in part because there may be individual companies that have a short-term interest that’s not in line with what we see as in the best interest of creating the strongest products over the long run. And I think it’s important in my role that I try to have that long time horizon. Again, this isn’t about having a bunch of great IPOs in six months or something. My job is to help ensure that there’s a policy landscape that can enable thriving AI companies in the long run.
Tom Lenard
So.
Scott Wallsten
We should. No sorry. Go ahead, Tom.
Tom Lenard
Oh, this is maybe a little bit of a deviation. But my impression is that one of the strengths of the American innovation system is that we have a thriving venture capital system, which I don’t think Europe has, and maybe no other country has it like we have it. This is a hard question, but if you were to advise the European Union or European countries on what they could do to have a more thriving venture capital system, what would you advise them to do?
Matt Perault
Well, I’m curious about your list, because I’m sure you have a long one. I guess the two that come to mind are, first, the thing that you mentioned previously, which is enabling pro-competitive mergers. That obviously seems really important: venture won’t enter if it can’t exit, and if it’s tough for an acquisition to be approved, then it’s going to be tough to have the exits that will enable entrance. So that’s one. And the second is avoiding a regulatory approach to AI that’s focused on model development, which I have increasingly understood to be just a straight tax on innovation, almost in name a tax on innovation, because regulating model development is only regulating model development. It is not regulating whatever harmful use you might imagine. It’s not AI and antitrust. It’s not AI and unfair and deceptive trade practices. It’s not AI and civil rights. It’s literally, “We hope that by slowing the pace of model development, that will result in benefits to consumers. We don’t know if it will; maybe it will, maybe it won’t, but we know that it will make it harder to develop models.” And so, a regulatory framework that adopts that approach, I think, means there’s going to be less innovation, which means there’s going to be less venture capital flowing in.
Scott Wallsten
That strikes me as a good place to leave it; it sort of ties back to the beginning. So thank you so much for joining us. I thought it was an interesting conversation.
Sarah Oh Lam is a Senior Fellow at the Technology Policy Institute. Oh completed her PhD in Economics at George Mason University, and holds a JD from GMU and a BS in Management Science and Engineering from Stanford University. She was previously the Operations and Research Director for the Information Economy Project at George Mason School of Law. She has also presented research at the 39th Telecommunications Policy Research Conference and has co-authored work published in the Northwestern Journal of Technology & Intellectual Property, among other research projects. Her research interests include law and economics, regulatory analysis, and technology policy.
Thomas Lenard is Senior Fellow and President Emeritus at the Technology Policy Institute. Lenard is the author or coauthor of numerous books and articles on telecommunications, electricity, antitrust, privacy, e-commerce and other regulatory issues. His publications include Net Neutrality or Net Neutering: Should Broadband Internet Services Be Regulated?; The Digital Economy Fact Book; Privacy and the Commercial Use of Personal Information; Competition, Innovation and the Microsoft Monopoly: Antitrust in the Digital Marketplace; and Deregulating Electricity: The Federal Role.
Before joining the Technology Policy Institute, Lenard was acting president, senior vice president for research and senior fellow at The Progress & Freedom Foundation. He has served in senior economics positions at the Office of Management and Budget, the Federal Trade Commission and the Council on Wage and Price Stability, and was a member of the economics faculty at the University of California, Davis. He is a past president and chairman of the board of the National Economists Club.
Lenard is a graduate of the University of Wisconsin and holds a PhD in economics from Brown University. He can be reached at [email protected]
Scott Wallsten is President and Senior Fellow at the Technology Policy Institute and also a senior fellow at the Georgetown Center for Business and Public Policy. He is an economist with expertise in industrial organization and public policy, and his research focuses on competition, regulation, telecommunications, the economics of digitization, and technology policy. He was the economics director for the FCC's National Broadband Plan and has been a lecturer in Stanford University’s public policy program, director of communications policy studies and senior fellow at the Progress & Freedom Foundation, a senior fellow at the AEI – Brookings Joint Center for Regulatory Studies and a resident scholar at the American Enterprise Institute, an economist at The World Bank, a scholar at the Stanford Institute for Economic Policy Research, and a staff economist at the U.S. President’s Council of Economic Advisers. He holds a PhD in economics from Stanford University.