
Jon Baron on Evidence Based Policy at Arnold Ventures (Two Think Minimum)

Bob Hahn:

Hello, and welcome to the Technology Policy Institute’s podcast, Two Think Minimum. I’m your host, Bob Hahn, and today we’re delighted to speak with Jon Baron. Jon is Vice President of Evidence-Based Policy at Arnold Ventures. Before that, he founded the Coalition for Evidence-Based Policy, which worked with federal policy officials to advance evidence-based reform, and I feel particularly honored to have Jon here because he’s on the front lines of the evidence-based policy, I was going to call it battle, but maybe initiative… Jon, you’ve been a guru in this field for quite some time. How did you get interested in it?

Jon Baron:

Well, my father was a medical scientist, a medical researcher, and he would talk about controlled studies at the breakfast table when there were claims in the newspaper about things like vitamin C preventing cancer, and he would shake his head and say, “Wow, there’s no control. This is not a controlled study.”

So, when I would come home from college or grad school and talk about… I was always interested in tackling social problems like poverty and educational failure, and I’d share an idea with him. He would say, “Ah, well, that’s very interesting and it sounds great, but is there a controlled study? Has it been tested in a controlled study?” So, I found that very annoying, but I guess it kind of stuck with me. Then, as I went through my career, I initially worked for Congress and the Clinton administration; I was a Clinton administration official at DOD. In thinking about what to do after the Clinton administration, I knew something about research, and I knew about controlled studies. I knew that some randomized controlled trials had been done in welfare policy at that point, and I also knew that what I could bring to the table was that I really knew the policy process.

I’d worked with members of Congress to get important legislation enacted. When I was on the Hill, I had led some reforms at the Department of Defense. So, I knew how change was made in one of the largest bureaucracies in the world, and I thought I could combine those two: my knowledge of research and evidence, which was somewhat rudimentary at the time but grew, plus my ability to advance change in the policy process.

That’s when I decided to launch the Coalition for Evidence-Based Policy. That was in 2001. That’s how I got into it.

Bob Hahn:

So, do you want to tell us a little bit about that coalition and how it morphed into what you’re doing now?

Jon Baron:

We launched the coalition in 2001. It was always bipartisan. We worked first with the Bush administration, later the Obama administration, to advance evidence-based reforms in social spending programs. Initially, in the Bush administration, we worked with OMB to advance changes in their methods of assessing program performance. It was called the PART at the time, the Program Assessment Rating Tool, and we worked with them to get the concept of rigor in there: the idea of advancing the more rigorous forms of evaluation, like randomized trials and other good, rigorous non-randomized studies. We also initially got some set-asides into legislation, like prisoner re-entry legislation to advance the reentry of prisoners into the community. We got set-asides for rigorous evaluations, preferably randomized, into law.

And then at the end of the Bush administration, and really at the beginning of the Obama administration, we worked with policy officials to get enacted into law what are called tiered evidence programs in a number of different areas: in early childhood, in K-12 education, and elsewhere. These are government grant programs that make grants to state and local entities and to nonprofits. The largest grants go to interventions that have strong evidence of effectiveness, along with the requirement for a replication study, and then there are many smaller grants made to interventions that have promising or more preliminary evidence, along with the requirement for a rigorous evaluation. The idea there is to see whether they hold up in a more rigorous study, and if they do, then they can move into the top tier.

So long story short, these tiered evidence initiatives are designed to build evidence. They’re like a conveyor belt: build evidence, and for those that really make it into the strong-evidence category, scale them up.

Bob Hahn:

All right. So, help me out a little bit, you know, I don’t know who’s listening to this podcast, but I’m sure there are millions of people who are going to be listening. Tell me a little bit about why randomization is important, or just give me an example, you mentioned your dad talked about some stuff in medicine, so that we’re all on the same page.

Jon Baron:

So, the unique advantage of a randomized controlled trial comes from the process of randomly assigning a sizeable number of individuals into either a treatment group, which receives a certain program, let’s say a job training program, or a control group, which does not receive the program. That process ensures that there are no systematic differences between the two groups in any factor except one, which is that the treatment group participates in the program and the control group does not. All other factors are the same if you’re randomizing a large number of people. So, you’ve controlled, that’s why it’s called a controlled study, you’ve controlled for all those other factors. So, any difference in outcomes between the treatment group and the control group can confidently be attributed to the program, or the treatment, and not to other factors. That’s the unique advantage of a randomized trial and why it’s considered in many fields, including medicine, but other fields as well, to be the strongest method of evaluation when it’s feasible to do such a study.
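To make that logic concrete, here is a minimal simulation sketch, not from the interview, of the mechanism Baron describes: randomly assigning a large sample to treatment or control balances the groups on background factors, so a simple difference in mean outcomes recovers the program’s effect. The job training program, sample size, and dollar figures are all hypothetical.

```python
# Illustrative sketch: why random assignment supports causal attribution.
# All numbers are hypothetical, chosen only to make the mechanics visible.
import random
import statistics

random.seed(42)

N = 10_000           # a sizeable sample, as Baron emphasizes
TRUE_EFFECT = 1_500  # assumed earnings gain from a hypothetical job training program

# Each person has a baseline characteristic (prior earnings) that
# influences the outcome regardless of the program.
people = [{"prior_earnings": random.gauss(20_000, 5_000)} for _ in range(N)]

# Random assignment: a coin flip puts each person in treatment or control.
for p in people:
    p["treated"] = random.random() < 0.5

# Outcomes depend on the baseline factor, noise, and (for the treated) the program.
for p in people:
    noise = random.gauss(0, 3_000)
    p["earnings"] = 0.8 * p["prior_earnings"] + noise + (TRUE_EFFECT if p["treated"] else 0)

treat = [p for p in people if p["treated"]]
ctrl = [p for p in people if not p["treated"]]

# 1) Randomization balances the groups on a factor we never controlled for directly.
print("Mean prior earnings, treatment:", round(statistics.mean(p["prior_earnings"] for p in treat)))
print("Mean prior earnings, control:  ", round(statistics.mean(p["prior_earnings"] for p in ctrl)))

# 2) So the simple difference in mean outcomes recovers the program's effect.
diff = statistics.mean(p["earnings"] for p in treat) - statistics.mean(p["earnings"] for p in ctrl)
print("Estimated effect:", round(diff), "vs. built-in effect:", TRUE_EFFECT)
```

With a sample this large, the two groups should match on prior earnings to within a few hundred dollars, and the estimated effect should land close to the built-in 1,500, which is the point about attributing outcome differences to the program rather than to other factors.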

Bob Hahn:

Can you give me some examples of where this kind of approach has worked, sort of this randomized approach or this evaluation approach?

Jon Baron:

So, one example, which actually had an important influence on my thinking when I started the coalition: in the late 1990s, the federal government scaled up a program that provided after-school activities and homework assistance at low-income schools. The centers also operated on the weekend, and the kids could go there after school and get assistance with homework and other academic help. They would also get basketball, drama, various other activities. It sounds like a great idea.

So, it was tested in a very large randomized trial funded by the US Department of Education. The way they could do it is that the elementary school centers were oversubscribed: there were more children and families that wanted to participate than the centers had space for.

So, they used a lottery, meaning random assignment, to determine which children would participate, that’s the treatment group, and which children would not, that’s the control group. Two years later, the study results came back, and unfortunately, despite the academic focus of these centers, they showed no difference in academic outcomes, reading and math test scores and all the rest, between the treatment and control group students. So, that was disappointing.

But even worse, the students in the treatment group who had gone to the after-school centers had worse behavioral outcomes than students in the control group. It was roughly a 50% increase in school suspensions. So, why did that happen? Well, one likely reason, we don’t know for certain, but one likely reason is that when at-risk kids are grouped together in a semi-structured setting at the after-school center, some of them modeled delinquent behavior for the other kids.

I believe that explanation because it’s exactly what my two kids used to do when I would group them together. So, anyway, there are many examples of programs that are well-intentioned, that sound like very plausible ideas, but just don’t work when they’re put to the test in a rigorous study. That was one early example, but there are many, many examples like that of unfortunately disappointing results from ideas that really sound like they ought to work and are very well-intentioned. The good news is that there are some positive examples too, exceptional programs that really do work. We can talk about that, but that’s a concrete example of a large, well-conducted randomized trial that produced a very important finding.

Bob Hahn:

So, we do these studies. Is there any evidence that the decision-makers pay attention to them?

Jon Baron:

There is some evidence, but I think as the evidence-based policy movement has advanced over the last 20 years or so, it’s been more successful in building evidence, including in identifying programs that really work, produce large effects, and are replicated. On the evidence-building side, there’s been some great success. On the take-up side, does it influence policy decisions? There are some successes, but not as many. Some of the successes are with some of these tiered evidence initiatives that I described. There was one in K-12 education where large grants went to programs that really did work, like KIPP charter schools, which received a major grant to expand. These are a type of “no excuses” charter school: public charter schools that mainly serve low-income populations. There are KIPP charter schools in Baltimore, but in many, many other parts of the country as well.

I think there are about 250 nationwide. So, they got a large grant to expand, and the Department of Education, to its great credit, also funded a major replication study to see whether the promising effects from the earlier research could be reproduced at scale, and what they found for KIPP was that, yes indeed, that was true. They moved kids up about five to 10 percentile points in reading and math achievement, and that was sustained over several years. There’s also promising evidence that many years later more of these kids went on to college, about a six percentage point increase in the college enrollment rate. So, that was an example where there was promising evidence, it was reproduced, and they got a large grant to expand. The Nurse-Family Partnership in early childhood is another example. But what’s also happened, in some of these tiered evidence initiatives and more generally, is that sometimes there is an overlabeling of programs as evidence-based, so that the evidence standards are too low, and a lot of the money still goes toward programs that really have more preliminary evidence, evidence that often does not hold up when a more rigorous study is done.

Bob Hahn:

So you mentioned a word that’s sort of thrown about occasionally in Washington and elsewhere that everyone likes, “bi-partisan,” and I’m wondering if there’s an opportunity here, and I sort of think of you as a policy entrepreneur in this world, to bring together folks on both sides of the aisle?

Because it’s hard to be against evidence. I mean, do you see a way of forming a sort of durable political coalition that might be willing to listen to some of the stuff we eggheads do?

Jon Baron:

I think so! I think, you know, I’m somewhat optimistic on that front. You know, on the left there are folks saying, “We need to expand government, government should have a large role in improving lives,” and so on. On the right, there are folks who are saying, “No, we need more limited government.” So, that part of it, I don’t know. Those are values and ideology and so on.

But one point where I think everyone, or at least most people can agree, is that whatever money is spent should be spent on programs and strategies that actually work. That’s where evidence-based policy, I think, has its role, and it’s a way where it can bring together, you know, people on the left and the right on that basically common sense idea that when we’re spending money, it should go toward programs that are not just plausible, not just well-intentioned, but that actually deliver results.

Bob Hahn:

Over your 20 years or whatever in this area, and starting foundation-type things, do you see evidence that a bipartisan coalition is emerging, either at the state level or at the federal level?

Jon Baron:

It’s hard to know because we’ve just come through a period, and hopefully it’s going to mend a little bit, of extreme partisanship. But the part where I saw this really happening, and I hope we do get back to it: at the end of the Bush administration, we had worked closely with the Bush administration to get enacted into law a pilot program to expand the Nurse-Family Partnership and other evidence-based home visiting programs. Without going into tons of detail, these are home visiting programs for low-income families with young children. The Obama administration, when they came into office, looked at what had been done, and there was continuity there because some of the career staffers from the Bush administration were there again at OMB and elsewhere in the Obama administration, and the Obama folks expanded on what the Bush folks had been doing.

So, in the area of home visiting, for instance, it became a much larger initiative that was enacted as part of the Affordable Care Act. It’s called the Evidence-Based Home Visiting Program. It’s now a $400 million a year program to scale up the Nurse-Family Partnership and other evidence-based home visiting programs. So, that was a bipartisan initiative. It was reauthorized, I think, in 2015 or somewhere in the middle of the decade, again with overwhelming bipartisan support. In education, the tiered evidence program was basically reauthorized by a partnership of Senator Hatch, a Republican, and Senator Bennet, a Democrat, as part of the Every Student Succeeds Act, and that reauthorization went forward. So, I think at least on a modest scale, this concept has attracted bipartisan support.

Bob Hahn:

So some of the examples you gave were sort of what I would call binary, or zero-one, you know, either the program works or the program doesn’t, but is there an opportunity in this area to think about how to improve programs, whether they’re working well or not?

And can we use a similar approach, like this randomized approach you’re talking about, to do that, to sort of get more bang for our buck?

Jon Baron:

Well, let me give you a couple of examples. There was a nationwide randomized controlled trial of the Job Corps program. That’s a program that provides residential job training for low-income youth. It’s one of the Great Society programs that’s been around since the 1960s. The randomized trial found very disappointing effects. There were short-term gains in earnings that were fairly modest and that faded basically to zero over time. However, for one of the subgroups, youth who had disabilities, physical or emotional disabilities, there appeared to be large effects. So, our foundation is funding a replication trial with Mathematica that’s focused specifically on encouraging more of those youth to participate, and then doing that in a randomized way to see whether that promising subgroup effect, which is suggestive, not definitive, can be reproduced in a more definitive study that’s directly designed to answer that question.

There are other examples, even for programs that have very strong evidence, like the Nurse-Family Partnership, which, as I mentioned, is an early childhood home visiting program where nurses visit low-income mothers who are pregnant with their first child and teach them basic parenting, nutrition, not to smoke, not to drink during pregnancy. It’s been shown to have large, sizable effects in reducing rates of child abuse and neglect. That program uses randomized trials to test in-program improvements against the original.

For example, there was a test of expanding access by providing the mothers with the option of using a long-acting reversible contraceptive, a LARC, to prevent repeat pregnancies. That was tested in one group of families participating in the program, compared to another group of families that just received Nurse-Family Partnership as usual, without the emphasis on LARCs.

I think that study is still ongoing, but yes, that’s an example of how you can test program improvements against the original. Even though the original is effective, are there ways to improve it over time? It can be used as a program improvement strategy.

Bob Hahn:

So, what I hear you saying, and I don’t want to put words in your mouth, is that it’s not necessarily a one-shot deal, but you might think of it more as: you test, you learn, and you adapt, to use that phrase, test, learn, and adapt.

Is that how you think about it, and are people using it that way generally? Or, do they typically do the one-shot stuff?

Jon Baron:

I think it is thought about that way. I’ve had a couple of thoughts on that. Let me give you one other example, which I think kind of hits the nail on the head. One of the most impressive findings in all of social policy recently came out. It was an HHS-funded study of a job training program for low-income young adults called Year Up. It was an eight-site randomized controlled trial. It found earnings gains for the low-income participants of 30% to 40%, sustained over five years. Those are large effects. They were sustained over time with no sign of fade-out, and the effect was sizeable and statistically significant in each of the eight cities that were part of the study. So, the effects were replicated across all these sites; there’s evidence of generalizability. Now, this is a fairly expensive program. I can’t remember the exact number, but I think it’s about $15,000 per youth. That’s what philanthropy puts in, and then businesses also contribute roughly the same, maybe $14,000 each. I can’t remember the exact number. But we are co-funding, along with the Institute of Education Sciences, a test of a lower-cost version of Year Up to see whether it can be delivered not in standalone job training sites, but through community colleges. Using that infrastructure, it would be much lower cost. That study is still underway, but it’s also along the lines we just described: a way of testing a lower-cost and potentially more scalable version of Year Up to see if it can produce the same effects as the original.

Getting back to your question of test, learn, adapt: I think it’s absolutely critical to do that, rather than, you know, do it once and then you’re done and your program is in the proven category. I think it’s also useful in the earlier stages of evidence building, before you get to a big randomized controlled trial.

In the initial stages, when you’re developing a program, the first step is to determine whether you can implement it successfully. If it’s a job training program, can you manualize the curriculum? Can you deliver the curriculum in accordance with the manual? Will people show up for the program? Will they stick with the program? Those basic functionality questions. Then, or maybe even as part of that process, you can test short-term outcomes, like, did they find a job, et cetera, maybe with reference to a comparison group, or did they learn the curriculum? Did they master the curriculum and get the certification, let’s say? So, that’s the next step in the testing process. Then, once you pass that threshold and have promising evidence, you move on to a more rigorous test in a larger-scale randomized controlled trial.

All along that process, one learns things: the researchers learn things, the program learns things. You know, maybe if we taught the curriculum differently, if we put this module in front of that one, we could have greater retention in the program, people will stick with it, and so on. So, a lot of that is an iterative process where you’re learning along the way, so that eventually, hopefully, you get to the point where you have a program that is well implemented, manualized, has promising evidence, and is ready for a more definitive test in a larger randomized controlled trial.

Bob Hahn:

So, it’s not only test, learn, and adapt, but it’s sort of building in these ideas to make things scalable and maybe lower the costs, sort of like a Miller Beer commercial, or whatever it was: less filling, tastes great. You’re always looking for a better, cheaper way of doing things that will get you where you want to go.

Let me ask you about a piece of legislation that Congress passed a few years ago, which I know you know about: the Evidence Act. What’s your take on it? Do you think it’s actually having any impact, and can you briefly summarize it for our listeners?

Jon Baron:

I think it’s too early to tell the impact because its implementation is underway. I’m cautiously optimistic. One main focus of the Evidence Act was on data: increasing the use of data for program improvement, for research, and for evaluation. And whether it’s because of the Evidence Act or maybe because it accelerated pre-existing trends, there has been great improvement on that front. From our standpoint, doing rigorous testing through randomized controlled trials, the largest cost in a study like that is usually the cost of data collection.

If you have to do a survey or an interview… I mean, in the old days, before you had access to data to measure the outcomes, if you had a thousand people in your sample, you’d basically have to follow up with them every few years and track them as they moved to other cities and so on. You’d have to send them Christmas cards, and if they came back “return to sender,” you’d need to put more resources into finding them. It was a very labor-intensive process, and that was usually the biggest cost in a randomized controlled trial. But now, because of the advent of government databases and also some private databases, like the National Student Clearinghouse in post-secondary education, outcomes like earnings, college graduation rates, and student test scores can be obtained through these administrative data sources, sometimes at nominal cost, and also in a very complete way; sometimes you can get data on everyone regardless of whether they move across the country.

So, that’s made the cost of these randomized trials and many other longitudinal evaluations much, much lower, in many cases, not all cases, but in many. That’s been an important change. Within the federal government, there’s been much greater access to databases, for example, those maintained by the Census Bureau and by the Department of Health and Human Services on employment and earnings outcomes of program participants. So, that’s been a very important development, and I think the Evidence Act has encouraged and accelerated it over time. That’s one piece of the Evidence Act, the data part of it.

The other part of the Evidence Act, which is more in process, is that it required the federal agencies to have annual evaluation plans and to develop learning agendas for building evidence, and that process is ongoing. I think, you know, OMB is leading that charge, and we’ll see how it goes.

Our foundation is providing some support, through Georgetown University, to partner with the federal government to identify opportunities for randomized evaluations of federally funded programs that are highly promising, as part of these learning agendas and evaluation plans. But that process is, I think, really just gaining steam, and we’ll have to see how it plays out over time.

Bob Hahn:

So, listening to you talk about these learning agendas, and I, as you know, was on the Evidence-Based Policy Commission, I find that to be a rather exciting innovation, but I wanted to get your take on it. Isn’t it a gentle nudge, to use a term from behavioral economics, to the folks in the agencies to begin to think about what their objectives are, and whether they can supply what you might call rigorous evidence that sheds light on the best ways to achieve their objectives?

Jon Baron:

Yeah, I think it’s a very promising approach. Some of the challenges are that some of the federal agencies may not yet have the capacity, the in-house expertise, to build evidence and move programs toward rigor, just because many of them don’t have an institutional history of doing impact evaluations, for instance, to measure program effects. So, capacity is an issue, and I know that OMB and the Office of Evaluation Sciences, and others, are working with agencies to try to build that capacity.

I think one of the other challenges, and this is true not just in the federal government, not just in government, it’s also true in philanthropy, is that there may not be an appreciation of how difficult it is to find programs that produce meaningful effects. So, it is possible, and it has happened, that federal agencies have funded many excellent randomized controlled trials and produced a long string of wonderful studies that found small or no effects for almost all of the interventions being studied.

So, to give you an example, the Institute of Education Sciences, which is the research arm of the Department of Education, was created in 2002, and I served on the board of IES. That was an exciting time during the 2000s, because IES started funding all of these big randomized trials of, you know, school choice programs and different curricula and different teacher training programs and so on, and the thought was, “Oh, this is wonderful. These studies will identify some things that work and some things that don’t, and that will give school districts around the country the knowledge they need to improve outcomes for students.”

Well, it didn’t work out that way because as it turns out over the next 10 years, close to 90% of the programs and strategies that were evaluated in these studies were found to produce weak or no effects compared to what schools were doing anyway in the control group.

And at the end of 10 years, you could not point to something, even among the 10% or so that were positive; there were no blockbusters there. You couldn’t really point to something and say, “Look, we’ve done it. School districts, if you implement this program, you’re really gonna move the needle on student outcomes.”

So, there’s also that knowledge of how you really have to be strategic, and I could talk about that, so as not to produce a long string of disappointing results. I think that’s another challenge. Capacity is the first challenge, and the second is a strategic approach to evaluation that recognizes the steep challenges in finding programs that produce the hoped-for meaningful improvements in people’s lives.

Bob Hahn:

So, that’s interesting. I’m mostly an academic on the other side of this, and I do some of these randomized controlled trials, and I worry about a different problem. I worry about what I might call publication bias: there’s a high reward for academics who find significant results, say of programs, and a low reward for finding no results. So, I worry about the fact that some of these negative findings may never make their way into the literature. Is there an analogous problem in government, or do we have a repository somewhere where we can check the winners and the losers, an easy way for people to get a perspective on what works and what doesn’t and all the tests that have been done?

Jon Baron:

There’s not an easy repository, but I think the agencies that fund large impact evaluations in social policy, like the Institute of Education Sciences, the Administration for Children and Families at HHS, the Social Security Administration, and the Department of Labor, when they fund a big evaluation, the results are always made public. So, for the big evaluations, there’s not a publication bias problem there like there is, and I agree with you entirely, in academia. For some of the smaller studies they fund, like NIH funds certain social and behavioral interventions, and IES has a research arm that funds mostly smaller, more preliminary studies, there may be that problem of publication bias.

At our foundation, when we fund a randomized trial in our division of evidence-based policy, and we fund a lot of them, every one of those gets its results posted on the Open Science Framework, and we summarize the findings on our website, for the exact reason you’re talking about. The disappointing results and the positive results should be presented in a straightforward way, without spin, so that people can understand the results and learn from them.

Bob Hahn:

So, talk to me a little bit, without bragging too much, because Arnold has the reputation of being the innovative leader in this area, and that’s why I referred to you as a guru: how do you think about strategically allocating resources to further the goal of evidence-based policy, since you’ve been doing it for a while?

Jon Baron:

So, on the evidence-building side: the Arnold Foundation was one of the main funders of our nonprofit, the Coalition for Evidence-Based Policy, and then they invited our organization to join them as their evidence-based policy division back in 2015, which was a great development for us. I think we became a funder at that time, as part of the foundation. We specialize in funding randomized trials of social programs, and I think we are probably the largest philanthropic funder of US randomized controlled trials. We came in recognizing the challenge that I mentioned, that it’s really harder to find programs that produce meaningful effects than is commonly appreciated. The way we sought to address that was to take a leaf out of the field of medicine.

And we said, “Look, in most cases, before we will fund a sizeable randomized controlled trial, we are going to look for promising prior evidence.” We’re going to look for a strong signal from the earlier research. That can be quasi-experimental research. It can be a small pilot randomized controlled trial. It can be a related intervention that’s been shown effective, say, in older children, and now it’s being adapted and tried in younger children. But we look for a strong signal from prior research that a meaningful effect is possible, and that is primarily where we invest in randomized controlled trials: to see if that promising effect can be reproduced in a more definitive study, and if yes, whether it can be reproduced in multiple sites. We often fund replication studies: a single-site randomized trial looks great, but can it work in other areas?

So, in other words, we look for what has already been studied and is promising, as a way of optimizing our chances of success in growing the number of programs that are backed by strong, replicated evidence. That is our goal: building the body of programs that are backed by strong, replicated evidence of meaningful improvements in people’s lives. That, we believe, is the critical missing piece needed in social policy. There just aren’t that many programs where you can say, “Look, this thing works! It works here. It works there, at different sites. The effects are large. Let’s implement this thing more widely, faithfully to the model, but let’s expand it, and you can have confidence that you’ll move the needle on an important social problem.” So that, for us, is the promised land in terms of evidence building, not to get biblical here. You asked me to have some humility, but that’s the biblical concept anyway. The non-biblical concept is that we’re trying to grow what we see as the critical missing piece needed to solve social problems, which is strong, replicated evidence of meaningful improvements in people’s lives.

Bob Hahn:

I don’t know if this is possible, but, you know, I’m not an MBA, I’m an economist: do you try to think about this when you’re developing your strategy? Do you have any notion of return on investment, or is that just way too speculative to even think about? Do you just sort of look at what you were saying earlier, does a pilot work, is there other evidence to suggest it might work? Because ultimately, I think you folks are interested in scalability and sort of what I call the Miller Beer commercial, the “less filling, tastes great,” if you’ll forgive me on that.

Do you do this based on a lot of judgment, or, you know, data, or some combination thereof? You know, how does it work?

Jon Baron:

Well, we have a team of people who really know the evidence very well. We monitor the literature of randomized and non-randomized studies very closely. We have a dragnet on the literature; we have a team of people who do this. We look for promising findings, and when there is a promising finding, we reach out systematically to the study authors or the program to see if they’d be interested in doing a trial. We also have a solicitation process where we ask researchers and programs to submit to us in an open way. It’s a very straightforward process, and we ask for the prior evidence, we ask them to show us the prior evidence, and we review the studies. So, we’re very systematic about looking at prior evidence. It’s not a formula; we’re not looking for an effect size of 0.2 standard deviations and so on.

It’s more of a judgment call, and it depends on the things you described. Is this a scalable program? Maybe the evidence is pretty good, not great, but boy, if this thing works, it’s a low-cost program, it’s scalable. So, we balance a number of factors in determining what to fund as a randomized trial. The good news, and I don’t know if this is what you were getting at in terms of return on investment, is that among the studies we’ve funded that have produced an interim or final report, we are seeing a success rate of between 30 and 40% in finding a positive effect on one of the key targeted outcomes. Now, people will say 30 to 40%, well, that means 60 to 70% are null or close-to-null findings. That’s true. On the other hand, that ratio is way better than what is typically found in large rigorous randomized trials, which is more along the lines of 15%. So, we think the results, and the much higher success rate, are starting to validate our approach of looking for promising prior evidence.

Bob Hahn:

Let me ask you a question that in some ways is on everyone’s mind, and that’s related to COVID. Do these kinds of approaches that we’ve been talking about for the last half hour or more have relevance for thinking about pandemics or COVID, and where do you think foundations or the government should be playing a role in doing trials that could affect how we think about this issue?

Jon Baron:

Two types of programs come to mind. One is learning loss as a result of COVID. Kids are coming back to school, and many of them have lost a year of learning, or had a year of potentially less effective learning, because they haven’t actually been in the classroom. There are programs, one-on-one tutoring programs, especially in early elementary school, that have been shown in rigorous randomized studies to be effective in moving kids up toward grade level, especially struggling students. There was another study that came out, not in early elementary school but in math: Match tutoring for ninth graders in Chicago, which also showed very positive effects.

So, there are evidence-based strategies right now that can be used to address learning deficits caused by COVID. Getting to your point about the Miller Beer, you know, I’m a teetotaler, so maybe I think about it as McDonald’s. In tutoring, we really need the McDonald’s of tutoring. We’re not quite there in terms of the evidence. You need something that is scalable, probably high-dosage, meaning kids are getting tutored four or five times a week, done at reasonable cost, perhaps not using certified teachers but using senior citizens who are trained to tutor, or recent college graduates who are willing to do it for a modest stipend. Something that has been demonstrated effective and also has reasonable costs: an off-the-shelf, relatively modest-cost strategy that can be used by school districts. So, that’s number one, tutoring.

The other one that comes to mind: there was a reemployment program for unemployed workers that was tested in Nevada. It was tested twice. It was tested by the Department of Labor in a major randomized controlled trial in the aftermath of the Great Recession, when unemployment in Nevada was like 10, 11%.

The strategy is, when people come in to claim the unemployment check, to provide them immediate job placement, job search, and resume assistance in that initial meeting, to try to move them back quickly into the workforce. Our foundation funded a replication trial, also in Nevada, during a much healthier labor market, I think from 2014 to 2018, which found almost exactly the same effect. It was about a 15% increase in earnings over three years. So, this is a strategy that’s been shown effective in both a weak labor market, like the one we have now, and a tighter labor market, and it’s a low-cost strategy; it costs about $250 per person. As the nation recovers from the pandemic and has an overhang of unemployment, this one would be a great candidate to scale outside of Nevada and see whether it works under current conditions.

Bob Hahn:

What I hear you saying is that there are opportunities to use these tools on things like COVID, to think about how to come out of this mess in a better way, using your McDonald’s metaphor or whatever.

Let me ask you one more question in closing. Suppose you had President Biden’s ear for 10 minutes on evidence-based policy. What would you advise him to do to make a real difference here? What would be your top three, or top one, or whatever?

Jon Baron:

Well, so I have 10 minutes, is what you said, with President Biden? Okay. For the first part, I’d try to use three minutes of it to convince him that with the way our country has been trying to tackle social problems, like low-performing schools, stagnant wages, and rising healthcare costs, we’ve seen very little progress, and the central reason is that to address those problems, it’s not enough to simply spend money on ideas and programs that sound plausible, because however well-intentioned, many programs unfortunately don’t work, as we’ve seen again and again when they’re put to the test in a rigorous study.

So, we really need an effective mechanism in the federal government and in state governments, but since I’m talking with the president, let’s focus on the federal, to systematically test many, many, many more approaches to identify the subset, the exceptional programs, that really do work, that produce large impacts, and then to focus funding on scaling those up.

So, I would suggest that he bring onboard, in a senior position either in the Domestic Policy Council or at the head of OMB, somebody who really understands and prioritizes evidence, who understands the challenges and makes it his or her top priority, having the president’s ear, to advance this: systematically, rigorously test programs in a way that is designed to strategically grow the number that are shown to work, and then to reallocate funding toward those that really do produce meaningful impacts. That would be my main suggestion.

Bob Hahn:

Thanks. We’ve been talking with Jon Baron, who’s Vice President of Evidence-Based Policy at Arnold Ventures. Thanks, Jon, for joining us, and I hope we can have you back at some future point to see how things are going in this very exciting domain.

Jon Baron:

Thank you. I’ve enjoyed being here. I appreciate the invitation. Look forward to talking again at some point.
