Scott (00:00):
Hi, and welcome back to Two Think Minimum, the podcast of the Technology Policy Institute. It’s Thursday, September 1st, 2022, and I’m Scott Wallsten, president and senior fellow at TPI and your humble podcast host. Today, I’m delighted to have Mike Rosenbaum with us. Mike is an economist, lawyer, and entrepreneur. He worked at the State Department and the White House during President Clinton’s term, which is where he and I met. After clerking for the Honorable Diana Gribbon Motz of the U.S. Court of Appeals for the Fourth Circuit, he founded his first company, Catalyte, in 2000, focusing on finding, training, and advancing technology talent. Fourteen years later, he founded Arena Analytics, which aims to help organizations diversify, vitalize, and stabilize their workforce. And more recently, Mike was a candidate for Maryland governor. What’s particularly interesting for our audience about these companies is that they use data, AI, and machine learning to avoid biases built into more traditional workforce development tools. Mike, thanks for joining us.
Mike (01:00):
Thanks so much for having me.
Scott (01:02):
Give us an overview of your companies and what they do.
Mike (01:04):
Scott, thanks again for having me. And I’m so excited to be on a podcast with you because some of the initial germs of my first company came from work that you and I did together at the Council of Economic Advisers in the nineties. But for the background behind my first company, Catalyte, I’ll give you a little bit of background on me and how I ended up in this place, and then talk a little bit about the companies themselves. I grew up in Bethesda; my father is a lawyer and my mother was a teacher. And my grandfather left Germany as Hitler was coming to power. As a result, when I was growing up, the basic message in my house was, we’re not safe, because this could happen here.
Mike (01:48):
Prior to the 1990s, there had been pathways without a college degree to dignity, to some basic level of security, through manufacturing and industry. But by the late 1990s, those industries had declined massively. And so the question was, if we need everyone to have a pathway to dignity to ensure stability of our political system and our economy, what was the next thing now that manufacturing and industry had declined so much? And the problem was, there were structural problems that prevented those who might have worked in a previous generation in manufacturing or industry from moving to growing industries such as technology and healthcare.
Scott (02:19):
Let me interrupt you for a second, because in the nineties, that was a pretty optimistic time. I mean, there was the dot-com bubble and we thought we were, you know, venture capitalists would pay for everything we did forever. And was this aspect that you’re talking about really on people’s radar, or did you see something that really wasn’t at top of mind?
Mike (02:36):
It was an optimistic time for classic societal elites.
Scott:
Uh huh. Right.
Mike:
So it was an optimistic time if you went to a fancy college. It was an optimistic time if you had the credentialing and the networks to be able to access economic, political, and social elite ecosystems in the U.S. However, if your parents were working at GM or Bethlehem Steel and had pathways to decent paying jobs because of organized labor and organized labor’s relationship with manufacturing and industry, if that was the path that you had seen in your parents’ generation, and that was the path you were going to take, it was not an optimistic time.
Scott:
Mhm. <affirmative>
Mike:
Because those jobs were going away. And so when you and I met, I was doing some work in and around tech policy and then around labor issues. And after we worked together at the CEA, I did some work for the Vice President’s office. And the Vice President had the portfolio that included the Empowerment Zone program and policies around it. The Empowerment Zone program specifically was built around a set of ideas that underinvested communities were untapped retail and distribution markets. And they were untapped retail and distribution markets because they were often located proximate to central business districts. And so the Empowerment Zone and related policies were built to incentivize retail and distribution companies to expand in underinvested communities. I disagreed with those policies.
Scott:
Mhm. <affirmative>
Mike (03:56):
It might be true that an underinvested community is an untapped retail market, but more significantly it is an untapped talent market. It is an untapped talent market because the market for talent is and was based on resumes; resumes correlate with race, gender, and class, but they’re not necessarily great predictors of success in a job. My field was empirical economics. And I said, if we could apply data to this problem, such that we could rely on information other than a resume and find a path for someone who had all of the raw material to be an awesome X, but didn’t have the opportunity to get that job because of structural limitations in the economy, and we could convince the employer of X to hire that person, then everyone would be better off. Folks from underinvested communities who were subject to this bias would be better off, and employers who need better talent would be better off.
Mike (04:43):
And so that was my pitch as an alternative to the Empowerment Zone program. I lost that argument. I thought I was going to go back and become an academic, and my academic advisor, the person who was going to help me get the assistant professor job, said, “I don’t think you want to be an academic. I think you want to be an entrepreneur.” And she was right. And so I moved to Baltimore because GM and Bethlehem Steel were major employers in Baltimore. And their employment was down by something like 95%, and as a result, all the stuff that we all know happens was happening. And I started what was originally a nonprofit organization that used what you would describe today as very simple machine learning to identify the likelihood that someone would be in the top 2% of all software engineers after receiving training, based on a series of metrics of software engineering outcomes.
Scott (05:29):
What was the starting point, I guess? I mean, yeah. Well how did you know what data to use and where it would come from?
Mike (05:35):
I didn’t. So in the early days—and for what it’s worth, I quickly shut down the nonprofit and started a for-profit company doing the same thing, for reasons I’m happy to talk about. But that was the core entity. Originally what we did was try to use academic research on learning styles and psychology and tie it to some of the early thinking about software engineering productivity. We quickly realized that some of those ideas were not mature enough yet to really do this. What we actually ended up doing was hiring folks we would describe today as data scientists, but at the time you would have described as applied math folks.
Scott (06:11):
I like that term better.
Mike (06:12):
Right? Applied math background folks, who may have been biologists or may have been pure applied math folks. And we asked them to experiment. We said, figure out if you can find data that seems to correlate with an outcome. And at the same time, let’s look at the implications of using these alternative techniques on outcomes across race, class, and gender. Race and gender obviously are one bucket. For class, we used college degrees as an indicator of class. And what we quickly found was that we could find no statistically significant correlation between a four-year college degree and performance as a software engineer. So we kept experimenting, and over several years we experimented in a whole bunch of ways. We had a theory that hand-eye coordination would correlate with success as a software engineer. And we would have people assemble tinker toys and then collect data points related to their assembly of the tinker toys.
Mike (07:01):
Like the way they approached it, how much time they took, where they started, how long it took them to change something, things like that, to see if we could correlate that with success as a software engineer. There was a piece of research out of Hopkins medical school that rapid eye movements correlated with the ability to defer gratification. So we tried to use cameras to measure rapid eye movements and see if we could correlate that. The problem was, this was a long time ago, and the cameras didn’t have high enough resolution to pick up the rapid eye movements, so we could never really get good data out of it. But we kept iterating: finding a theory, applying it, finding another theory, applying it. Occasionally we would do unsupervised research. We would find datasets where we could pull correlations, to try to reduce the biases we introduced ourselves when we used a hypothesis. And then we correlated it with outcome metrics.
Mike (07:48):
Things like, how quickly does someone get promoted in a job? What’s the retention in that job? If they are charging an enterprise for someone’s time, how long do they keep someone? Things like that were our early metrics that we would correlate with. Over time we collected more sophisticated metrics, like the error rates that folks would introduce, or the rate of ramp when someone was faced with a new piece of work. And we built more and more sophisticated models, and the technology improved as well, which allowed us to make predictions with increasing levels of accuracy.
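To make the kind of outcome-correlation screening Mike describes concrete, here is a minimal illustrative sketch, using entirely synthetic data and invented variable names rather than Catalyte’s actual pipeline, of testing whether a binary candidate attribute (such as holding a four-year degree) carries statistically significant signal about a performance metric:

```python
# Illustrative sketch only: test whether a binary attribute (e.g. "has a
# four-year degree") correlates with a continuous performance metric.
# All data here is synthetic; the real features and outcome metrics differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
has_degree = rng.integers(0, 2, size=n)             # binary candidate attribute
performance = rng.normal(loc=70, scale=10, size=n)  # outcome metric, generated independently of the degree

# Point-biserial correlation: binary predictor vs. continuous outcome.
r, p_value = stats.pointbiserialr(has_degree, performance)

if p_value >= 0.05:
    print(f"No significant correlation (r={r:.3f}, p={p_value:.3f})")
else:
    print(f"Significant correlation (r={r:.3f}, p={p_value:.3f})")
```

Because the synthetic outcome here is generated independently of the attribute, this sketch will generally report no significant correlation, which is the shape of the finding Mike describes for college degrees.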
Scott (08:25):
So were your models different from others on the input side or also on the output side? You looked at their actual performance in jobs, and often in government programs, we aren’t clear about the outcome. Sometimes, in the worst case, people think spending the money is the outcome, when at best that’s an input. But I would imagine that firms doing training would have an incentive to look for the right outcome measure. There are many labor training programs, and some of them must have had good measures, but were you innovating on both sides of the equation there?
Mike (08:52):
What we found was that helping an enterprise understand what outcomes they were looking to optimize for became part of the work. Particularly in the earlier years. The way we relate to each other as human beings is rife with biases. And enterprises are made up of people. When leaders of enterprises decide who to bring into their organization, there is a natural instinct for people to look for people like themselves. And when you say, what are you looking for in an employee? You end up introducing a bunch of bias, if you aren’t careful about what metrics you’re optimizing for.
Mike (09:32):
An enterprise wants to build products that people will buy. The leaders of that enterprise want that product delivered without errors. They want that product delivered quickly. And so you can tie metrics to those outcomes that allow you to optimize for what you are looking for. Catalyte grew in the early days primarily by helping venture-backed technology companies build out their workforces. That shifted in 2011, when Catalyte got a deal with Nike. Nike was building one of the early wearable tech products, the FuelBand, and ended up moving most of the software work on that product to this platform. In 2013, Nike released a large set of data comparing the Catalyte platform to hiring staff or to sending work overseas using other models. They found that this model generated half the error rates and delivered products at three times the speed. Being able to say in an objective way, “this model for building out our workforce is working,” was critically important for Catalyte’s ability to talk about productivity and workforce in the world.
Scott (10:54):
So you’re basically talking about Moneyball for labor.
Mike:
That’s exactly what made Catalyte take off. In the early days, we would hand out the book Moneyball as part of the sales process to explain what we were doing. But the problem is, the only people who had read the book were people really into baseball and people in the finance industry. And then around 2011, a movie with Brad Pitt came out, everyone saw it, and the movie became shorthand in Catalyte’s sales process for explaining why what Catalyte was doing was so productive.
Scott (11:25):
So Brad Pitt has had some positive effects on the world.
Mike:
Brad Pitt has had some positive effects in the world. And in fact, the CIO of Sony Pictures many years ago, who was so excited about this, gave us a copy of the original Moneyball poster.
Scott:
Wow.
Mike:
To put up in the office. Because he was so thrilled by this.
Scott (11:46):
That’s nice. And so you started with computer science coders, I guess. How many occupations have you been able to apply this to?
Mike (11:54):
Catalyte today operates in all the related tech functions. How many actual jobs that is, is probably a more complicated and gray question, but it includes things like digital ad work, which isn’t exactly software engineering but is related to it, and quality assurance. Catalyte applies these ideas to all of those related areas. My second company, Arena, which you mentioned at the beginning, operates in healthcare providers, and does work across almost all of the jobs you could imagine in a hospital system, a skilled nursing operator, an assisted living operator, or a CCRC (continuing care retirement community), including clinical, administrative, and managerial roles.
Scott (12:40):
Oh, that’s interesting. I don’t know whether we should go into this or not, but did you start that as a separate company because of specific rules related to healthcare, or is it just hard to be in the healthcare industry?
Mike (12:50):
No… Catalyte was really set up as a vertically integrated model to identify, provide training, and move folks into a job. Arena was originally a project inside of Catalyte that I spun out as a separate company at the end of 2014, designed explicitly to reduce bias based on race, class, and gender in healthcare provider workforces. Catalyte operates today on a thousands-of-people level on an annual basis. Arena operates on a millions-of-people level. It is a different scale, but what Arena does is essentially change how healthcare providers make hiring and promotion decisions.
Scott (13:30):
And you’ve seen different outcomes as a result?
Mike (13:35):
Massively different. Arena is deployed into healthcare providers, and it is also deployed into places like quick service restaurants. So someone might apply to a Taco Bell, but have the ability to be a really great manager in the maternity ward at the hospital down the street, and it just doesn’t look like it on paper. That person self-censors, and says, “Well, everyone I know works at a Taco Bell. I don’t know anyone who’s a manager at a maternity ward, so I would never apply for the job.” And the executive responsible for hiring that manager at the maternity ward says, “I would never hire someone whose resume says Taco Bell.” So what Arena does is route the individual from one place to the other, and then build trust in the hiring manager to make hiring decisions in a different way. And the result of that is much more diverse and inclusive workforces, words I’m almost reluctant to use because they’re so overused, but I’m happy to quantify that if that’s helpful, but–
Scott (14:30):
I love quantification, so… Apparently, I also like to turn verbs into nouns, but yes, if you quantify it, that would be great.
Mike (14:37):
Arena reduces bias based on race and gender in ex ante hiring and promotion decision making, that is, the decisions that are otherwise made as to who’s going to get hired or promoted into a particular role, by between 91% and 99%. Class is a little bit mushier because you’re looking for correlates with it, something like a college degree. But for race and gender, you have better data because large enterprises have to report EEOC or OFCCP data, and you have access to that data to measure the impact. It massively reduces the bias in that decision making and increases inclusion; when I say inclusion, what I’m really talking about is, who’s getting promoted? Who’s in higher-level jobs inside of a healthcare provider?
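For readers who want to see how hiring bias is commonly quantified in the EEOC world Mike mentions, a standard screen is the “four-fifths rule”: each group’s selection rate is compared to the highest group’s rate, and a ratio below 0.8 flags potential adverse impact. This is a general regulatory screen, not necessarily the metric Arena reports, and the counts below are invented for illustration:

```python
# Illustrative sketch of the EEOC "four-fifths" adverse-impact check:
# compare each group's selection rate to the highest group's rate.
# The applicant and hire counts below are invented for illustration.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants from a group who were hired."""
    return hired / applicants

groups = {
    "group_a": selection_rate(hired=60, applicants=100),  # 0.60
    "group_b": selection_rate(hired=30, applicants=100),  # 0.30
}

best_rate = max(groups.values())
for name, rate in groups.items():
    impact_ratio = rate / best_rate
    flag = "OK" if impact_ratio >= 0.8 else "potential adverse impact"
    print(f"{name}: rate={rate:.2f}, impact ratio={impact_ratio:.2f} ({flag})")
```

Here group_b’s impact ratio is 0.30 / 0.60 = 0.5, well under the four-fifths threshold, so it would be flagged for review.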
Scott (15:23):
And that’s actually a good segue to some policy questions. I mean, you know that there’s a huge debate about algorithmic bias, not always well defined, but basically the concern that different algorithms that platforms use end up reinforcing biases. And we know that that can be true, but you’re using the same tools to opposite effect. Do you have thoughts on the whole debate and what, if anything, policymakers should be doing? What’s the right way to think about all this?
Mike (15:50):
Totally. I mean, I think obviously, you and your audience are ground zero for national and federal government thinking on this topic. I think the natural instinct is to say AI has a substantial risk of institutionalizing the biases that human beings have, and therefore we should shy away from it. I would suspect that your audience is sophisticated enough to realize that that may be an overly simplistic way of thinking about it. The question is, where does the risk come from and how do we regulate it? Regulators often gravitate to explainability in AI, which is a tool that folks use, but more generally regulators focus on process, regulating process as a way of dealing with the risk that we all recognize in AI done poorly, or done in a way that institutionalizes bias. My own point of view is that the problem with process regulation is that you’ll never get all of it.
Mike (16:46):
You will always be a few steps behind technological innovation. Regulators will never be able to stay in front of it. The conversations we need to have as a country and society about how we want to think about issues of privacy, and about the future of these technologies, just can’t move fast enough for process to be the only tool for this. And so I personally am a fan of managing what you’re trying to manage; regulate what you’re trying to regulate. What we are afraid of is institutionalizing bias. In the world that I operate in, regulations coming out of the EEOC and OFCCP provide the regulatory framework for this topic. At one end of the regulatory spectrum would be to say no technology is allowed that influences how a hiring decision gets made. The problem is that without technology, these decisions are already massively rife with bias. Massively rife with bias. And so by saying none of it is allowed, you prevent the innovation that actually could deal with the problem that we all recognize comes from evolutionary tribalist instincts.
Scott (17:55):
Right. I mean, well, that seems to be a kind of a big problem with the debate overall, that there’s not the– what is the comparison? The comparison is without algorithms. We know that people are just horribly biased.
Mike (18:05):
Horribly biased.
Scott (18:07):
We have all of human history to, to show that.
Mike (18:10):
Totally, totally, totally. So the question is, what do you do about it? One option is to use innovation like that developed in my companies. We use a method that we essentially repurposed from the intelligence community. We are headquartered in Baltimore, which gives us proximity to Fort Meade, which gives us a pool of talent that is extremely sophisticated on these questions. The intelligence community developed prediction subject to a constraint as a method to detect deep fakes. The colloquial way to say it is that you’re applying AI to AI, but the more specific way of saying it is, “I want to build a neural network or a model that is going to be able to distinguish between things, subject to the constraint that once that distinguishing has happened, I cannot tell based on race, class, and gender which bucket someone fits in.”
Mike (18:48):
So what that means is that, if you use these methods, you should end up being able to say: “I now have a prediction, subject to the constraint that race, class, and gender are not distinguishable in these distinctions.” That’s really just a method that was being used by the intelligence community to detect deep fakes, and it turns out to be extremely effective at reducing that bias. A human being couldn’t do that, but a technology can. And from a regulatory perspective, it is fair and appropriate to say: you, Arena and Catalyte, have the information and the knowledge of these technologies, and so we as a society are going to hold you accountable to the bias reduction outcomes we are talking about.
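The “prediction subject to a constraint” Mike describes is close to what the machine learning literature calls adversarial debiasing: a predictor is trained jointly with an adversary that tries to recover the protected attribute from the predictor’s scores, and the predictor is penalized whenever the adversary succeeds. The sketch below uses synthetic data and logistic models, and makes no claim to match Arena’s or the intelligence community’s actual implementation:

```python
# Minimal adversarial-debiasing sketch (synthetic data; illustrative only).
# Predictor: logistic model scoring "will succeed in role".
# Adversary: logistic model trying to recover the protected attribute A
# from the predictor's score. The predictor's update includes a penalty
# term that pushes its scores toward being uninformative about A.
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 5
A = rng.integers(0, 2, size=n)                   # protected attribute (synthetic)
X = rng.normal(size=(n, d)) + 0.8 * A[:, None]   # features correlated with A
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0.4).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

w = np.zeros(d)      # predictor weights
u = 0.0              # adversary weight on the predictor's score
lam, lr = 2.0, 0.05  # debiasing strength, learning rate

for _ in range(300):
    score = X @ w
    p = sigmoid(score)          # predicted success probability
    a_hat = sigmoid(u * score)  # adversary's guess at A from the score
    # Adversary ascends its log-likelihood of recovering A from the score.
    u += lr * np.mean((A - a_hat) * score)
    # Predictor descends task loss PLUS lambda times the negative adversary
    # loss, i.e. it is rewarded for making the adversary's job harder.
    grad_task = X.T @ (p - y) / n
    grad_adv = X.T @ ((A - a_hat) * u) / n  # = -d(adversary loss)/dw
    w -= lr * (grad_task + lam * grad_adv)

# With the constraint active, the mean score gap between groups should be
# smaller than the raw 0.8-per-feature group shift would otherwise produce.
gap = abs((X @ w)[A == 1].mean() - (X @ w)[A == 0].mean())
print(f"mean score gap between groups: {gap:.3f}")
```

The λ term trades task accuracy against how much group signal survives in the scores; larger values push the two groups’ score distributions closer together, which is the constrained-prediction idea in miniature.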
Scott (19:20):
You mean showing a lack of bias based on some metrics.
Mike (19:26):
You are never going to eliminate bias. Right. You’re never going to eliminate it 100%. But you’re enough better than, to your point, the baseline. You have not institutionalized the bias that was already there. That’s a way of regulating that doesn’t make us vulnerable to all of the weaknesses of regulating process. Because the reality is that explainability, and we could talk about explainability for four more podcasts, has limitations as a regulatory tool, in part because neural networks are hard to explain… There’s a lot of stuff in that. If you regulate the outcome, then you’re able to address those limitations more directly.
Scott (20:00):
This is not a fair question at all, but is it possible to apply that to debates about bias in what your Twitter or Facebook feed shows you?
Mike (20:08):
Totally, right. I mean, think about what’s possible from a technological perspective to manage content questions. There is a wide range of possibilities there. The question becomes, what as a society do we want? And obviously, your audience is much more sophisticated on this topic than I am. But this question of social media platforms, consent, and privacy is a core one. I remember a thousand years ago, and when I say a thousand years ago, I’m talking about before 2012, Catalyte had experimented with using social media. Someone would log in with a Facebook account when applying for a job. And we realized that with natural language processing on the private messages folks were sending on Facebook, messages folks thought were private but weren’t, because they had consented without understanding what the consent meant, you could tell all kinds of things about someone. I won’t go into it here, but you can imagine what you could tell if you could do natural language processing on lots and lots of private messages on Facebook a dozen years ago. So we experimented with that and found we were able to find signal, but we stopped the experimentation in 2011, because we thought that folks did not realize what they were consenting to. Folks did not realize that when they logged in with their Facebook account and were asked, “do you give access to your account to this website?”, they had basically generated a massive data dump and given us all this access. This question of consent, and whether folks realize what they’re making public, is an important one and critical to the broader regulation of privacy. But I’m not sure that we’re having informed public discussions on this topic that allow us to make democratic decisions about how we want to regulate it.
Scott (21:52):
I mean that comment applies to lots of issues.
Mike:
Lots and lots.
Scott:
Unfortunately.
Mike:
Lots of issues. Right? Totally.
Scott (21:58):
We’re running low on time, but I wanted to ask before we finished a little bit about your run for governor. So first of all, I’ll need a full accounting of the hundred bucks I gave you.
Mike (22:07):
Thank you very much for the hundred dollars. <Laughs> I am extremely grateful for it. I got that ad on that social media platform.
Scott (22:17):
That’s exactly right, it worked.
Mike:
<Laughs> Right. So thank you for it.
Scott:
What did you learn from that experience, both about your business and about what you want to do in the future, the things you think are important and what direction you might want to go?
Mike (22:30):
So I believe, and I would suspect that some of your audience agrees with me, that there are moments in history where massive transformations are going on. In the 1930s, for example, the New Deal was a reaction to the industrial revolution and its macroeconomic consequences. In the 1960s, we faced social transformation through the civil rights movement and a transformation of the role of the public sector, with the creation of Medicare and the expansion of Social Security. And today we’re going through a similar transition. A lot of the ways we think about our public sector were built in the postwar era, when a lot of the country worked in manufacturing and industry, stayed at a single employer for a long time, and had a very traditional career trajectory. And that isn’t how the economy is put together anymore. Specifically for the Technology Policy Institute…
Mike (23:25):
Technology and its implications for the labor market have dramatic implications for the social compact, the deal we all make to live with each other peacefully. Those of us who are fortunate enough to be riding the crest of this wave have benefited a lot, and most people haven’t. That’s what Catalyte and Arena are about. I had come to the conclusion that the public sector hadn’t grappled with this question, and that the state level might actually be the right place to grapple with these questions and come up with solutions, particularly in a place like Maryland, where you have Bethesda proximate to a place like Baltimore City, where I live. Bethesda’s proximity to Baltimore City represents the challenge. I decided to run on the premise that I was going to advocate for a change in how we think about the role of the public sector and of state and local government. And I continue to think that. But I realized in the campaign that, even though I think a lot of people share my point of view, it’s a different enough point of view that it was going to take a longer period of time than a campaign would allow.
Scott (24:38):
Well, let’s spend a little time on this view of the state’s role. Are you talking about the state taking a more important role in labor-type issues, a more important role on the national stage in all kinds of things?
Mike:
Yeah.
Scott:
I grew up in North Carolina, and, I mean, you didn’t use the phrase states’ rights, and it would be hard to imagine you saying that, but that kind of framing evokes bad things, right?
Mike:
Yep.
Scott:
And so, what do you view as the new role of the state?
Mike (25:00):
I’ll use numbers. Late last year, Maryland had one of the lower job creation rates of any state in the country, and also in the region. At the same time, there were 100,000 open job postings in four industries, all of which provided a pathway to a job that paid at least $65,000 without a college degree: healthcare, tech, skilled trades, and manufacturing. On one side you had a relatively high unemployment rate and relatively low levels of job creation. On the other you had a bunch of open jobs. So the question is, what do you do about it? And the way we’ve historically thought about it has been the classic postwar way, which is: we’re going to make it less expensive to go to school. We’re going to encourage people to go to college.
Mike (25:50):
We’re going to make it less expensive. As we slowly realize that maybe college isn’t the right answer all the time, we’re going to make community college less expensive. But what about my cost of living? The reality is that if I am working a minimum wage job and I have a kid, it’s almost impossible for me to go to school, even if it’s free, because I need a way to eat and breathe and take care of my kid. So when I ran for governor, I suggested that we pay folks $15 an hour to learn full time, pay 100% of childcare, pay 100% of transportation costs, give folks a $2,000 forgivable loan, and pay all educational costs. That sounds like an extravagantly expensive thing to do. So what would it cost to do that in Maryland, a state of over six million people, for 150,000 people a year?
Mike (26:39):
What would the math be? If you do that, then folks make progressively higher wages, and that generates tax revenue. In addition, Maryland spends between $12 and $15 billion a year on healthcare for folks who are severely poor. And healthcare experts, and I’m sure there are some in your audience because there’s overlap between technology and healthcare expertise, understand that social and economic context is a bigger determinant of health than providing healthcare in a clinical setting. So what happens if everyone has economic support, air conditioning, stable housing, and a job? The answer is that you save a lot of money in healthcare as each individual makes more money. Between increased tax revenues and a healthier population, the numbers work out that in a state like Maryland you could make a billion-dollar upfront investment in something like this, where you pay a net new 150,000 people every year to learn, and within five or six years you would generate two to three billion dollars a year in net free cash flow at the state level, through a combination of increased state taxes and folks making more money and being healthier.
Scott (27:55):
So your point is that something like this is possible at the state level, but not necessarily at the federal level?
Mike (28:00):
So, I think that it is possible at the federal level, but probably harder to accomplish at the federal level because of the nature of congressional politics.
Scott (28:08):
So the states as incubators of innovation.
Mike (28:11):
Exactly. Now, there are some systemic reasons why these are challenging things to do. Why is it that we don’t have more leaders at the local, state, or federal level who are willing to push the envelope in a transformative way to change how this works? The answer is, the political systems themselves create risk aversion. When your basic incentive is to get reelected or get the next job, and those timeframes are relatively short, then you need someone who’s willing to lose reelection to accomplish it. And when the nature of the political system attracts folks whose ambition for decades has been to be in elected office, or who are currently in elected office and want a long career of it, it creates a set of incentives that makes it more difficult to do the heavy political lifting to make this work.
Scott (28:57):
Well, that sounds a little bit like a prediction of what you might do in the future.
Mike (29:03):
<Laugh> I think there’s an opportunity to adjust some of the underpinnings of that system. For example, my own point of view is that organized labor, which is important in all electoral politics but particularly in Democratic primaries, has an opportunity, particularly for certain pieces of it, to transform itself into a role that allows it to grow rather than shrink. Specifically, when people stay in a job for only about two years, there needs to be a vehicle for lifelong learning and career trajectories that doesn’t have a natural home elsewhere in the economy, and organized labor can play that role. And if organized labor starts playing that role, then there are ways of changing how elected officials make political calculations.
Scott (29:49):
I think we should probably wrap it up with that. But Mike, thank you so much for talking with us today. It’s always great chatting. Always learn something new.
Mike (29:56):
Thank you so much Scott. This was so fabulous, and we have known each other a very long time, through multiple generations of the world, so this was awesome. Thank you so much.