Two Think Minimum Podcast Transcript
Episode 013: “How Russian Twitter Trolls Influence Society and Elections with Patrick Warren”
Recorded on: October 29, 2018
Sarah: Welcome back to TPI’s Podcast Two Think Minimum. It’s Monday, October 29th, 2018, and I’m Sarah Oh, a research fellow at the Technology Policy Institute. Today, we’re excited to talk with Patrick Warren, who has a Ph.D. in economics from MIT and is now an Associate Professor of Economics at Clemson University.
Patrick studies how organizations work, organizations including companies, bureaucracies, parties, even armies. Patrick’s collection of peer-reviewed articles in academic journals shows the appeal of his work to economists, but something else has put him in the public eye these days.
Specifically, he has become a leading authority on the impact of Internet trolls on elections, which is obviously of huge importance as we head into the midterms. His work is based on a data set of about 3 million troll tweets that he and his coauthor Darren Linvill downloaded and have made available to everyone via the FiveThirtyEight website. We’ll talk about this and more on today’s podcast.
Scott: I’m Scott Wallsten, TPI senior fellow and president, joining today as well. Tell us about your work on Internet trolls. We’re especially interested in the empirical work based on the 3 million tweets you downloaded; your analysis is more empirical than almost any other analysis of what’s been going on. Tell us what you did.
Patrick: A year ago now, the House Intelligence Committee released a list of about 2,500 Twitter accounts that Twitter had identified as being operated by the Internet Research Agency. The Internet Research Agency, and I’m going to call them the IRA, is a private entity in Saint Petersburg, Russia, that does work almost exclusively for the Russian government. They had been working mostly domestically, but there was good evidence that they were involved in intervening in our domestic political conversation.

Scott: When you say they were operating mostly domestically, do you mean domestically here or domestically in Russia?
Patrick: Domestically in Russia. The IRA began with targeted social media in Russia, trying to push the administration’s line to a domestic audience, and only when that was successful did they expand westward. They began in Russia, and they talk a lot about Ukraine, but our study looks at their impact in the United States. Twitter identified about 2,500 accounts and provided that list to Congress, and Congress then released it. By that point, all of those accounts had been shut down by Twitter and their tweets were no longer available on the platform, but some enterprising folks had been able to recover a small fraction of the tweets from websites that archive the internet, like the Wayback Machine at archive.org. Here at Clemson we have access to a platform with a more complete archive of Twitter, and using that platform we were able to dig up basically all the tweets from those accounts, stretching way back.
Patrick: Some of the tweets go all the way back to 2009, and so we started digging. Now, that platform is not really made for this; it’s made more to monitor your own social media presence and run social media operations for firms. But it does have this archive, and by tweaking the system here and there, we were able to slowly uncover all of the tweets by those 2,500 accounts. In June of this year, they expanded that list and added another 1,000 names, so we started digging again, and in the end we ended up with about 3 million tweets from about 3,500 accounts.
Scott: What did you do with them?
Patrick: First, we read them. My poor coauthor read them all. I’ve looked at the data, but he did most of the reading, and obviously we didn’t read every character of every tweet, but enough to categorize the accounts.
Patrick: These 3,500 accounts are not all the same. It’s not like some great homogeneous troll army; they are very specialized. Step one was to try to understand what sorts of accounts were out there. I helped there, but he did the brunt of reading through these tweets and categorizing the accounts into one of basically four or five types.
Patrick: There are basically Left Wing trolls and Right Wing trolls, and I’ll talk more about those. Then there are Local News accounts: accounts that purport to be in Baltimore, aggregating local news in Baltimore, or in Washington, or in Seattle. There are 38 of these cities across the United States; that third type we called the Local News Feed accounts. And then there was a fourth kind of account that played the Hashtag Game.
Patrick: Are you familiar with the Hashtag Game? It’s a social media game where it’s announced in advance that we’ll all get together at 6:00 PM on Wednesday and play. Here’s how it works: someone is in charge, they announce a hashtag, like #rejecteddebatetopics, and then everyone who’s playing sends out their own funny take on that hashtag. If the hashtag is #rejecteddebatetopics, your funny take might be “is a hot dog a sandwich?” Then, if people think your take is funny, they’ll retweet it or like it. This seems like an odd thing for trolls to do; we’ll talk about why in a little bit.
Patrick: That was the fourth sort of troll: they played the Hashtag Game. Those four categories were the biggest four. Then there were about a million tweets that we couldn’t categorize because they weren’t in English, and I don’t speak Russian. Of the 3 million, about a million were in other languages, the largest of which was Russian. For the analysis we released in this paper, we put those to the side, because we didn’t feel qualified to analyze them.
Patrick: Once we broke them up into these categories, the question was: under what circumstances is each of these account types used the most, and how do they respond both to each other and to events in the world? We took a 30,000-foot view of the output of these accounts in order to trace when they’re most active, whether that varies by account type, and how they react to events in the world.
Patrick: That’s mostly what the paper is about: what these account types are, when they’re active, how they’re different or the same, and trying to understand in very broad strokes the strategy behind this propaganda operation.
Scott: So these four types, they’re all from IRA, right?
Patrick: Oh yes. The same operator might run accounts of all four types.
Scott: So having the four types is part of their strategy?
Patrick: Absolutely, absolutely.
Scott: What do you think the overall strategy is?
Patrick: I think the overall strategy shifted over time. At first, the strategy was clearly just to create unrest and division. It’s pretty obvious that they thought Hillary Clinton was going to win the election, and they just wanted to lay the groundwork to make that win, and her administration, unstable from the beginning.
Patrick: But then, over time, it became clear that this was going to be a closer fight, and maybe then they started to play both sides: play the right, play the left, and play both sides against the middle. What do I mean by that?
Patrick: Let me talk about the Left Wing and Right Wing kinds of accounts. The Left Wing accounts were basically extremist, left wing Democrat accounts. There were not a lot of centrist policy proposals or support for centrist Democrats; a lot of these accounts were quite divisive and extreme. What do I mean? The most common sort of account on the Left Wing were fake Black Lives Matter accounts, and they were pushing not even a mainstream Black Lives Matter view, but the left extreme of Black Lives Matter. So, “let’s just burn the whole thing down.”
Patrick: Similarly on the right: there were no accounts tweeting mostly about, you know, reasonable tax policy. The Right Wing accounts were all “we need to shut down the borders immediately, kick out everybody who doesn’t look like me; if we don’t kick them out first, they’re going to kick us out.” Very extreme Right Wing accounts. Both the Left Wing accounts and the Right Wing accounts were very dismissive of the other side, but also of the middle of their own side. Does that make sense?
Patrick: They were actually targeting and trying to rile up people in the middle. Someone in the center or center left would see these extreme Right Wing tweets and think that view was becoming more prevalent on the right, and vice versa: people on the right would see the extreme Left Wing tweets and think the left had become more extreme, or both.
Patrick: They also find people who might genuinely hold these extreme views and make them more active. That’s inference on my part, but if you look at their activities, the way they act and the times they act, it’s clear they’re trying to bring in real Americans. You see this on the most active day of all. If you look through the whole three years of data, the most active day is October 6, 2016, and on that day it’s the Left Wing trolls that are the most active. That was a very busy weekend; I don’t know if you remember it. It was the weekend that WikiLeaks released John Podesta’s emails, but it was also the weekend that the Access Hollywood tape came out, and the weekend of the first official intelligence assessment of Russian interference in the election.
Patrick: So it was just a crowded weekend. But the first thing that happened that weekend, on October 6, was the most active day of the IRA trolls in our whole data period. Before any of the rest of this happened, the Twitter trolls went crazy, specifically the left trolls, and we think this was in anticipation of the WikiLeaks release that they knew was coming. If you look at what they’re doing, they’re not talking about the release; obviously they can’t, because it hasn’t happened yet, and that would have given up the game. What they’re actually doing is a lot of retweeting and tagging of people who you could pretty much anticipate were going to react very badly to the WikiLeaks release: people who were strong supporters of Bernie, who felt like Hillary had gotten an unfair advantage in the primary process and were very open about being suspicious of her as the Democratic candidate. You want to rile those folks up the day before and leading into the day the Podesta emails were released, in order to get the reaction you want when they come out.
Sarah: Let’s talk about the distinction between retweeting and actually tweeting content. A retweet magnifies real humans, but does that have a different kind of impact than just spreading misinformation?
Patrick: Absolutely. I actually don’t think there’s a whole lot of what you might call straight-up “fake news” in here at all. I don’t really like the term “fake news,” but in terms of whole-cloth false claims, there are not a lot of those in the IRA data. If you look very early, there’s an account type I didn’t talk about, because they basically quit doing it, that we refer to as the Fearmonger accounts. What they did was exactly that: they invented fake disasters and tried to get people agitated about them.
Patrick: But it didn’t work. They created about 30 accounts and used them to try to gin up a fake salmonella outbreak in the run-up to Thanksgiving of 2015. They invented an outbreak at Koch Farms, which is a real farm in North Carolina, and the claim was that there was salmonella in the turkeys, that the turkeys were sold to Walmart, and “oh, I just got sick and my whole family got sick, so watch out for these turkeys.”
Patrick: Totally made up. First of all, Koch Farms does exist, but they don’t sell turkeys to Walmart. If you look back at the CDC records, there were no salmonella outbreaks of this type at this time. And they were blaming it all on the Koch brothers, which of course is totally unrelated to Koch Farms; it’s a totally different Koch. But this didn’t work.
Patrick: Nobody picked up the #kochfarms story. The accounts got shut down very quickly, because Twitter realized there was a false narrative being created by these accounts. It just doesn’t work. They found over time that it was a lot more effective to inject themselves into, or burnish, already existing claims that were not purely false, just misleading. We see both of those moves over time. They quit trying to make up their own stories; instead they start pushing stories that other people have pushed. And they quit trying to do it themselves; rather, they try to bring people who would otherwise be at the periphery of the conversation into the middle. So what was successful for them was not trying to spread explicit misinformation, but pushing claims rooted in some kernel or grain of truth, or that sounded like opinion.
Scott: How did the platforms go about identifying things like this? In this case you were able to tell that it came from the IRA, but is it always about looking for the source rather than the content?
Patrick: It’s difficult. We know a little bit about how Twitter identified these accounts. I mentioned an example where they shut down some accounts in a way that likely had to do with the content, but there’s no way that’s how they identified the accounts later on, because by then the trolls weren’t tweeting things that no one else was tweeting; they were pushing things that were already being pushed. I suspect Twitter identified them through network-related things. We know the IRA set up virtual private networks in the United States and used those to bounce their signal, so it wasn’t clear they were coming from Saint Petersburg. It’s likely they had a finite number of those they could use.
Patrick: I suspect that over time, that’s how Twitter identified the accounts, but only after the fact, and it’s a technological solution. Presumably there is a technological workaround for the propagandists as well. I’m not an expert in the technology of the internet, so I don’t know the degree to which that’s a sustainable strategy for stopping these things. I just don’t know.

Scott: So far you’re measuring the inputs into sowing discord. Is there evidence that it had the desired effect?
Patrick: We have no such evidence. It’s an open question, and it’s the million-dollar question. I have some ideas about how you would go about trying to suss that out, but it’s hard. Usually when we do these scientific studies, or whatever it is we do in economics, if you want to call it science...
Patrick: We’d like to have a treatment group and a control group, right? People that were affected by or touched by the trolls, and then those that weren’t. But on something like Twitter, it’s so hard to define a control group. Which accounts, which places, which times didn’t get touched by this? That’s the biggest challenge in trying to understand it. We’d like to run diff-in-diff regressions.
Patrick: But I don’t know what the first diff is. I can do before and after, but who’s the control and who’s the treatment? It’s hard. You could ask smaller questions that are not quite the question you want to answer, such as: do the people that were mentioned by the trolls start to talk differently after they were mentioned than they did before, compared to, I don’t know, randomly selected Twitter users, or randomly selected politically engaged Twitter users? But there’s no guarantee that those randomly selected, engaged Twitter users were not themselves directly or indirectly affected by the troll operation. It’s hard. I would love to answer that question; obviously that’s the reason you get into this. But I’m still not convinced there’s a valid identification strategy.
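[Editor’s note: the difference-in-differences logic Patrick describes can be sketched in a few lines of Python. Every group, user, and tweet count below is invented purely for illustration; real work would face exactly the control-group problem he raises.]

```python
# Diff-in-diff sketch: compare the change in activity for "treated" users
# (mentioned by troll accounts) against the change for an assumed control
# group, before vs. after contact. All numbers are made up.

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """DiD estimate: (treated after - before) minus (control after - before)."""
    def mean(xs):
        return sum(xs) / len(xs)
    treated_change = mean(treated_after) - mean(treated_before)
    control_change = mean(control_after) - mean(control_before)
    return treated_change - control_change

# Hypothetical daily tweet counts per user.
treated_before = [2, 3, 1, 4]   # users later mentioned by troll accounts
treated_after  = [5, 6, 4, 7]   # same users, after being mentioned
control_before = [2, 2, 3, 3]   # assumed control users, never mentioned
control_after  = [3, 3, 4, 4]   # same controls, same calendar window

effect = diff_in_diff(treated_before, treated_after, control_before, control_after)
print(effect)  # 2.0: treated activity rose 3/day, control rose only 1/day
```

The arithmetic is trivial; as Patrick notes, the hard part is that nothing guarantees the "control" users were actually untouched by the operation.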
Sarah: Here at TPI, we tried to find households that actually engaged with the tweets in your dataset, and we couldn’t find any. But at the same time, out of 100 million tweets, we only had 10,000 households, so it’s hard to tell whether even that is a measure of influence. Who’s to say one retweet at the beginning of a Twitter trend wouldn’t cascade into something else?
Patrick: Remember what I said earlier: my coauthor and I always joke that it’s not really about the words. I think what they were trying to do on October 6 had nothing to do with the words they said.
Patrick: I think it had everything to do with getting the people who would say the words they wanted said into the middle of the conversation, so just looking at the tweets the trolls tweeted might miss the point. If they can get the Black Lives Matter accounts that you see on Twitter, the ones you think of when you think of Black Lives Matter, to be one standard deviation more extreme than they would have been without the interaction, then that’s an effect. That’s the sort of effect we think they’re going for. They want to make the left seem more extreme than it really is and the right seem more extreme than it really is, and that’s not through their own words. It’s through the way they move people around in the network structure, and by getting people to be more engaged and active than they would otherwise be.
Patrick: Twitter doesn’t track very much of this, at least not in a form that even people inside Twitter can easily get at.

Scott: Would just retweets and likes capture it? The retweets and likes of the trolls’ original tweets?

Patrick: No, I don’t think so.
Patrick: There are two versions of the story, but let me tell the simplest one. Why am I active on Twitter? I’m active because I like to interact with people and see what they’re doing. But why do I tweet? Usually because I have some message I want to share. And when am I most active? When I’m getting some engagement. If I tweet something and nobody responds to it, that doesn’t really spur me to do more. But if I tweet and get a couple of responses and retweets, maybe I’ll keep talking about that thing, and we think that’s part of what they’re doing. They’re trying to spur the right sort of people, from their perspective, to be active. So if I’m trying to measure that effect, I don’t need to see how many people liked the trolls’ tweets; rather, I need to see how many tweets come from the people the trolls retweet and like. We’re calling it second order agenda setting: it’s not about the words, it’s about the people. At this point this is at the level of a hypothesis, because the data requirements for answering this question are much more significant than the data requirements for the descriptive paper.
Patrick: The paper I sent you is purely descriptive: what are these guys doing? The question of what impact it had is obviously much harder and subtler. I’ll lay out a strategy, and now somebody’s going to steal it, but that’s fine; I just want it out there. The idea would be to look at, I don’t know, the thousand accounts most retweeted and mentioned by these trolls (I can’t actually do likes in our data). Then for each of them you find a control account that is similar in some way, maybe matched on followers or whatever; thinking about that is always the hard part. Then you ask what the activity rate is of those that got burnished by the trolls compared to those that didn’t. You might also ask a simpler question: were they selective about who they burnished? Were they picking the right sorts of accounts? We need both of those things to be true for my story to work. It’s got to be that if I interact with you a lot, you become more active, and it’s also got to be that the ones they interact with are the ones doing what I claimed they’re trying to do, which is make the left and the right seem more extreme than they really are. That’s how you’d do it, but there are a lot of data requirements, for sure. We could do it with our data. We have some grant proposals in to get more help; right now it’s just Darren and me on a shoestring. I don’t think people understand how resource-intensive it is to do big data work of really any kind.
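[Editor’s note: the matching strategy Patrick outlines could be sketched roughly as below. The account names, follower counts, and activity rates are all invented; a real design would match on much richer features than follower count alone.]

```python
# Sketch of the "burnishing" test: take accounts heavily retweeted/mentioned
# by trolls, match each to a control account with a similar follower count,
# then compare subsequent activity rates (tweets per day).

def match_controls(treated, pool):
    """Greedily match each treated account to the pool account with the
    closest follower count; each control is used at most once."""
    available = dict(pool)
    matches = {}
    for name, followers in treated.items():
        best = min(available, key=lambda c: abs(available[c] - followers))
        matches[name] = best
        del available[best]
    return matches

# Followers per account (hypothetical).
treated = {"@left_voice_a": 12000, "@right_voice_b": 8000}
pool = {"@ctrl_1": 11500, "@ctrl_2": 7900, "@ctrl_3": 50000}

# Tweets per day after the troll-interaction window (hypothetical).
activity = {"@left_voice_a": 9.0, "@right_voice_b": 7.0,
            "@ctrl_1": 4.0, "@ctrl_2": 5.0, "@ctrl_3": 6.0}

matches = match_controls(treated, pool)
for t, c in matches.items():
    print(t, "vs", c, "activity gap:", activity[t] - activity[c])
```

A positive gap for the burnished accounts, plus evidence that the trolls selected the "right" accounts to burnish, would be the two facts the hypothesis needs.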
Patrick: Well, you have to remember that you can’t actually use the Twitter API to do this. You have to use this Social Studio archive that we have, and it’s not made for this; you can’t just pull 400,000 tweets given some search algorithm. So a lot of grunt work goes into gathering tweets from this database. It’s hard.
Scott: This may not be something you know, but what’s your impression of how successful the trolls think they’ve been? At least in terms of whether it’s continuing, whether they’ve changed their approach, whether they’re trying to keep going?
Patrick: I can tell you a couple of things. One thing to note is that we have a big selection problem here: our 3,500 accounts are the ones that Twitter caught and shut down.
Patrick: If you graph out the tweets by these guys, you’ll see they all fall off a lot from the end of 2017 through mid-2018. I think that’s probably entirely selection; I doubt they’re less active. It’s just that we don’t have all of the accounts that were operating then, and my evidence for that is the following two pieces. One, there were more tweets by these trolls in the year after the election than in the year before, so this is not a case where the election happened and then they were done. They were more active after the election. Why? Because I think they thought they were pretty successful in the election. The second piece of evidence is that they became more effective over time. I told you about those Fearmonger accounts that didn’t work at all. But track over time how quickly these accounts pick up followers, both per day and per action, where an action is a tweet or retweet or whatever.
Patrick: They get a lot better over time. These guys are getting better at picking up followers, and you need followers for any of this stuff, right? That suggests to me they’re becoming more effective. They also reveal that they’re willing to invest more effort over time, because we see more tweets over time. If you’re doing more and becoming more productive, I don’t know why you’d stop, but we can’t see it directly. I’ll say this: we’re tracking a number of accounts that still exist and are still tweeting, which Darren and I strongly believe are run by the IRA. But we don’t have the resources you would need to prove that beyond a reasonable doubt, because that takes the behind-the-scenes internet traffic data, and we obviously don’t have access to that.
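[Editor’s note: the effectiveness measure Patrick mentions, followers gained per day and per action, is easy to compute once you have account histories. The cohorts and numbers below are invented for illustration.]

```python
# Followers gained per day and per action (tweet/retweet), the two rates
# Patrick uses to argue the accounts became more effective over time.

def follower_rates(followers_gained, days_active, actions):
    """Return follower-acquisition rates per day and per action."""
    return {
        "per_day": followers_gained / days_active,
        "per_action": followers_gained / actions,
    }

# Hypothetical cohorts: early accounts vs. later, more polished ones.
early_cohort = follower_rates(followers_gained=300, days_active=100, actions=1500)
late_cohort  = follower_rates(followers_gained=3000, days_active=100, actions=5000)

print(early_cohort)  # {'per_day': 3.0, 'per_action': 0.2}
print(late_cohort)   # {'per_day': 30.0, 'per_action': 0.6}
```

Rising rates on both measures, as in this toy example, would be the pattern consistent with the operation learning over time.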
Scott: Recognizing that maybe you can’t identify this with statistical significance, do you observe increased activity leading up to next week’s midterm elections?
Patrick: Compared to what?
Scott: I guess, trending over the last year, let’s say.
Patrick: There’s a regime change. When Twitter released the list in June of this year, they shut down a whole set of accounts, so we can track those accounts, but they all disappear. Since then we’ve slowly collected some accounts that we were able to find, but it’s not really a comparable set. The set we found is on the order of 20 accounts, not on the order of 2,000, and I don’t think that’s because there are only 20 accounts; I think it’s because that’s all we could find. So I don’t know. They look a lot like the old IRA troll accounts that got shut down, and they’re quite effective, like the tail end of what we’re calling the fourth-generation troll accounts.
Patrick: They’re actually not talking about the election that much. They’re doing the same sorts of stuff the other accounts were doing, which is the blend: trying to make the world look more extreme than it really is. But I can’t say for sure whether, as a unit, the IRA is more or less active than it was a month ago. I can only see those accounts, and those accounts are more active, but it’s a small sample.

Scott: Of the four account types, the Local News accounts would seem pretty well suited to congressional elections.

Patrick: Yeah. They remain quite mysterious; I don’t know about the most mysterious, but it’s unclear what those accounts were up to. If you look at those accounts in detail, you’ll see they are not tweeting even borderline fakeness. They are tweeting legitimate local news the whole time.
Patrick: Take one like Baltimore Online. What does it do? It pulls news from a handful of legitimate local news sources in Baltimore: newspapers, local TV, that sort of thing. It pulls the headlines, sometimes with links to the original, sometimes without, and just tweets them. That’s it. That’s all it does. It’s not saying things that are false, and it’s not trying to push an extremist agenda in any obvious way. If you look at whether it tweets more about Trump or Clinton: not really. It seems pretty innocuous. That leaves a mystery. There were 600,000 tweets by these accounts; what were they doing? We have some ideas. One thing we’ve done is compare the sorts of things they tweet about to the local sources they’re pulling from, and it is the case that they do not tweet a balanced selection of that local news.
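[Editor’s note: the source-versus-feed comparison described here amounts to counting keyword rates in two sets of headlines and taking a ratio. The headlines below are invented examples, not data from the study.]

```python
# Compare how often a keyword appears in source headlines vs. a troll feed,
# and compute the over-representation ratio.

def keyword_rate(headlines, keyword):
    """Fraction of headlines containing the keyword (case-insensitive)."""
    hits = sum(1 for h in headlines if keyword.lower() in h.lower())
    return hits / len(headlines)

# Invented headline samples.
source = [
    "City council debates budget",
    "Shooting reported downtown",
    "Local school wins award",
    "New bridge opens",
]
troll_feed = [
    "Shooting reported downtown",
    "Second shooting this week",
    "City council debates budget",
    "New bridge opens",
]

src_rate = keyword_rate(source, "shooting")        # 0.25
troll_rate = keyword_rate(troll_feed, "shooting")  # 0.5
print(troll_rate / src_rate)  # 2.0: twice as common in the troll feed
```

A ratio of 2.0 here mirrors the roughly two-to-one over-representation of “shooting” headlines Patrick reports for the Baltimore Online account.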
Patrick: There are some topics they seem to really like, and it’s not necessarily what you’d think. It’s not Trump versus Clinton; it’s not that they always retweet the Trump stuff and never the left stuff, or vice versa. Instead, what we see is that they are significantly more likely to tweet stories that are violent. If a headline has the word “shooting” in it, they are about twice as likely to include it as the source material would indicate. For instance, if you looked at all of the source material for these local accounts, about one and a half percent of the headlines have the word “shooting” in them, but in the Baltimore Online troll account it’s more like 3 percent. Is that big? I don’t know. It’s twice as big, from one point of view.
Patrick: From the other point of view, it’s only another point and a half; maybe it’s not that big of a deal. And you can’t get too big, right, because then it might be obvious. It’s the same with “murder,” “violence,” those sorts of words. Now, this is preliminary; it’s just the couple of months leading up to the election, and we would like to do more to uncover what these accounts are up to. We have another grant proposal in to look at that.

Scott: Based on what you’ve seen, it seems like it’s intended to make people think society is less stable.

Patrick: Exactly right. They do not like stable institutions. We have another study under review right now, looking just at the couple of months before the election. The left trolls attack the right and the right attack the left, for sure, but they both attack institutions, so that nobody trusts the media.
Patrick: The IRA doesn’t trust the media no matter if it’s the left trolls, or right trolls. They don’t really trust courts. Institutions are not popular on either side of the IRA, sort of troll continuum and it’s a big part of what they do, more than I think people realize.
Scott: That actually ties into most of your other research on various types of institutions and how they function. Are you worried about the potential effects of these trolls on our institutions?
Patrick: Am I worried? Back when Russia was the Soviet Union, they did some of this too; they would fund anti-institution protests in the United States. Is it particularly dangerous now? Trust in institutions is down, but it’s hard to know which is the cart and which is the horse. Am I worried? I’m not lying awake at night; I sleep okay. I think it’s an interesting question to look at. I think it would be a surprise if they could have a big impact on their own if there weren’t already attacks on institutions happening in the United States. They take advantage of situations, and I hope that we set our own affairs in order.
Sarah: Something else from your empirical work: do you have a sense of how many other sleeper cells there are? How many other legitimate-looking Twitter accounts are out there, just retweeting news, that might have, I don’t know, long-term plans to become statistically more “violent” over time?
Patrick: About the IRA accounts: a lot of folks couldn’t figure out what they were up to and said, well, maybe they were just sleepers, to be put into action later if they hadn’t gotten caught and shut down. I guess that’s possible, but that’s a long time to stay asleep; they were operating for years. If we couldn’t find anything they were doing, you’d have to fall back on that answer, but I think they can be both: they could be biasing news today and also be ready in case. Are there other things like that? I don’t know. There are a lot of local news aggregators. We demand this extremely high level of privacy on Twitter, or at least we’re kind of okay with it, which, whatever, it’s the market and people can decide what they want. But in Washington, DC (you guys are in DC, right?) there are probably 10 to 20 major news aggregators where it’s not clear at all who runs them or what they’re doing, and they have thousands of followers. Not millions, but a lot. Are some of those operated by the propaganda arms of foreign governments? I don’t know. We’ve found Iranian ones and Russian ones so far.
Patrick: It’s kind of interesting how the different platforms have drawn that line and how it’s changed over time. Facebook used to be like this when it got started: there was hardly any anonymity at all. You had to sign up with your university email and put your real name right there on your Facebook page. Then they added groups, and those could be somewhat anonymous, and then you didn’t need a university email, you could have some other email, and slowly they’ve drifted down to as much anonymity as Twitter has. I wonder if the market drives it that way; it must. You could imagine a world where the market demands verification. There seems to be demand both for anonymity and for identification, and it’s hard for the platforms to square that.
Patrick: I have two young children, and the internet I grew up on was way more anonymous than this one. Nobody used real names when I was young on the internet. My kids are fine; they don’t want that sort of anonymity, I don’t think, and I’m happier with them on a system without it. But anonymity has advantages: if you’re in a repressive regime, you probably want it. We may be rethinking that balance over time, and I don’t mean we as a government, I mean we as consumers.
Patrick: We’re like way off my research now.
Scott: Since we’re off your research, let me ask a question I’m a little hesitant to ask, because I’m afraid of creating a false equivalency, but let me ask it anyway. The IRA started as an internal Russian operation that tried to push the administration’s line in Russia. So at what point does a group like that become trolls, versus just pushing a message? Every federal agency has a Twitter account, and some of them are all about saying how great everything the agency wants to do is; others pay PR firms to handle their social media, and that’s propaganda too.

Patrick: The line that Twitter claims to draw, anyway, is whether you’re lying about who you are. If it’s on the Twitter account of Homeland Security, fine; the account says so right there, and I know what I’m getting. If I say I’m a young mother of three trying to get by in Baltimore, but really I’m in Saint Petersburg, that’s another thing.
Scott: I know you said you didn’t look at the tweets in Russian, but were they doing that right from the beginning, pretending to be someone else?

Patrick: They certainly weren’t labeling themselves as working for the administration. Again, I didn’t read those tweets, but from my reading on what the IRA was up to, they would just go on blogs or social media platforms in Russia and post, probably not under their real names, things like “I think Putin is doing a great job,” or “I really don’t trust this guy” about the major alternative to Putin. They weren’t pretending to be someone they’re not, in the sense that they were pretending to be Russian people and they were Russian people, but they were doing it on the clock, and I don’t think they were clear about that.
Scott: I think we’re out of time. Thanks for a really interesting discussion.
Patrick: I’m happy to talk about trolls all day.
Sarah: Thanks so much for taking the time to talk with us. It was really interesting. This will be on the research agenda for a long time to come.