People Lie When Answering Polls. Here’s How to Fix It

By Scott Wallsten and Jeffrey Prince

Another election, another polling failure. Despite many adjustments made after the 2016 presidential election, in 2020 President Trump and Republican Senate candidates outperformed their predicted results, leading to another round of election polling post-mortems. Commonly cited culprits included many people’s reluctance to answer random cellphone calls — which makes assembling a representative sample difficult — and Trump voters’ low levels of social trust, which may lead them to disproportionately ignore pollsters.

But the much-discussed “shy Trump voter” theory remains a live one: Given how controversial Trump can be, some supporters may have hidden their true opinions when asked. Indeed, absent a truth serum and a bad decision by an Institutional Review Board allowing respondents to be injected with it, shouldn’t we be skeptical of polls that assume honest answers to sensitive queries?

This is not an original question. In fact, it is so unoriginal that social scientists recognized it decades ago, and a large body of research explores ways to overcome the problems inherent in expecting candor from complete strangers on controversial issues. Yet political pollsters have not availed themselves of the techniques developed through this research. Given their performance in 2016 and 2020, it’s time they did.

One newer, but now common, approach to measuring preferences — and getting around subjects’ reticence — is the “discrete choice experiment” (DCE). DCEs are like a game of “would you rather?” They don’t ask respondents explicitly how much they like something or whether they prefer one good over another. Instead, they take respondents through a series of questions asking which, among a bundle of goods — or a single product with different attributes — they would prefer.

Estimates derived from DCEs are useful when measuring things people value but cannot typically buy (such as a clean environment), and help bridge the divide between what people say they will do in simple surveys (protect their privacy, say) and what they actually do in the real world (visit websites that collect personal information, despite warnings).

We have explored the privacy paradox in our research (Prince and Wallsten, 2020). We don’t ask people directly how much they value privacy, knowing that the answer may not reflect their behavior. Instead, we ask them to make privacy tradeoffs.

To measure, for example, how much people value keeping their location private from their cellphone company, we ask them to choose among wireless plans offering a range of features, including location privacy. The respondent chooses between wireless plans A and B. Plan A allows the phone company to read your contacts and record your browsing behavior but does not send you advertisements or record your location. For this privilege, the company would pay you $0.50 per month. Plan B allows the company to record your web browsing and your location and send you advertisements, but does not read your contacts. Under this plan, the wireless company would pay you $3.50 per month.
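To make the structure of such a question concrete, here is a minimal sketch in Python of how the two plans above might be encoded as attribute bundles. The attribute names are hypothetical labels of our own, not those of any actual survey instrument.

```python
# Hypothetical encoding of the two wireless plans described above.
# Each plan is a bundle of privacy attributes plus a monthly payment.

PLAN_A = {
    "share_contacts": True,    # company may read your contacts
    "share_browsing": True,    # company may record your web browsing
    "share_location": False,   # your location stays private
    "receive_ads": False,      # no advertisements
    "monthly_payment": 0.50,   # company pays you $0.50 per month
}

PLAN_B = {
    "share_contacts": False,
    "share_browsing": True,
    "share_location": True,
    "receive_ads": True,
    "monthly_payment": 3.50,   # company pays you $3.50 per month
}

# One observation records the choice set and which bundle was picked.
observation = {"choice_set": ("A", "B"), "chosen": "A"}
```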

Every answer to questions along these lines reveals information about how the respondent values different types of carrier privacy (and ads). With enough respondents, we can ask about hundreds of combinations, allowing us to get precise estimates for each value.
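To see how such estimates could be recovered in principle, the sketch below simulates paired choices under a simple conditional logit model and recovers the attribute values by maximum likelihood. The coefficients, attributes, and randomly drawn choice sets are all invented for illustration; a real DCE would use a carefully constructed design rather than random pairings.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Invented "true" coefficients: disutility of sharing location, browsing,
# and seeing ads, plus the utility of each dollar of monthly payment.
TRUE_BETA = np.array([-2.0, -1.5, -0.5, 0.8])

def simulate_choices(n_tasks=5000):
    """Draw random plan pairs and simulate logit choices between them."""
    # Three binary privacy attributes and a payment in [0, 5] $/month.
    a = np.column_stack([rng.integers(0, 2, (n_tasks, 3)),
                         rng.uniform(0, 5, n_tasks)])
    b = np.column_stack([rng.integers(0, 2, (n_tasks, 3)),
                         rng.uniform(0, 5, n_tasks)])
    # Conditional logit: P(choose A) depends on the utility difference.
    p_a = 1.0 / (1.0 + np.exp(-(a - b) @ TRUE_BETA))
    return a, b, rng.random(n_tasks) < p_a

def neg_log_likelihood(beta, a, b, chose_a):
    """Negative log-likelihood of the observed paired choices."""
    v = (a - b) @ beta  # utility difference between plan A and plan B
    # log P(observed choice); the sign of v flips when B was chosen.
    return np.logaddexp(0.0, np.where(chose_a, -v, v)).sum()

a, b, chose_a = simulate_choices()
fit = minimize(neg_log_likelihood, x0=np.zeros(4), args=(a, b, chose_a))

# Dollar value of each privacy attribute: the monthly payment that would
# just compensate the average respondent for giving it up.
print(fit.x[:3] / -fit.x[3])  # roughly [2.5, 1.875, 0.625]
```

Dividing each privacy coefficient by the payment coefficient converts utilities into dollars per month, which is how DCE studies typically report willingness to accept.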

Again, without that truth serum we can’t force people to answer truthfully. But with a well-chosen and sufficiently large set of choice combinations, particularly in terms of the dimensions and tradeoffs presented, DCEs can disguise the purpose of the survey, potentially reducing or eliminating the honesty problem. (In the privacy context, people think they are supposed to value privacy, so when asked directly they may give the socially approved answer.) Even if respondents know the survey’s purpose, DCEs can assuage concerns about revealing controversial preferences because no single question directly solicits those preferences.

The challenge in moving this approach to political polling is that while it is relatively simple to describe a set of privacy options and payments by a cellphone carrier, describing a manageably small but meaningful set of characteristics that voters care about in an election is more difficult, though still doable.

As an example, two professors at the University of Copenhagen designed an experiment that used this methodology to elicit voters’ preferences among candidates for prime minister (Hansen et al., 2012). They asked a representative sample of 2,000 people to choose among 12 sets of options. Each set included some combination of a change in the respondent’s salary, a change in the national unemployment rate, and one of the two major parties’ leaders as prime minister. This approach allowed them to isolate relative preferences for one candidate over another, and it ultimately predicted a margin for prime minister quite close to the actual outcome.

The key difference from an ordinary poll is that the survey never asks whether someone supports one candidate over another. Prime minister is one of several components of each alternative, and the choices can be designed such that respondents never choose between alternatives that vary only in terms of prime minister. A survey similar in spirit could isolate preferences about candidates in other elections by asking people to choose among varying bundles of changes and candidates.
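One way to enforce that design constraint is to generate all candidate bundles and drop any pair that differs only in the candidate. The sketch below does this for invented attribute levels loosely following the Copenhagen setup.

```python
from itertools import product, combinations

# Invented attribute levels for illustration only.
SALARY_CHANGE = [-2, 0, 2]                      # % change in own salary
UNEMPLOYMENT = [4, 6, 8]                        # national rate, %
PRIME_MINISTER = ["Party X leader", "Party Y leader"]

alternatives = list(product(SALARY_CHANGE, UNEMPLOYMENT, PRIME_MINISTER))

def masks_candidate(alt1, alt2):
    """True unless the pair reduces to a bare candidate-vs-candidate
    question, i.e. the alternatives differ only in the PM attribute."""
    only_pm_differs = alt1[:2] == alt2[:2] and alt1[2] != alt2[2]
    return not only_pm_differs

# Keep only choice pairs in which the candidate comparison is disguised.
valid_pairs = [(x, y) for x, y in combinations(alternatives, 2)
               if masks_candidate(x, y)]
print(len(alternatives), "alternatives,", len(valid_pairs), "usable pairs")
```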

In the case of Trump, a comparable survey might ask about preferences among bundles including, for example, President, Senator, Representative, and specific policies, such as the local sales tax. Coming up with the right sets of comparisons would require careful research and professional evaluation.

However, to illustrate the idea, consider the following hypothetical comparison for a Michigander: Would you prefer A) James as Senator, Trump as President, and a 6% sales tax; B) Peters as Senator, Trump as President, and a 5.5% sales tax; or C) James as Senator, Biden as President, and a 6.5% sales tax? Notice that a shy Trump voter can choose A or B without directly acknowledging a preference for Trump over Biden; they could claim it’s the sales tax and/or the Senate race driving their choice. Including one or more policy issues like a sales tax, in addition to the candidates, could help disguise intent, push the respondent to think more about trade-offs, and perhaps better reach a shy Trump voter: is the pollster trying to learn about candidate preference or policy preference?
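As a quick check of that masking property, the snippet below encodes the three hypothetical options and confirms that every pair differs on at least two attributes, so no single choice forces a bare Trump-versus-Biden answer.

```python
from itertools import combinations

# The three hypothetical Michigan bundles from the example above.
options = {
    "A": ("James", "Trump", 6.0),
    "B": ("Peters", "Trump", 5.5),
    "C": ("James", "Biden", 6.5),
}
FIELDS = ("senator", "president", "sales_tax")

for (n1, alt1), (n2, alt2) in combinations(options.items(), 2):
    differing = [f for f, v1, v2 in zip(FIELDS, alt1, alt2) if v1 != v2]
    print(f"{n1} vs {n2} differ on: {', '.join(differing)}")
# A vs B differ on: senator, sales_tax
# A vs C differ on: president, sales_tax
# B vs C differ on: senator, president, sales_tax
```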

In practice, the survey would ask respondents to choose among many other sets of combinations. Again, none of the questions would ask directly how respondents feel about either candidate or ask them to compare the candidates head-to-head. A “shy” voter could reveal their preferences without explicitly stating which candidate they like better, and might be less guarded given uncertainty about the survey’s intent.

To our knowledge, the use of DCEs to measure preferences in an actual election has been limited to that one experiment, so more research is needed to learn how best to apply DCEs in a political context. But Trump is hardly an isolated case: elections often touch on highly controversial subjects, so the issue of non-truthful respondents remains ever present. We have the tools to mitigate some of the problems inherent in “shy” participants in political polls, and we ought to put them to use.
