Nate Cohn’s “A 2016 Review: Why Key State Polls Were Wrong About Trump” considers three theories to explain why Trump exceeded polling projections in the Rust Belt, North Carolina and Florida. As Cohn explains:
At least three key types of error have emerged as likely contributors to the pro-Clinton bias in pre-election surveys. Undecided voters broke for Mr. Trump in the final days of the race, or in the voting booth. Turnout among Mr. Trump’s supporters was somewhat higher than expected. And state polls, in particular, understated Mr. Trump’s support in the decisive Rust Belt region, in part because those surveys did not adjust for the educational composition of the electorate — a key to the 2016 race.
“Some of these errors will be easier to fix than others,” writes Cohn. “But all of them are good news for pollsters and others who depend on political surveys.”
Further, Cohn adds, “At the annual conference of the American Association of Public Opinion Research (AAPOR), as well as at a number of other meetings held earlier this year, evidence pointed toward an explanation in one of these categories:”
A postelection survey by Pew Research, and another by Global Strategy Group, a Democratic firm, re-contacted people who had taken their polls before the election. They found that undecided and minor-party voters broke for Mr. Trump by a considerable margin — far more than usual. Similarly, the exit polls found that late-deciding voters supported Mr. Trump by a considerable margin in several critical states. These three results imply that late movement boosted Mr. Trump by a modest margin, perhaps around two points.
Cohn cautions, however, that there is a “tendency for respondents to over-report voting for the winner.” Cohn also notes “the so-called ‘shy Trump’ effect,” the notion that “Trump supporters took telephone surveys but were embarrassed to divulge their support for an unpopular candidate. If true, the “undecided” voters were really Trump voters all along; they just didn’t want to admit it to pollsters until after their candidate won.”
Cohn also discusses the argument that “likely-voter screens may have tilted polls in Clinton’s direction,” though the evidence is inconclusive. He also notes that an Upshot/Siena survey found that “Mrs. Clinton’s supporters were likelier than Mr. Trump’s supporters to stay home after indicating their intention to vote.”
In addition, many of the polls failed to adequately weight their samples to reflect the educational composition of the electorate. It appears that less educated voters were somewhat underrepresented in key state polls.
About 45 percent of respondents in a typical national poll of adults will have a bachelor’s degree or higher, even though the census says that only 28 percent of adults have a degree. Similarly, a bit more than 50 percent of respondents who say they’re likely to vote have a degree, compared with 40 percent of voters in newly released 2016 census voting data…Most national polls were weighted by education, even as most state polls were not.
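The fix Cohn describes is post-stratification: reweighting respondents so the sample’s education mix matches a census benchmark. As a rough illustration (a hypothetical sketch using the roughly 50-percent-with-a-degree sample share and 40-percent benchmark cited above; the candidate-support rates are invented for demonstration, not drawn from any poll):

```python
# Hypothetical post-stratification by education.
# Sample shares reflect the figures cited above: ~50% of likely-voter
# respondents hold a degree, vs. ~40% of actual 2016 voters per census data.
sample_share = {"degree": 0.50, "no_degree": 0.50}      # unweighted poll sample
population_share = {"degree": 0.40, "no_degree": 0.60}  # census benchmark

# Each respondent's weight = population share / sample share for their group.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

def weighted_support(support_by_group):
    """Estimate candidate support after reweighting to the benchmark."""
    return sum(population_share[g] * support_by_group[g] for g in support_by_group)

# Invented support rates, chosen only to show the direction of the effect:
# a candidate stronger among non-degree voters gains once they are upweighted.
trump_support = {"degree": 0.40, "no_degree": 0.55}
unweighted = sum(sample_share[g] * trump_support[g] for g in trump_support)
weighted = weighted_support(trump_support)
```

With these illustrative numbers, weighting moves the estimate from 47.5 percent to 49 percent, a roughly 1.5-point shift toward the candidate favored by less educated voters, which is the order of magnitude of error Cohn discusses in the unweighted state polls.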
The good news for pollsters, adds Cohn, is that the relatively accurate “performance of national surveys has been one of the better reasons to assume that last year’s misfire wasn’t a broad indictment of public opinion polls.” However, Cohn cites a need for better demographic data in state polls, including more accurate samples of education levels by race, which would help state surveys weight correctly for whites without a college education. Cohn notes,
But many lower-quality state pollsters did not even ask about education at all, suggesting that it wasn’t on their radar as a potential issue in 2016. That’s surprising. The potential for bias should have been fairly obvious, given the media coverage of Mr. Trump’s strength among less educated voters and the well-established difference in response rates along educational lines.
Weighting errors, ‘shy’ Trump voters and late-deciding voters, taken together, may explain most of the failure of state polls to predict Trump’s success in the Rust Belt and other battleground states. Then there is the thorny problem of low-level civic engagement, “the best-known response bias in polling,” which is a more difficult fix to implement. “It’s certainly possible that Mr. Trump’s white working-class supporters were less likely to respond to telephone surveys,” writes Cohn. “But the data, at least in the public realm, is not very clear.”
Looking toward the future, Cohn is not optimistic that state polls are going to get better, in part because newspaper budgets for polling are shrinking. Also, “The failure of many state pollsters to even ask respondents about education does not inspire much confidence in their ability to stave off less predictable sources of bias.”
It’s not clear how much can be done to correct for ‘shy’ voters and late deciders. But whether media-linked pollsters get their act together in state and local polls or not, it is incumbent on Democratic pollsters to at least address the weighting issue, if Democratic strategy is going to reflect demographic reality. Democrats, in particular, can’t afford to cut corners on their internal polls — if they want a reality-based strategy.