Midterm Polls Were Accurate Enough

One of the great post-election rituals in recent years has been an assessment of the polls we all obsessed over before the first ballot was cast. I wrote about that at New York.

In retrospect, the national polls didn’t do badly at all in 2016, as Nate Silver explained:

“Trump outperformed his national polls by only 1 to 2 percentage points in losing the popular vote to Clinton, making them slightly closer to the mark than they were in 2012. Meanwhile, he beat his polls by only 2 to 3 percentage points in the average swing state. Certainly, there were individual pollsters that had some explaining to do, especially in Michigan, Wisconsin and Pennsylvania, where Trump beat his polls by a larger amount. But the result was not some sort of massive outlier; on the contrary, the polls were pretty much as accurate as they’d been, on average, since 1968.”

Still, many Republicans have continued to believe that pollsters are generally part of a media establishment conspiring to undermine their confidence via the “fake news” of cooked data, as their leader has suggested. The actual record of this year’s midterm polls, however, tells a different story:

“Nonpartisan House polls have historically missed the mark by an average of 5.9 points. This year it was just 4.9 points. Again, that means the average district poll was a full point closer to the result than usual….

“Statewide polling also had a strong year, although it should be noted that Senate and governors’ polling did pick fewer winners than usual. The average poll in the Senate was off by only 4.2 points. The average Senate poll historically has been off by 5.2 points, which means this year’s polls were a point better than average. Likewise, the average governor’s poll had an error rate of 4.4 points. That’s 0.7 point more accurate than the average governor’s poll since 1998.”

The reason for the “pick fewer winners” problem wasn’t so much polling error as the exceptional number of very close races. The gold-standard Cook Political Report rated nine Senate races and 12 gubernatorial races as toss-ups. There were a few races, which happened to be very high-profile contests, where the polls seemed to be off by more than a hair, such as the Florida governor’s race, where the RealClearPolitics polling average on election eve showed Andrew Gillum up by nearly four points, and the Florida Senate race, where it showed a slight lead for Bill Nelson; both Democrats lost by an eyelash. And the polls missed Mike Braun’s solid Senate win in Indiana. But the RCP averages correctly predicted the outcome of many cliff-hangers, like the Georgia governor’s race and Senate contests in Missouri, Montana, and Texas.

Where there were mistakes, they didn’t follow any partisan pattern, as Nate Cohn observed in his review of midterm polling:

“On average, the polls were biased toward Democrats (meaning the Democrats did worse in the elections than polls indicated they would) by 0.4 points, making this year’s polls the least biased since 2006 and nothing like the polls in 2016, which were three points more Democratic than the results.”

And if you get into particular types of races, as Harry Enten did, the partisan “errors” were mixed:

“The average governor and Senate polls were about a point more favorable to the Democrats than the result. The average generic congressional ballot and House district polls were less than a point more favorable to Republicans than the actual result.”

Since sky-high turnout (the highest as a percentage of eligible voters in a midterm in over a century) may have been the biggest surprise of the elections, and the one pollsters would have had the hardest time predicting, the overall accuracy and balance were especially impressive. Certain types of voters, however, still seem to marginally elude pollsters, notes Cohn:

“The higher-than-expected turnout might have inadvertently contributed to a 2016-like pattern, since lower-turnout voters in the big urban states tend to be nonwhite and Democratic, while lower-turnout voters in rural, less educated states tend to be white working-class voters.

“In the Times Upshot/Siena polls, undecided voters tended to follow a similar pattern: In the Sun Belt, the undecided voters tended to be nonwhite Democrats; in the North, they were more likely to be white voters without a degree.”

So unsurprisingly, polls again tended to underestimate the Republican vote in states with big white working-class populations and to underestimate the Democratic vote in states with large nonwhite populations. And very late trends among undecided voters, which polls always miss to some extent, may have mattered here and there as well.

From a consumer’s point of view (and no one consumes polls quite like a daily political writer like yours truly), the big new development in 2018 was the large battery of House polls conducted by the New York Times in conjunction with Siena College. The combine not only supplied rare data on competitive House races (where a lot of the polling is private) but also hit the mark quite often, as Enten notes:

“[The] increase in accuracy [in House races] was driven in large part by the Siena College/New York Times polls, whose surveys made up the bulk of district level polling and had an average absolute error of just about 3 points. That’s nearly 3 points better than average, which is off the charts good.”

If, like me, you believe the answer to questionable data is more, not less, data, the proliferation of polls is a good thing, even if quality continues to vary. And while Republicans may continue to follow Trump’s cynical habit of attacking any information that doesn’t confirm their own biases, you’d hope that at least privately they’d concede that more competition produces a better and more reliable result.

 
