A major source of angst for many Democrats over the last couple of weeks has been the emergence of several general-election polls showing Donald Trump catching up with Hillary Clinton after earlier polls showed him in very bad shape. I addressed the challenge of dealing with such polls at New York today:
Most political junkies realize there are some polling outlets that have what is known as a “house effect” — a more or less systematic tendency to show results bending one way or another to an extent that makes their surveys consistent outliers. Few Democrats, for example, will panic over an adverse Rasmussen poll. But some “house effects” are the product not of partisan or candidate bias, but of methodologies that over time tend to produce outlier results. I really don’t think Gallup in 2012 was shilling for Mitt Romney, even though its polls regularly and significantly inflated his standing; the venerable organization made transparent and earnest efforts after the election to analyze and correct its errors.
It’s also clear that some phenomena — high cell-phone usage, declining response rates, and the rising expense of live interviewing — are making polling more perilous and less scientific than most of us realize. All of this explains why the experts tell consumers of public-opinion research to rely on polling averages, not individual polls, to understand what’s going on politically, and to examine trends rather than absolute numbers. When it comes to polls about distant events, like the November general election, significantly more caution is in order. Some would argue that a general-election matchup poll taken before the party conventions is pretty much useless.
So the current hype about Trump more or less catching Clinton in general-election support should be taken with a shaker of salt and perhaps active disdain.
In a New York Times op-ed today, political scientists Norman Ornstein and Alan Abramowitz discuss the problems with such general-election polls and the methodologies behind them, and then add this important observation:
When polling aficionados see results that seem surprising or unusual, the first instinct is to look under the hood at things like demographic and partisan distributions. When cable news hosts and talking heads see these kinds of results, they exult, report and analyze ad nauseam. Caveats or cautions are rarely included.
That’s particularly true if these “cable news hosts and talking heads” find validation for their point of view in outlier polls. The fact that Republicans and Bernie Sanders–supporting Democrats have a common interest in showing Clinton doing poorly against Trump adds to the noise, to the point where it’s the only thing many people hear.
Maybe these polls will turn out to be accurate, but we just don’t know that now. As Ornstein and Abramowitz conclude:
Smart analysts are working to sort out distorting effects of questions and poll design. In the meantime, voters and analysts alike should beware of polls that show implausible, eye-catching results. Look for polling averages and use gold-standard surveys, like Pew. Everyone needs to be better at reading polls — to first look deeper into the quality and nature of a poll before assessing the results.
Alternatively, just be careful about jumping to conclusions. There’s plenty of time before the general election to look at the data with some perspective.