Since at least the beginning of the 1990s, we've been hearing a refrain periodically, and with ever-increasing volume: There are too many polls out there! They're confusing! And they are far too influential!
Today's Politico offers a good summary of this year's complaints about polls. What makes them different is that they are not just coming from those who deplore the impact of polling data on candidates and elected officials: they're coming from polling experts, too, and from pollsters themselves, some of whom seem to think the data pool is getting flooded with pseudo-information generated by firms with poor methodologies:
Republican pollster Bill McInturff of Public Opinion Strategies put it bluntly: “Lots of them are simply lousy polls; they don’t accurately reflect younger voters, African-Americans and Latinos. This all contributes to underrepresenting Democrat support. Having said that, there’s a real debate about what will be the appropriate composition of the electorate.”
Unfortunately, all polls tend to get treated equally in the general political buzz:
The proliferation of public polling has also become a concern for many political professionals this cycle, as surveys often get the same free media attention whether they are done by established outlets or fly-by-night first-timers. And the headlines those surveys make influence not just campaign strategy but how voters make decisions.
It's not just a matter of good and bad pollsters; there are legitimate questions about how to solve a variety of basic problems in obtaining a good sample and determining likelihood to vote:
[Mark] Blumenthal also said that the very real phenomenon of properly identifying voters when up to 30 percent of households in the country have only cell phones is adding significantly to the troubles with predicting who will vote.
This is a problem that's been especially apparent in SurveyUSA's state polls this year, which frequently show young voters going heavily for Republican candidates, apparently because of exceptionally small samples of such voters.
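To see why small subsamples swing so wildly, it helps to look at the math on a crosstab. The sketch below (Python, using hypothetical sample sizes rather than figures from any actual SurveyUSA poll, and ignoring design effects from weighting) compares the 95 percent margin of error on a full statewide sample with the margin on a small youth subgroup.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n,
    at observed proportion p (p = 0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical numbers, for illustration only.
full_sample = 600    # a typical statewide likely-voter sample
young_voters = 70    # 18-29 respondents left after screening

print(f"Full sample MoE:    +/- {margin_of_error(full_sample):.1%}")   # about 4.0%
print(f"Youth subgroup MoE: +/- {margin_of_error(young_voters):.1%}")  # about 11.7%
```

With a margin that wide, a subgroup's numbers can lurch from one candidate to the other between waves on sampling noise alone, which is the kind of artifact that then gets reported as a trend.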
The big question, though, is whether there's simply too much information out there, or perhaps not enough. The biggest change in polling this year is the frequency of state polling, particularly by Rasmussen, which has deployed a controversial likely-voter model all year long and has typically shown a stronger Republican vote in general-election trials than other pollsters. SUSA's recent record of showing relatively strong Republican performance has intensified the impression, true or false, of an impending Republican tsunami; the implosion of Research 2000 eliminated what was once a counterweight in the polling averages that most analysts and many campaigns rely on. PPP's heavy entry into state-level polling this year has been helpful, in that the firm's abundant disclosure of non-top-line data and its relatively "balanced" results lend it credibility.
In any event, a bit of thought should make it plain that the real problem here is not an excess of polling data, but its limitations, and, moreover, the media and partisan spinners who fail to conduct even minimal analysis of top-line results. The Research 2000 fiasco was in retrospect a good thing, insofar as it showed that knowledgeable analysts are capable of policing the field to some extent. Unless polls converge between now and November 2, we are going to see some clear winners and losers in the polling profession, greater pressure for disclosure of methodologies and data, and perhaps some sounder conclusions about how to deal with common problems in sampling and surveying.