When Nate Silver released FiveThirtyEight’s revised pollster ratings for the first time in two years, I was all over it at New York:
The letter grades Silver and his people assign to particular polling outlets are mostly based on accuracy over time, with adjustments for methodological quality and transparency. The database for the ratings “includes all polls in the final 21 days of gubernatorial and congressional elections since 1998 and presidential primaries and general elections since 2000. It also includes polling of special elections for these offices.” That’s a lot of data.
There’s a short list of outlets that earned an A-plus rating. Some are well-known nationally: Monmouth, ABC/Washington Post, and Selzer and Co. A few others limit their work to particular states or regions, such as Colorado-based Ciruli Associates and the Northwest-focused Elway Research. There’s one other, sadly, that has closed its doors (California-based Field Research). The longer list of “A” pollsters includes some very familiar names: Survey USA, Marist, Siena, Fox News, PPIC, and Marquette Law School.
At the other end of the spectrum, one very prominent online-polling outfit, Survey Monkey, gets a D-minus rating. An enormous number of pollsters do no better than C-plus, including a lot of smaller, state-based enterprises, but also nationally renowned and prolific sources of data like Zogby Interactive, Trafalgar Group, Rasmussen Reports, and Opinion Savvy.
Having a high rating is no guarantee of infallibility, of course. One of Silver’s A-plus pollsters, Ann Selzer, who has a long-established reputation for accuracy, missed the order of finish in her final poll of the 2016 Iowa Republican caucuses (she had Trump, not Cruz, winning), a miss that was shocking at the time. And being inaccurate doesn’t mean an outlet’s data is useless: Survey Monkey may have a bad rating from FiveThirtyEight, but its vast sample sizes can provide valuable information on non-horse-race matters, and trend lines in even dubious polls can have some predictive significance. Polls can be misleading if you aren’t careful, but the answer to poor data is more and better data, not throwing it all out and relying on intuition, anecdotes, or academic models.
And as Nate Silver makes clear in the article accompanying his new ratings, the recent rap on polls — much of it attributable to a misunderstanding of what happened in 2016, with Donald Trump piling on with ignorant or malicious takes on the polls ever since — is largely a crock:
“Over the past two years — meaning in the 2016 general election and then in the various gubernatorial elections and special elections that have taken place in 2017 and 2018 — the accuracy of polls has been pretty much average by historical standards….
“The media narrative that polling accuracy has taken a nosedive is mostly bullshit, in other words. Polls were never as good as the media assumed they were before 2016 — and they aren’t nearly as bad as the media seems to assume they are now. In reality, not that much has changed.”
Some basic limitations of polling remain as important as ever: primary (and special-election) polling is very difficult to do, and state polling is almost always less accurate than national polling. It’s also worth remembering that getting the winner right is no reliable indicator of quality: some national polls that picked Trump to win in 2016 were not especially close on the actual popular-vote totals, while some that picked Clinton to win were spot on (because she did in fact win the popular vote by more than 2 percent).
In the end, it’s smart to pay more attention to the aggregate polling averages (most notably those maintained by FiveThirtyEight and RealClearPolitics) than to any one, two, or three individual polls, and to resist the temptation to hype polls showing one’s own “team” doing well while filtering out adverse findings, particularly if “your” poll is conducted by a firm with a poor reputation and “their” poll is gold-standard. When all else fails, you can just wait for actual elections and go with the results. But even then, polls help us understand the “why” as well as the “what.” And that matters, too.