Paul Barber offers a rundown of the problems with an overreliance on polls, while Heather Libby goes further and suggests that we ignore national polls altogether. But I'll follow up on the argument I've made before: rather than treating concerns about poll data as a basis for throwing polling out the window altogether, we should treat them as reasons for caution in interpreting otherwise useful information.
Barber focuses largely on the methodological issues involved in trying to get a representative sample from an electorate in which people are less and less inclined to respond to requests to participate in the first place. And there are certainly reasons to question each of the workarounds on their own.
That said, if we face the choice of either (a) lending at least some credence to the view that each methodology might have some merit while using competing polls (and ultimately electoral results) as a check, (b) buying completely into one style of poll and thus excluding all other data, or (c) trusting no polling information at all and thus relying solely on parties and pundits to tell us where an election stands, I'd have a hard time seeing how we're well served by any option other than (a).
And fortunately, the poll information we have is compiled in ways which make it relatively easy to analyze national-level data. So while we should absolutely question whether a single poll tells the full story (particularly in its subsamples), we can check with public aggregators both for a big-picture look at the national race and for a test of the plausibility of new polling information.
Of course, those sites focus largely on the national level. So what about Libby's view that there's a meaningful distinction between national and riding-level poll data, and that we should pay attention only to the latter?
The problem there lies in the limited number of riding-level polls actually conducted. Parties, pollsters and media outlets may decide to conduct polls in ridings of particular interest - but we should have learned by now that national and regional trends make a huge difference in determining which ridings actually affect electoral outcomes in the first place. And if only a small number of polls are conducted in a riding, a single skewed sample or methodological issue can grossly warp the results.
Again, those are cautions as to the use of riding-level data alone. But if we can check a single-riding poll against the broader national or regional picture, then we have a far better chance of finding the right balance between the two.
And that should be our ultimate goal. While some partisans who should know better have been particularly motivated to cherry-pick polls to tell only the story they want told, the fact is that all polling information is potentially useful if we recognize its limitations. And rather than looking for excuses to throw out some or all of the data we have based on either partisan preference or methodological squabbles, we should instead be incorporating it into a full analysis of what's happening around us.