Pollster.com has an interesting piece on the confusing disparity among all of the polls taken for this election. In particular, “likely voter model design depends significantly on judgments that pollsters make about how to model the likelihood that any voter sampled will actually turn out and vote in the election.”

The author, Clark A. Miller, an associate professor at Arizona State University, notes that “the trials and tribulations of climate modelers — and also their approaches to addressing skepticism about their judgments — offer three useful insights for pollsters working with likely voter models”:

  1. Transparency — climate models are far more complex than most polls, but climate modelers have made significant efforts to make their models transparent, in a way that many pollsters haven’t. (In much the same way, computer scientists have called for the code used in voting machines to be open source.) By making their models transparent, i.e., by telling everyone the judgments they use to design their model, pollsters would enhance the capacity of other pollsters and knowledgeable consumers of polls to analyze how the models used shape the final reported polling outcome. They would also do well to publish the internal cross-tabs for their data.
  2. Sensitivity — climate modelers have also put a lot of effort into publishing the results of sensitivity analyses that test their models to see how they are affected by embedded judgments (or assumptions). This is precisely what Gallup has done in the past week or so, in a limited fashion, with its “traditional” and “expanded” LV models and its RV reporting. By conducting and publishing sensitivity analyses, Gallup has enhanced our collective capacity to understand how its model responds to different assumptions about who can be expected to vote. (A toy sketch of such an analysis follows this list.)
  3. Comparison — climate modelers have taken a third step: deliberate comparison of their models using identical input data. The purpose of such comparisons is to identify where scientific judgments were responsible for variations among models, and where those variations resulted from divergent input data. Since the purpose of polling is to figure out what the data are saying, it is essential to know how different models interpret that data, which can only be done if we know how different models respond to the same raw samples.
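To make Miller’s second and third points concrete, here is a minimal sketch of what a likely-voter sensitivity analysis and same-sample model comparison might look like. Everything in it is invented for illustration: the simulated respondents, the three turnout screens, and the built-in assumption that one candidate’s supporters are more enthusiastic. It is not Gallup’s or anyone else’s actual model.

```python
# Toy sensitivity analysis: apply different hypothetical likely-voter
# screens to the SAME raw sample and compare the resulting toplines.
# All data and screen definitions here are invented for illustration.
import random

random.seed(42)

def simulate_respondent():
    """One simulated interview. Candidate A's supporters are made
    more enthusiastic on purpose, so that turnout screens matter."""
    prefers = random.choice(["A", "B"])
    intent = random.randint(5, 10) if prefers == "A" else random.randint(0, 10)
    return {
        "prefers": prefers,
        "intent": intent,                        # self-reported, 0-10 scale
        "voted_last_time": random.random() < 0.6,
    }

sample = [simulate_respondent() for _ in range(1000)]

# Three hypothetical screens, standing in for the kinds of judgments
# pollsters embed in their models (cf. Gallup's "traditional" and
# "expanded" LV models alongside its RV numbers).
screens = {
    "registered voters (no screen)": lambda r: True,
    "expanded LV (intent >= 7)": lambda r: r["intent"] >= 7,
    "traditional LV (intent >= 7 + past vote)":
        lambda r: r["intent"] >= 7 and r["voted_last_time"],
}

for name, passes in screens.items():
    voters = [r for r in sample if passes(r)]
    share_a = sum(r["prefers"] == "A" for r in voters) / len(voters)
    print(f"{name:42s} n={len(voters):4d}  A {share_a:.1%}  B {1 - share_a:.1%}")
```

Because every screen runs against the identical raw sample, any spread in the reported toplines is attributable to the modeling judgments themselves, which is exactly the kind of comparison Miller is calling for.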

Miller concludes:

The reason climate modelers have carried out this activity is to help make sure that the use of climate model outputs in policy choices is as informed as possible. This can’t prevent politicians, the media, or anyone else from inappropriately interpreting the outputs of their models, but it can enable a more informed debate about what the models are actually saying and, therefore, how to make sense of the underlying data. As polling grows in importance to elections, and therefore to how we implement democracy, pollsters should want their polls to be as informative as possible to journalists, politicians, and the public. Adopting model transparency, sensitivity analyses, and systematic model comparisons could go a long way toward creating such informed conversations.

We will know in a few days which pollsters’ models were correct.

Whether the climate models that project very serious climate impacts are right, or the ones that suggest we are headed for outright catastrophe, the actual outcome will take far longer to determine. And, of course, we still have time to affect that outcome — starting with this election!


This post was created for ClimateProgress.org, a project of the Center for American Progress Action Fund.