I've been expecting an election post-mortem from Nate Silver comparing the 538 predictions to (1) the polls themselves, and (2) the other polling compilations (Pollster and Real Clear Politics). I'm surprised that it hasn't yet appeared, because it's an interesting question that deserves some sophisticated analysis, namely: when you try hard to learn as much as possible from aggregated polling data fit against demographics, how close does it get you?
In the absence of that sophisticated analysis, we can do something simple-minded: compare the predicted spread in the national popular vote to the actual spread (which is by now settled, regardless of what comes out of Missouri). Kos has some work on this here and here. Among the aggregators, 538 missed the popular vote margin by 0.4%, while Real Clear Politics and Pollster both missed by over 2%. Score one for Nate.
Now for individual polls: Kos lists 14 pollsters (CNN, Rasmussen, Gallup, CBS, etc.), whose deviations from the final popular vote ranged from 0.5 to 5.7 points, with a median error around 2.5%. Conclusions to draw from this: evidently, quality aggregation adds significant value to the projection capability of pollsters. Given 14 different pollsters making predictions, the odds are that at least one of them would be nearly spot on by chance (not especially useful knowledge, because you can't predict WHICH one will be spot on; see the sketch below). So the fact that Nate beat the whole field is remarkable.
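Here's a minimal Monte Carlo sketch of that "one will be close by chance" point. The error model is my assumption, not anything from Kos or Nate: each pollster's miss on the final margin is an independent normal draw, with the spread chosen so the median absolute error is about 2.5 points, matching the field above.

```python
# A minimal Monte Carlo sketch of the "one pollster will be close by
# chance" argument. Assumption (mine, not from the post): each pollster's
# error on the final margin is an independent normal draw, scaled so the
# median absolute error is ~2.5 points, matching the field Kos reports.
import random
import statistics

N_POLLSTERS = 14
MEDIAN_ABS_ERROR = 2.5              # points, from the Kos comparison
SIGMA = MEDIAN_ABS_ERROR / 0.6745   # median(|N(0, s)|) = 0.6745 * s
TRIALS = 100_000

best_errors = []
for _ in range(TRIALS):
    errors = [abs(random.gauss(0, SIGMA)) for _ in range(N_POLLSTERS)]
    best_errors.append(min(errors))  # the luckiest pollster this trial

print("typical best-of-14 error: %.2f points" % statistics.median(best_errors))
print("P(best pollster within 0.5 pt): %.2f"
      % (sum(e <= 0.5 for e in best_errors) / TRIALS))
```

Under those assumptions the best of the 14 typically lands within about a quarter point of the truth, and within half a point roughly 80% of the time. So a lone accurate pollster tells you little; beating the entire field with an aggregate is the harder trick.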
Another conclusion is that low-quality aggregation adds little value: it's not enough to simply average results (a toy illustration of why follows below). Pollster and Real Clear Politics came in about like typical single pollsters.
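As a rough illustration of why plain averaging stalls (again a toy model of my own, not anything from the post): if every poll shares a common systematic bias on top of its own noise, averaging N polls shrinks the idiosyncratic noise by a factor of sqrt(N) but leaves the shared bias untouched, so the average's error floors near the systematic component.

```python
# A toy model (my assumption, not from the post) of why a plain average
# can't beat the polls' shared systematic error: averaging N polls divides
# the idiosyncratic noise by sqrt(N) but leaves the common bias intact.
import random
import statistics

SHARED_BIAS_SD = 2.0   # industry-wide bias, in points (illustrative)
NOISE_SD = 3.0         # per-pollster noise, in points (illustrative)
N_POLLSTERS = 14
TRIALS = 50_000

single_errors, avg_errors = [], []
for _ in range(TRIALS):
    bias = random.gauss(0, SHARED_BIAS_SD)             # common to all polls
    polls = [bias + random.gauss(0, NOISE_SD) for _ in range(N_POLLSTERS)]
    single_errors.append(abs(polls[0]))                # one pollster's error
    avg_errors.append(abs(statistics.mean(polls)))     # plain average's error

print("median single-pollster error: %.2f points" % statistics.median(single_errors))
print("median plain-average error:   %.2f points" % statistics.median(avg_errors))
print("shared-bias floor (median):   %.2f points"
      % (0.6745 * SHARED_BIAS_SD))  # median |N(0, s)| = 0.6745 * s
```

In this toy model the plain average does improve on a single pollster, but it bottoms out near the shared-bias floor. Getting under that floor takes actual modeling (house effects, demographics, trend lines) rather than more averaging, which is plausibly where 538's edge comes from.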
So, Nate evidently owns quality aggregation. It will be interesting (to me, at least) to see how this stands up to more sophisticated analysis.
Tuesday, November 11, 2008
1 comment:
You should send this to Nate Silver. Either he's too focused on the unresolved Senate races or too humble to extol his own virtues.