Why The Press Is Getting The Wrong Message Out Of The 'Nate Silver Walloped The Pundits' Story

from the small-sample-sizes dept

Let me start off by saying that I've been a long-time Nate Silver fan, back before he was the "fivethirtyeight" guy, when he was just some random guy whose statistical models were helping my fantasy baseball team kick ass. And let me follow that up by noting that, even more than being a Nate Silver fan, I'm a huge fan of statistics in general. I think that statistics should be a required class in school, and that a widespread lack of statistical and economic literacy (the two go hand in hand) is a major problem today, leading to numerous bad policy decisions. Finally, I've never been a fan (at all) of political punditry that focuses on the "horse race" aspect of politics. So, given all that, it has certainly been fun to follow the secondary storyline from last night -- which is how Nate Silver and his statistical genius "crushed" the pundits in predicting the election -- to the point that every single major press "pundit" was flat out wrong, and it looked like Silver had a perfect crystal ball. And, given how much Silver was attacked for being a "stats guy" (or for being biased, rather than neutral), you can certainly understand why it's tempting to wish he'd respond with something like Whitney McNamara's mock blog post.
In many ways, I agree that yesterday was the "moneyball moment" in politics, in which the prognosticators were shown to be faulty, while the number crunchers were shown to be accurate. Hell, it was a much stronger example than the Moneyball case in baseball, which never had a "victory" quite as clearly aligned with the numbers.

Of course, if you look at what's happened to baseball since "Moneyball" and the success of the first statistical analysis guys, it should be a reminder that statistical prognostication is still about the probabilities -- and not about true predictions. And this is where the "suddenly-in-awe" pundits are still getting confused. They seem to think that Silver or other statistical modelers suddenly have a magic crystal ball with which they can predict the future. But probabilities and predictions are different, and Silver himself would likely admit (and, actually, did admit) that when you're dealing in probabilities, you're still going to be completely wrong some percentage of the time (he can even tell you what percentage of the time!) Even if the probabilities show a 90% likelihood that a certain event will happen, it still means that one time out of 10, you're going to be wrong.
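
To make that concrete, here is a minimal sketch (a few lines of Python, purely illustrative and nothing like Silver's actual model) of what a well-calibrated 90% forecast looks like over many contests: the "prediction" still goes the wrong way roughly one time in ten.

    import random

    random.seed(42)

    TRIALS = 10_000        # hypothetical repeated contests
    FORECAST_PROB = 0.90   # the forecaster says "90% chance candidate A wins"

    misses = 0
    for _ in range(TRIALS):
        # Assume the forecaster is exactly right: A really does win 90% of the time.
        a_wins = random.random() < FORECAST_PROB
        # The headline everyone hears, though, is simply "A will win."
        if not a_wins:
            misses += 1

    print(f"Outcome went against the 90% call in {misses / TRIALS:.1%} of contests")
    # ~10% -- the forecast was never "wrong," the less likely outcome just happened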

Unfortunately, our brains don't deal that well with probabilities. We don't think in probabilities. Because we're dealing with a (mostly) binary situation, we assume that as soon as the probabilities tilt in our favor, it means that a "win" is somehow assured, and mentally, the probabilities turn into a prediction. It's very, very difficult for our brains not to think that way.

So I'm thrilled to see statistical analysis "win" over the moronic pundit class that thinks "storylines" or "momentum" (or, um, the ultimate in believing in anecdotes over data, "my friends see more yard signs" for one candidate) are valid methods of prognostication. But the press, by now insisting that Silver and his ilk are the new magic prognosticators, is missing the point just as much as those who thought the election could be predicted by political pundits.

Statistics is a tool for highlighting the probabilities. I'm sure that Nate Silver clones are going to be appearing a lot more on TV during the next major election cycles -- and I think that's a step forward. But now it seems like some people are expecting Silver and other stats guys to be right every time. And that's going to lead to backlash, just as the "failure" of Moneyball-type analysis to always get it exactly right resulted in some backlash in baseball. There will be data analysis in future election cycles -- likely from Silver himself -- that is wrong. That's the nature of probabilities. It will happen. And, unfortunately, people will then suddenly go back to arguing the opposite: that the stats geeks were "wrong."

But, as they say in the stats world, these are small sample size issues. Believing that statistical analysis is a perfect tool for predictions based on a single election is almost (though not quite) as weak as some of the traditional political punditry methods for predictions.
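
To put a number on the small-sample problem, here is a rough sketch (with an assumed, illustrative 85% forecast): a perfectly calibrated forecaster looks either flawless or useless after a single election, and only across many forecasts does the hit rate settle near the stated probability.

    import random

    random.seed(7)

    def hit_rate(num_forecasts, prob=0.85):
        """Fraction of calls a perfectly calibrated 85% forecaster gets right."""
        hits = sum(random.random() < prob for _ in range(num_forecasts))
        return hits / num_forecasts

    # Judged on one election, the forecaster looks like either a genius (1.0) or a fraud (0.0).
    print([hit_rate(1) for _ in range(5)])

    # Judged across thousands of forecasts, the hit rate converges to ~0.85.
    print(hit_rate(10_000))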

Hopefully, as with baseball, after a few years, the whole idea that these are entirely separate worlds will melt away. In baseball, every team now uses detailed statistical analysis as a tool, and most seem to understand that it suggests probabilities that help them find underexploited opportunities. But no one relies on it as a crystal ball that predicts the absolute future. Hopefully we'll reach that same sort of equilibrium in political analysis as well.


Filed Under: elections, moneyball, nate silver, politics, predictions, press, probability, pundits, statistics


Reader Comments

  1. Skeptical Cynic (profile), 7 Nov 2012 @ 12:50pm

    And hopefully we'll reach a point...

    where political analysis actually involves analysis of quantifiable data.

    Nah, never happen. But we can still hope.

    Good wish, though, Mike.

  2. John Fenderson (profile), 7 Nov 2012 @ 12:53pm

    What I wish people would learn from Mr. Silver

    Excellent analysis.

    Nate Silver knows his stuff, but it's a shame people only listen to his predictions. As you imply, it's even more interesting when he talks about statistical analysis itself. We can all learn a lot from him that is directly applicable to our everyday lives.

    A short example is an interview I heard with him where the interviewer said something about how Nate is claiming Obama will win. His response was on the nose: that he wasn't claiming any such thing. Obama obviously could lose. He's just quantifying the odds. In other words, if Obama lost, it wouldn't mean his predictions are wrong, only that the less likely outcome happened.

  3. Anonymous Coward, 7 Nov 2012 @ 12:58pm

    The Law of Large Numbers is poorly understood by the public, and I think that misunderstanding is hardwired into humanity's thought patterns

    It's also very rarely wrong.

    If people understood it, they'd understand why casinos never lose, and why insurance isn't a ripoff.

    They'd understand that just because your uncle smoked a pack a day until he died at age 95 it doesn't mean tobacco doesn't cause cancer.

    They'd also understand why Nate Silver is right.
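
    As a tiny, made-up illustration of that point (nothing like any real casino game's exact odds): a bet with a small house edge can easily leave an individual player up for the night while reliably losing for players in aggregate.

        import random

        random.seed(1)

        def net_result(bets, win_prob=0.474):
            """Net units won/lost over even-money bets with a small house edge."""
            return sum(1 if random.random() < win_prob else -1 for _ in range(bets))

        print(net_result(10))          # one evening: can easily come out ahead
        print(net_result(1_000_000))   # the casino's view: reliably negative for players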

  4. Whitney McNamara (profile), 7 Nov 2012 @ 1:02pm

    Heh - didn't expect to see that when I clicked through on the link.

    But yes: the fact that Nate Silver is simply trying to present the results of analyzing a large amount of data, and then offer some context around it, seems to be getting lost pretty quickly in the "50 for 50" uproar.

    As was pointed out above, Silver wasn't ever saying that Obama was going to win; he was saying that the available data indicated that Obama had a higher probability of winning. Silver could have been entirely correct in his analysis and still seen a Romney win.

    Also: two minutes after posting that I realized that I'd missed the obvious headline -- I think this version is better, but the ball was already rolling on the other one: http://tumblr.absono.us/post/35203726587

  5. arcan, 7 Nov 2012 @ 1:04pm

    Statistics is killing the pundit industry.

    we must outlaw it to protect our multi-trillion dollar industry with 70 billion employees

  6. Paul Renault (profile), 7 Nov 2012 @ 1:08pm

    Me, I want to read those 6 comments.

    ..but something seems to be wrong with my mouse.

  7. arcan, 7 Nov 2012 @ 1:27pm

    Re:

    i forgot a couple of parts: it will also protect us from terrorists, for the children.

  8. Rich, 7 Nov 2012 @ 1:30pm

    Re:

    Yup, the Law of Large Numbers and anecdotal evidence. I have beaten my head against the wall trying to explain those to my family for years. "But my friend Sally paid for the extended warranty, and lucky thing too..."

  9. Krish (profile), 7 Nov 2012 @ 1:43pm

    Bayesian

    As a Bayesian, I have to point out that not only does our mind have a problem with probability, but it's also not trivial to even define what one means by a statement like "Obama has an 85% chance of winning". For spinning coins, we can define the probability of heads as the frequency of heads in an ever-increasing number of coin spins. But the 2012 presidential election is a non-repeatable event, and all of that breaks down.

  10. yaga (profile), 7 Nov 2012 @ 1:44pm

    Re: Me, I want to read those 6 comments.

    Can you connect to the activation server?

  11. BentFranklin (profile), 7 Nov 2012 @ 1:46pm

    Obligatory XKCD link: http://xkcd.com/1131/

  12. charliebrown (profile), 7 Nov 2012 @ 2:06pm

    There is a very high probability that I enjoyed that article. Unfortunately, there is also an incredibly high probability that none of my friends will "get it". I could be wrong, though. In fact, I hope I am!

  13. Anonymous Coward, 7 Nov 2012 @ 2:23pm

    Silver didn't help matters when he challenged Scarborough to a straight-up, even-odds Romney/Obama bet (which makes no sense in light of Scarborough's insistence that the race was a 50/50 toss-up).

  14. Anonymous Coward, 7 Nov 2012 @ 2:28pm

    Re: Re:

    How are the mice going to do the chat shows?

  15. Anonymous Coward, 7 Nov 2012 @ 2:43pm

    A perfect Nate Silver example of the 90%

    Heidi Heitkamp won the North Dakota Senate race against Rick Berg even though Silver's numbers gave Berg a 92.5% probability of winning.

  16. Anonymous Coward, 7 Nov 2012 @ 2:54pm

    A quick example of probability analysis.

    Imagine you are an excellent stock picker. When you are right, you make $50. But when you are wrong, you lose $500. You're right 90% of the time. Over the course of 10 trades, you're down $50.

    Reverse the situation. You're a horrible stock picker, but manage your losses well. When you're wrong, you lose $50. When you're right, you make $500. You're right only 10% of the time, but over the course of 10 trades, you're up $50.

    You can be right 90% of the time and lose money. You can be wrong 90% of the time and make money. In these cases, probability is less important than managing risk.
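
    For anyone who wants to check that arithmetic, here it is in a few lines of Python, using the same made-up dollar figures and hit rates as above:

        # Ten trades each, using the hypothetical numbers from the example above.
        accurate_but_reckless = 9 * 50 + 1 * (-500)    # right 90% of the time
        inaccurate_but_careful = 1 * 500 + 9 * (-50)   # right 10% of the time

        print(accurate_but_reckless)    # -50: frequently right, still loses money
        print(inaccurate_but_careful)   # +50: frequently wrong, still makes money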

    Nate's numbers are great as long as you accept that he will occasionally be wrong, and that that's part of the model and to be expected.

  17. ltlw0lf (profile), 7 Nov 2012 @ 3:26pm

    Re:

    The Law of Large Numbers is poorly understood by the public, and I think that misunderstanding is hardwired into humanity's thought patterns

    I thought it was amazing that we all think logarithmically naturally when we are babies, but then learn to think linearly later on in life. We know the difference between a few and a lot, but when numbers start getting really large, we forget what that means. There has to be an evolutionary driver for this.

    Of course, I disagree with Mike that statistics should be taught along with economics in school. It *should* be taught at home, and then reinforced at school -- teach it as part of Sesame Street or something like that, along with critical thinking and theory of knowledge. But then again, kids should be allowed to be kids too. Figuring out how to teach while making it fun is the trick, and I don't think we've mastered that yet in our current old-folks-run (grind the new teachers down) education system.

  18. Anonymous Coward, 7 Nov 2012 @ 4:03pm

    Re: Bayesian

    it's also not trivial to even define what one means by a statement like "Obama has an 85% chance of winning".

    Is it easier to cope with this idea? Given what we know before the election, and given a specific model of the system, that model tells us there are 0.6 bits of information to be obtained from the election.

    In contrast, the statement that the election "is a tossup" is an assertion that, using a different model, there is 1 bit of information in the actual election results.
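
    The 0.6-bits figure is just the binary (Shannon) entropy of the forecast; a couple of lines of Python reproduce it, assuming an 85% pre-election probability as in the quoted statement:

        from math import log2

        def entropy_bits(p):
            """Shannon entropy of a two-outcome event with probability p."""
            return -(p * log2(p) + (1 - p) * log2(1 - p))

        print(round(entropy_bits(0.85), 2))   # ~0.61 bits if one side is an 85% favorite
        print(round(entropy_bits(0.50), 2))   # exactly 1 bit for a true toss-up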

  19. Duke (profile), 7 Nov 2012 @ 5:10pm

    Like Predicting the Weather

    Predicting whether or not it will rain is pretty difficult; lots of factors to consider, complex equations, often a huge mess. So how did people predict it in the past? Based on past experience ("it usually rains this time of year"), anecdotal evidence ("someone said it rained nearby yesterday") and a bit of superstition ("the gods say it will rain"). Some of the predictions end up being right, some are wrong. Those who are right enough of the time (or are able to convince people they are) become "wisemen", revered and asked for their opinion increasingly often. But people still end up getting wet (or have to carry an umbrella with them all the time).

    Then comes science, statistics and analysis, with a healthy dose of supercomputers. Now you get as much data as you can, use the best models you can, run it all through computer simulations and you end up with a set of probable outcomes; an 80% (+/- 5%) chance of rain tomorrow, and a 60% (+/- 15%) chance of rain next week. At first this new technology is distrusted, but it wins people over after a few successful predictions (particularly when the "wisemen" get it spectacularly wrong at the same time). Now the "wisemen" are out of a job (or have to move to increasingly gullible groups of people), and bitter about it. But people can judge for themselves whether or not to take their umbrella, and most can stay dry.

    Unfortunately, we seem to still be at the narrative-based "wisemen" stage of politics (both forecasting results, and policy-making/voting); where what matters is the story, the emotional appeal, the personality. It would be nice if we could move on to the evidence-/logic-based stage, but while we may get there with forecasting, I have a feeling that evidence-based voting and policy-making is still a long way away...

  20. Duke (profile), 7 Nov 2012 @ 5:26pm

    Re:

    Surely it does make sense (for Silver), as Silver is getting the better odds (from my limited understanding of gambling)?

    Let's say Silver thinks Obama has a 3/4 chance of winning. Scarborough puts it at 1/2. They both bet $A.

    So if Silver is right, he expects to get back 3/4 x $2A = $1.5A.
    If Scarborough is right, Silver expects to get back 1/2 x $2A = $A.

    Whereas Scarborough expects to get back $0.5A and $A respectively.

    Whichever estimate turns out to be right, Silver expects to at least get his money back, or make a profit. Whereas even if Scarborough is right, he still only expects to break even.

    So, what does this tell us? That Scarborough doesn't seem to understand probability. Which suggests that Silver's prediction is probably the more reliable one.
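
    The numbers above can be checked mechanically; here is the same even-money bet evaluated under each stated probability (a sketch using the 3/4 and 1/2 figures from this comment):

        def silvers_expected_payout(p_obama, stake=1.0):
            """Silver's expected payout from an even-money bet of `stake` on Obama.

            Both sides stake the same amount, so the winner collects 2 * stake.
            """
            return p_obama * 2 * stake

        print(silvers_expected_payout(0.75))  # 1.5: under Silver's 3/4 estimate, he profits
        print(silvers_expected_payout(0.50))  # 1.0: under Scarborough's 50/50 view, he breaks even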

    [Disclaimer: I have no idea who Scarborough is (although I've been there, and apparently there's a fair), what the bet actually was (although I vaguely remember reading something about it late last night), and know very little about gambling.]

  21. Anonymous Coward, 7 Nov 2012 @ 5:33pm

    Re: Re:

    Well, yeah, it would be a good deal for Silver, but there was no reason to believe Scarborough would or should accept the offer, so it's a silly offer to make.

    It's like me saying there's a 100% chance of X, and you saying you think there's actually a 10% chance of not X, and me saying "Oh, yeah? why don't you put your money where your mouth is and bet $1000 that X won't happen." It makes me look stupid, since you never actually put your mouth there (even though it would be great odds for me).

  22. Anonymous Coward, 7 Nov 2012 @ 5:35pm

    Re: Re:

    Also, FYI, Scarborough didn't take the bet to my knowledge (since it was a ridiculous offer, probably), so the wager doesn't really shine any light on Scarborough's probability understanding (other than that he refrained from accepting a ridiculous offer).

  23. Duke (profile), 7 Nov 2012 @ 5:45pm

    Re: A perfect Nate Silver example of the 90%

    I was bored, so went through the list of senators. Out of 33, they got 2 wrong - one possibly due to underestimating a third party, the other due to inaccurate polls.

    In Montana, they gave the Democrat a 34% chance of getting re-elected, putting him 48.4%/49.9% behind. He got 48.46%, but the Libertarian knocked his opponent down to 44.90%. So that may be an oversight (perhaps of the original polls as well), in not taking the Libertarian into account.

    In North Dakota, they gave the Democrat an 8% chance of winning, with 5% in the polls. Yet she won by 1%. There it seems that the polls were mostly out.

    So that's 2/33 wrong, or about 6% error rate. This is one of those interesting situations where the mistake actually helps support the prediction.
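
    That last observation can be made a bit more precise: if each of the 33 calls carried something like a 94% probability on the favorite (an assumed average, since the per-race numbers aren't listed here), then a couple of upsets is about what a well-calibrated model should produce, and a clean 33-for-33 sweep would actually be the surprising result.

        N = 33      # Senate races called
        P = 0.94    # assumed average probability on the favored candidate (illustrative)

        expected_upsets = N * (1 - P)
        prob_no_upsets = P ** N

        print(round(expected_upsets, 1))   # ~2.0 upsets expected across the 33 races
        print(round(prob_no_upsets, 2))    # ~0.13 chance of getting every single race right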

  24. Duke (profile), 7 Nov 2012 @ 5:48pm

    Re: Re: Re:

    Ah, OK. That makes more sense (or rather less sense from his perspective). I could probably work out what odds he should have given (presumably ... his estimate + 1/2 all over two?), but can't be bothered now.

  25. Anonymous Coward, 7 Nov 2012 @ 7:18pm

    Re:

    "insurance isn't a ripoff"

    If they were to follow the rules, that is.

  26. Spaceman Spiff (profile), 7 Nov 2012 @ 9:06pm

    Stats and Lies

    I like statistics. I like them a lot, and I use them extensively in predictive analytics problems, but then I couple them with more rigorous analytic methods, such as Kalman filters, to predict how systems are trending (yes, this system will fail in 6 days if we don't replace the disc drive now). Hence, I am a great proponent of the old saw that "there are lies, damned lies, and then there are statistics"... :-)

    I.e., it is a tool, and all tools fail at some point -- sometimes catastrophically!

  27. Anonymous Coward, 8 Nov 2012 @ 12:19am

    “We ran the election 66,000 times every night...”

    How about someone doing data mining and running election simulations for their ultimately successful campaign?

    Remember Obama's AMA on Reddit? The article suggests that it was driven by their data analysis, which showed that many of the people they were trying to reach were there...

  28. Anonymous Coward, 8 Nov 2012 @ 12:59am

    Re:

    The law of large numbers is misused so much on Wall Street that I doubt understanding it is the main problem. I think knowing where it applies and where it breaks down is even more important than knowing the law itself. Nate Silver could end up getting second-leveled by people relying on his analysis too much. If enough people change their behaviour accordingly, that aggregation kills the application of the law of large numbers, and that is the real problem!

  29. Richard (profile), 8 Nov 2012 @ 5:49am

    Re:

    Imagine you are an excellent stock picker. When you are right, you make $50. But when you are wrong, you lose $500. You're right 90% of the time. Over the course of 10 trades, you're down $50.


    You are assuming that the probability analysis takes no account of the scale of the gains or losses. In reality it would do so. The numbers ARE reliable. Your problem is that you are only looking at half of them.

  30. orbitalinsertion (profile), 8 Nov 2012 @ 7:19am

    Re: Re: Re:

    It's more silly than that, if I understand correctly. If one thinks the odds are 50/50, what exactly would one bet on?

  31. Anonymous Coward, 8 Nov 2012 @ 7:22am

    Didn't Nate predict 5% for libertarians?

  32. orbitalinsertion (profile), 8 Nov 2012 @ 7:50am

    Re: A perfect Nate Silver example of the 90%

    You know, when the weather says 80% chance of rain, it doesn't always rain. Science and math suck. And conspiracies!

  33. JEDIDIAH, 8 Nov 2012 @ 8:09am

    The problem with ideal models.

    > If people understood it, they'd understand why casinos never lose, and why insurance isn't a ripoff.

    Insurance is a ripoff because the process is corrupt. They have legions of lawyers trying to give them an excuse to not pay you. They might try to welsh even without the opinion of a coverage lawyer.

    It's a conflict of interest problem inherent to "for profit" insurance.

    The concept is not wrong, just problematic when allowed to be handled by Ferengis. It's the gap between ideal models and actual practice that turns people off of "policies lobbied for by economists".

  34. Rob, 8 Nov 2012 @ 8:18am

    Re:

    One could make the bet a little more meaningful by upping the ratio of payouts to match your probabilities. If one believes Obama has a 90% chance to win, one should consider it an even bet to offer 9:1 odds.

  35. Niall (profile), 9 Nov 2012 @ 5:13am

    Re: The problem with ideal models.

    'Welch', not 'Welsh'. Millions of Celts just cried out and were silenced by that insult!
