from the random-ain't-so-random dept
You may have heard by now that the political website Daily Kos has come out and explained that it believes the polling firm it has used for a while, Research 2000,
was faking its data. While it's nice to see a publication come right out and bluntly admit that it had relied on data that it now believes was not legit, what's fascinating if you're a stats geek is
how a team of stats geeks figured out there were problems with the data. As any good stats nerd knows, the concept of "randomness" isn't quite as random as some people think, which is why
faking randomness almost always leaves tell-tale signs that the data was faked or manipulated. For example, one very, very common test is to apply
Benford's Law to the first digits of the numbers in a data set: in many naturally occurring data sets the leading digit follows a logarithmic distribution (a leading 1 turns up roughly 30% of the time), not the even spread most people expect, and fabricated numbers tend to give that away.
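Just to make that concrete, here's a minimal sketch of what a Benford's Law check looks like in Python. The sample numbers are made up purely for illustration, and to be clear, this isn't the test the R2K analysts used; it's just the classic example of how fakery shows up in digit patterns:

```python
import math
from collections import Counter

def benford_expected(d):
    """Expected share of leading digit d (1-9) under Benford's Law."""
    return math.log10(1 + 1 / d)

def leading_digit_shares(values):
    """Observed share of each leading digit (1-9) in a list of positive numbers."""
    digits = [int(str(v)[0]) for v in values]
    counts = Counter(digits)
    return {d: counts[d] / len(values) for d in range(1, 10)}

# Hypothetical sample: naturally occurring counts (populations, vote totals)
# tend to follow Benford's Law; numbers someone typed in "at random" usually don't.
sample = [1832, 26545, 9071, 114, 3891, 740123, 52004, 6283, 1204, 987]
observed = leading_digit_shares(sample)
for d in range(1, 10):
    print(f"leading digit {d}: observed {observed[d]:.2f}, Benford {benford_expected(d):.2f}")
```

A fabricated data set whose leading digits are spread roughly evenly across 1 through 9 will stick out immediately against the expected column.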
In this case, the three guys who had problems with the data (Mark Grebner, Michael Weissman, and Jonathan Weissman) zeroed in on just a few clues that the data was faked or manipulated. The first thing they noticed was that when R2K polled men and women on how they viewed certain politicians or political parties (favorable/unfavorable), there was an odd pattern: if the percentage of men who rated a particular politician favorable or unfavorable was an even number, so was the percentage of women. The two figures always seemed to match up: if the male percentage was even, the female percentage was even; if the male percentage was odd, the female percentage was odd. Yet these are two independently sampled groups, so one result shouldn't influence the other. That 34% of men find a particular politician favorable should have no bearing on whether the percentage of women who do is even or odd. In fact, this happened in
almost every such poll that R2K did, to a degree that is about as close to impossible as you can imagine:
Common sense says that that result is highly unlikely, but it helps to do a more precise calculation. Since the odds of getting a match each time are essentially 50%, the odds of getting 776/778 matches are just like those of getting 776 heads on 778 tosses of a fair coin. Results that extreme happen less than one time in 10^228. That's one followed by 228 zeros. (The number of atoms within our cosmic horizon is something like 1 followed by 80 zeros.) For the Unf, the odds are less than one in 10^231. (Having some Undecideds makes Fav and Unf nearly independent, so these are two separate wildly unlikely events.)
There is no remotely realistic way that a simple tabulation and subsequent rounding of the results for M's and F's could possibly show that detailed similarity. Therefore the numbers on these two separate groups were not generated just by independently polling them.
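If you want to check their arithmetic, here's a quick back-of-the-envelope version of the coin-flip calculation in Python. The 778 polls and 776 matches come straight from the quote above; everything else is just the standard binomial tail, computed exactly so the astronomically small number doesn't underflow:

```python
from fractions import Fraction
from math import comb, log10

def tail_probability(n, k):
    """Exact P(X >= k) for X ~ Binomial(n, 1/2): the chance of at least k
    even/odd parity matches if the two percentages were independent coin flips."""
    return sum(Fraction(comb(n, i), 2**n) for i in range(k, n + 1))

# 776 parity matches out of 778 favorable ratings, per the quote above.
p = tail_probability(778, 776)

# Take the log of numerator and denominator separately to stay in float range.
order = log10(p.numerator) - log10(p.denominator)
print(f"P = 10^{order:.1f}  (i.e. less than one chance in 10^228)")
```

Run it and you get a probability on the order of 10^-229, right in line with the "less than one time in 10^228" figure quoted above.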
The other statistical analysis that I found fascinating looks at weekly changes in favorability ratings: the R2K numbers almost always moved at least a little from week to week, whereas in real polling data, no change at all is the most common result. As they point out, if you look at, say, Gallup data, you get this nice typical bell curve:
But if you look at the R2K data, you get things like the following:
Notice that substantial dip at the 0% mark. That pattern suggests faked or manipulated data, produced by someone who thinks "random" data means the numbers have to keep changing. Back to Grebner, Weissman and Weissman:
How do we know that the real data couldn't possibly have many changes of +1% or -1% but few changes of 0%? Let's make an imaginative leap and say that, for some inexplicable reason, the actual changes in the population's opinion were always exactly +1% or -1%, equally likely. Since real polls would have substantial sampling error (about +/-2% in the week-to-week numbers even in the first 60 weeks, more later) the distribution of weekly changes in the poll results would be smeared out, with slightly more ending up rounding to 0% than to -1% or +1%. No real results could show a sharp hole at 0%, barring yet another wildly unlikely accident.
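To see their point concretely, here's a tiny simulation of the scenario in that quote. I'm assuming (my simplification, not their model) that the "+/-2%" sampling error can be treated as Gaussian noise with a standard deviation of about two points added to each measured change. Even when the true change is never 0%, the noise smears the measured changes into a bell shape that peaks at 0%, the opposite of a curve with a hole in the middle:

```python
import random
from collections import Counter

random.seed(42)

def simulate_weekly_changes(n_weeks=100_000, noise_sd=2.0):
    """Weekly poll-to-poll changes when the TRUE change is always exactly
    +1 or -1 point, but the measured change also carries sampling noise
    (modeled here as Gaussian with sd ~2 points -- an assumption about
    how to read the "+/-2%" in the quote above)."""
    counts = Counter()
    for _ in range(n_weeks):
        true_change = random.choice([-1, 1])
        measured = true_change + random.gauss(0, noise_sd)
        counts[round(measured)] += 1
    return counts

counts = simulate_weekly_changes()
total = sum(counts.values())
for delta in range(-6, 7):
    share = counts[delta] / total
    print(f"{delta:+3d}%  {'#' * round(share * 200)}  {share:.1%}")
```

The printed histogram has its tallest bar at 0%, just as the quote predicts: real sampling noise fills in the zero bucket, so a data set with a sharp dip there is essentially impossible to get honestly.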
Kos is apparently planning legal action, and so far R2K hasn't responded in much detail other than to
claim that its polls were conducted properly. I'm not all that interested in that part of the discussion, however. I just find it neat how the "faux randomness" may have exposed the problems with the data.
Filed Under: data, faked, random
Companies: dailykos, research 2000