Wisdom of the crowds revisited: 10,000 readers of the Volkskrant can't be wrong

It's been about three weeks since the elections in the Netherlands, but it seems the discussion about the opinion polls is not finished yet. There is a lot of interest in the subject in Belgium especially, probably because municipal elections will be held here next week.
The general feeling in the media is that the accuracy of the polls in the Netherlands was not good. I tried to put the so-called bad results in perspective by expressing the difference between a poll and the actual result as a function of the sample size. You can find that blog post here.
I would like to return to this subject after reading an article on Volkskrant.nl, the online version of de Volkskrant, a popular newspaper in the Netherlands. If you speak Dutch you can read that article here. The title says that 9096 readers of Volkskrant.nl made a wrong prediction of the election results.

There are a few remarkable things about this article. To start with, the winner, i.e. the participant who deviated the least from the actual results, won with a deviation of 11 seats. I don't know exactly what kind of predictions were allowed, but if you had to re-distribute the 150 available seats, the sum of the absolute values of the differences should always be an even number. I assume that the winner's guess did not sum to 150, or that smaller parties were grouped somehow.
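To see why, note that if both the prediction and the actual outcome sum to 150 seats, the per-party differences sum to zero, and since |x| always has the same parity as x, the sum of the absolute differences must be even. A quick sketch in Python (the 11-party split used here is an arbitrary assumption for illustration):

```python
import random

SEATS, PARTIES = 150, 11  # 11 parties is an arbitrary choice for illustration

def random_allocation():
    """Distribute all 150 seats at random over the parties."""
    counts = [0] * PARTIES
    for party in random.choices(range(PARTIES), k=SEATS):
        counts[party] += 1
    return counts

def total_deviation(prediction, actual):
    """Sum of absolute per-party seat differences."""
    return sum(abs(p - a) for p, a in zip(prediction, actual))

actual = random_allocation()
# Every prediction that allocates all 150 seats deviates by an even number:
assert all(total_deviation(random_allocation(), actual) % 2 == 0
           for _ in range(1000))
```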

A second remarkable point is that they admit that the average prediction of the participants deviated strongly from the actual results, just like the opinion polls. To start with, taking the average prediction is probably not a good idea; moreover, the distance between that average and the actual results was 26 seats, which would put it (slightly) behind all the polls. Those polls scored 24, 24, 18 and 18 seats for "De Stemming", TNS, Maurice de Hond and Ipsos respectively. Apart from the fact that the average prediction of the Volkskrant.nl readers who participated in this "game" was actually worse than the worst poll, this admission contrasts sharply with the popular idea that opinion polls are either wrong or, when they are good, could just as well have been replaced by an educated guess. Apparently this does not hold for the readers of the Volkskrant, or at least not for those who participated. It also contrasts with what is known as "the wisdom of the crowds", the idea that "the aggregation of information in groups results in decisions that are often better than could have been made by any single member of the group". It's very likely that James Surowiecki, the author of "The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations", was not referring to this type of election prediction, but it's nonetheless an interesting observation.


A third remarkable point is that in a large group, i.e. 9096 participants, even the best result is still 11 seats away from the actual result. Without further information this is difficult to assess, so I made a small simulation study in which I randomly distributed the 150 available seats 9096 times and calculated, for each random prediction, the sum of the absolute values of the differences from the actual result. Reassuringly, these purely random results are very far off. In a second simulation, I attributed the 150 seats with a probability proportional to the election results of 2010. Notice, though, that the difference between the two election results is 46 seats, so it's not as if hardly anything changed between those two elections. When I did this I got a best result of 10 seats, even better than the Volkskrant winner. But of course we're looking only at the best result, the minimum of the sum of the absolute values of the differences, so this could have happened accidentally. I therefore replicated the experiment 100 times and found the following distribution.
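The second simulation can be sketched as follows. The post itself contains no code; this Python version assumes the 2010 Tweede Kamer seat counts as sampling weights, and the actual 2012 seat counts would still have to be filled in:

```python
import random
from collections import Counter

SEATS = 150
N_READERS = 9096

# 2010 Tweede Kamer seat counts (they sum to 150), used as sampling weights
seats_2010 = {"VVD": 31, "PvdA": 30, "PVV": 24, "CDA": 21, "SP": 15,
              "D66": 10, "GroenLinks": 10, "ChristenUnie": 5,
              "SGP": 2, "PvdD": 2}
parties = list(seats_2010)
weights = list(seats_2010.values())

def random_prediction():
    """Allocate the 150 seats with probability proportional to the 2010 result."""
    drawn = Counter(random.choices(parties, weights=weights, k=SEATS))
    return [drawn[p] for p in parties]

def deviation(prediction, actual):
    """Sum of absolute seat differences between a prediction and the result."""
    return sum(abs(p - a) for p, a in zip(prediction, actual))

def best_of_crowd(actual, n=N_READERS):
    """Smallest deviation among n independent random 'readers'."""
    return min(deviation(random_prediction(), actual) for _ in range(n))
```

With the real 2012 seat counts plugged in as `actual`, repeating `best_of_crowd` 100 times gives the distribution of the best random guess among 9096 "readers".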
 
The graph illustrates that the best Volkskrant result of 11 seats (which, given the parity argument above, was probably really 10 or 12) is indeed not too bad from this perspective, but it could also have been achieved by having 9096 readers choose randomly with probabilities proportional to the previous election results.
Some of you might be tempted to plot the differences observed for the opinion polls on this graph as well, but that would not be a fair comparison, since the polls did not result from taking the best of 9096 replications. It would only be fair if we had had 9096 opinion polls and selected the best one.


Comments

  1. Nicely done. If I see it right, you're basically saying that the 9096 Volkskrant readers did not outperform chance - or at least not chance informed by the 2010 results. It seems to me, though, that these readers were also led astray by the poll results in the days before the elections. The pollsters are maybe right to claim that their results more or less adequately represented the party preferences of the people who had made up their minds by that point, but nobody could foresee the 'skewed' preferences of the voters deciding what to vote in the 24 hours before the actual elections - bandwagon effect or not, both the conservative liberals and the social democrats peaked at the last instance.
    What the 2012 elections have taught us, I'd say, is that with a larger and larger fraction of the electorate making up their minds at the last instance, it becomes harder and harder for pollsters to predict the outcomes. At the next elections, pollsters and media alike are well-advised to present poll results as 'prediction of the election results based on the 50/60/70% of the electorate who have decided on their votes so far', with the additional remark that the actual results may deviate considerably. And if Volkskrant readers would ignore the polls, or make a good guess of the last-minute 'bias', they might very well beat the polls. What do you think?

  2. I agree that giving more clarity would certainly help, or as you put it: "pollsters and media alike are well-advised to present poll results as 'prediction of the election results based on the 50/60/70% of the electorate who have decided on their votes so far', with the additional remark that the actual results may deviate considerably." I'm not sure about the Netherlands, but in Belgium this is often explicitly stated by pollsters (in Dutch: "een peiling zonder voorspellend karakter", i.e. a poll without predictive value). I guess this has less to do with the desire to be methodologically correct than with having an excuse in case the poll results deviate strongly from the actual results. Furthermore, media are not always willing to include this type of methodological warning, even when the pollster is willing to do so.
    I'm not a political scientist, but I understand there is some debate about the proportion of people who make a last-minute decision and about the effect of polls on voting behavior. Incidentally, yesterday there was a radio program on opinion polls and the upcoming municipal elections in Belgium. On that program a survey was mentioned in which, if I understood correctly, half of the respondents were asked whether they thought opinion polls had an influence on voting behavior, and the other half whether their own choice was influenced by polls. The proportion answering yes in the first group was (much) larger than in the second group.
    Either way, I have no information on the timespan that the Volkskrant readers could cast their 'vote', but it would be interesting to see whether the evolution in the readers' choice correlated with the changes reported in the polls.

  3. Two things I forgot to mention. First, even when a methodological disclaimer is published, the actual article should be in line with that disclaimer. Often that is not the case. I believe this is one of the points that has been stressed on peilloos.nl as well.
    Secondly, maybe the wisdom of the crowds comes from women: more than 80% of the participants were men ;-)

