How Accurate Were the Pollsters in Flanders? You be the Judge

Last weekend, elections were held in Belgium: not only for the European Parliament, but also federal and regional elections. As usual, some people were quick to dismiss the polls. Isabel Albers, General Editor of De Tijd / L'Echo, made it clear on Twitter that she was not impressed:

Some losing politicians made similar claims. Groen (the Flemish Green party) co-chair Nadia Naji described her party's results as better than what the polls had predicted. Let's have a look at that claim:



In 5 of the last 6 polls the true result was well within the usual confidence intervals (more on confidence intervals later). Only in the most recent poll was the true result just outside the 95% confidence interval, but notice that, to some extent, this is because the sample size of that poll was 2000, higher than usual, which makes the interval narrower. As the blue line suggests, if you average the most recent polls, you come very close to the actual election result (more on poll averaging later). BTW, I published this chart on Twitter before Nadia Naji made her claim, so I'm not cherry-picking methods, nor is this "hindsight bias".

Before I show the charts for the other Flemish parties, let me explain in more detail how they were constructed. First, I downloaded the results of polls from the last 10 years from Europe Elects and plotted them against the true election results as published by ibz. The polls are the black dots, the true results are red. I then calculated 95% Bayesian credible intervals using non-informative priors; those are shown as the grey bars around each black poll result. It would lead us too far to explain why I prefer Bayesian credible intervals over the more traditional confidence intervals. In essence, the primary reason is that while interpreting confidence intervals requires linguistic acrobatics, credible intervals are straightforward to interpret and explain: Bayesian credible intervals are what laymen often think confidence intervals are. That said, if you're interested, I wrote two blogposts about Bayesian approaches: A note on observing zero successes and A Frequentist and a Bayesian go to a bar ...
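For readers who want to reproduce the idea: I did the actual calculations in R, but the recipe is short enough to sketch in Python, assuming a flat Beta(1, 1) prior (the poll numbers in the example are made up):

```python
from scipy.stats import beta

def credible_interval(successes, n, level=0.95, prior=(1, 1)):
    """Bayesian credible interval for a poll proportion.

    With a Beta(a, b) prior, the posterior after seeing `successes`
    out of `n` respondents is Beta(successes + a, n - successes + b);
    the interval is read off its quantiles.
    """
    a, b = prior
    posterior = beta(successes + a, n - successes + b)
    tail = (1 - level) / 2
    return posterior.ppf(tail), posterior.ppf(1 - tail)

# made-up example: 80 of 1000 respondents (8%) say they'd vote for a party
lo, hi = credible_interval(80, 1000)  # an interval of a few points around 8%
```

With a flat prior and samples of this size, the credible interval is numerically very close to the familiar frequentist confidence interval, which is why the charts look the same either way.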

Finally, I summarised the poll results per party with Locally Estimated Scatterplot Smoothing (LOESS; that's the blue line in the chart). Again, without going into the technical details, in a very broad sense you can think of it as a type of moving average.
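If you want to experiment yourself, a LOESS fit is available off the shelf; the sketch below uses statsmodels on an invented toy signal (the real charts use the poll series, of course):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 60)                          # stand-in for poll dates
y = 20 + 2 * np.sin(x) + rng.normal(0, 1, x.size)   # noisy poll percentages

# frac is the span: each local regression uses 40% of the points,
# so a larger frac gives a smoother (more "averaged") line
smoothed = lowess(y, x, frac=0.4, return_sorted=False)
```

The choice of `frac` plays the same role as the window length of a moving average: too small and you chase noise, too large and you flatten real shifts in voting intention.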

So let's have a look at the other parties then, starting with Vlaams Belang and N-VA:








Clearly, even the polling averages were quite off for those two parties.

As a sidenote, there were rumours that N-VA commissioned a poll showing Vlaams Belang at 33%, but the results were never published. Since we don't know the size of the alleged poll, we simply calculate how likely it is that the true percentage of Vlaams Belang voters is 33% or more, given each published poll. In a Bayesian context this is very easy: if you're not fussy about the prior, a few lines of R code will do. We do this for the last 10 polls (all of them from 2024) and we come up with the following list of probabilities, expressed as percentages:

            0.000000000145195
            0.001232714
            0.0000001043081
            0.000000421385
            0.00006976477
            0.006973164
            0.0000006901956
            0.000004786569
            0.00000003034463 
            0.00007677729

Clearly, if you trust the polls, at no point in 2024 was the claim of 33% even remotely likely. And if you don't trust the polls, why order a secret one in the first place? A result like that could only occur if something major had happened right before the secret poll, such as an extremely favourable debate, a scandal involving other parties, or a major international event that drastically shifted voting intentions. None of that happened, so I can't believe that the research department of N-VA would have taken a result of 33% for Vlaams Belang seriously. I therefore suspect this rumour is false, or that the study was dismissed internally by N-VA because it was not credible enough. A final possibility is that this poll was only conducted in a province where Vlaams Belang scored better than the Flemish average. But even then it would be an unlikely outcome.
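As above, I did this in R, but the calculation is easy to reproduce. Here is a hypothetical Python sketch, again with a flat prior, computing the posterior probability that the true share reaches 33% (the poll in the example is invented, not one of the ten polls above):

```python
from scipy.stats import beta

def prob_at_least(threshold, successes, n, prior=(1, 1)):
    """Posterior probability that the true share is >= threshold,
    given a poll with `successes` out of `n` and a Beta(a, b) prior."""
    a, b = prior
    return float(beta(successes + a, n - successes + b).sf(threshold))

# invented example: a poll putting Vlaams Belang at 27% with n = 1000
p = prob_at_least(0.33, 270, 1000)  # vanishingly small
```

A poll at 27% with a thousand respondents puts 33% about four posterior standard deviations away, which is why the probabilities in the list above are so tiny.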


So far we have covered Groen, Vlaams Belang and N-VA. Now let's look at the remaining parties: CD&V, Open VLD, Vooruit and PVDA.




  









While you should judge for yourself, let me, at this point, share my conclusions:

To be clear, it might also be a polling error. The hypothesis of a late shift, where late deciders opted more for N-VA, will need to be confirmed (or disproved) by academic research.

Either way, my view is that dismissing the polls altogether, like Isabel Albers did, doesn't seem to be supported by the facts. In short, her view is that the polls did not accurately predict the election trends. The mistake she makes is assuming that 'the election trend' in the media reflects the objective trend (where parties like Groen and Open VLD are significantly losing ground). Instead, it is the element of surprise, largely influenced by the polls themselves, that shapes media trends. From this perspective, polls become self-defeating prophecies that can never truly capture the 'election trend' as portrayed by the media.
A more nuanced view was offered by Radio 1 (see Peilingen voorspelden overwinning Vlaams Belang, maar het werd N-VA: waarom zitten voorspellingen er soms naast?, i.e. "Polls predicted a Vlaams Belang victory, but it became N-VA: why are predictions sometimes off?"). While I agree with what Prof. Dr. John Lievens says there about polls, I must say that I don't always agree with the criticism coming from the academic world in general. I wrote about that in Dutch in Over kaduke telescopen en de glazen bol van Madame Soleil ("On broken telescopes and Madame Soleil's crystal ball").

Interestingly, one of the shocks of election day was the announcement of an exit poll by the ULB (the French-speaking Free University of Brussels) that had the far-left party PTB/PVDA as the biggest party in the French-speaking part of Belgium. A few hours later it was clear that this was not going to happen, and I haven't heard anything about it since. Apparently academics prefer to criticize commercial companies rather than their colleagues. I can only observe that what Belgian academics failed to do, my colleagues from Ipsos The Netherlands did quite well, as you can see from this table (in Dutch, my apologies; the last column is the difference, in percentage points, between the Ipsos The Netherlands exit poll and the actual Dutch results):

That said, the methodology for an exit poll is completely different from gauging voting intention beforehand. So let's come back to the opinion polls. What you might have observed is that for quite a few parties and elections the blue LOESS line comes quite close to the actual results (the red dots). In some cases, though, even the polling average is quite far from the actual results. With a bit of imagination you could see the blue line as a poor man's poll aggregator. Instead of relying on a single poll, which might have biases or inaccuracies, poll aggregators combine results from various sources to create a more reliable and accurate picture. They often use sophisticated statistical techniques to weigh and average the results of the individual polls, accounting for factors like sample size, polling methodology, and the historical accuracy of the polling organizations. Let's illustrate that last point by looking at the chart for Vlaams Belang again, but now with the dots coloured per polling company.
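A toy version of such weighting, assuming only sample size matters (real aggregators also model methodology, recency and house effects), could look like this in Python:

```python
import numpy as np

def weighted_poll_average(shares, sizes):
    """Inverse-variance weighted average of poll shares: polls with
    larger samples (smaller sampling variance) get more weight."""
    shares = np.asarray(shares, dtype=float)
    sizes = np.asarray(sizes, dtype=float)
    variance = shares * (1 - shares) / sizes   # binomial sampling variance
    weights = 1 / variance
    return float(np.sum(weights * shares) / np.sum(weights))

# invented recent polls for one party: 25%, 27% and 26%,
# with the last poll twice as large as the others
avg = weighted_poll_average([0.25, 0.27, 0.26], [1000, 1000, 2000])
```

The larger poll pulls the average towards its value; with equal sample sizes this reduces to the plain mean.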



What we see is that between 2015 and 2020, Kantar systematically showed lower numbers for the radical-right party Vlaams Belang than the other pollsters. Taking such so-called "house effects" into account can improve the overall accuracy of a poll aggregator's estimate.
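As an illustration with made-up numbers: when you have a benchmark such as an election result, a crude house-effect estimate is each pollster's average deviation from it (real aggregators estimate house effects jointly, relative to the overall trend, since election benchmarks are rare):

```python
# invented polls: (pollster, reported share); suppose the eventual
# election result, our benchmark, was 19%
polls = [("A", 0.155), ("A", 0.160), ("B", 0.190), ("B", 0.195)]
result = 0.19

# crude house effect: each pollster's average deviation from the benchmark
deviations = {}
for pollster, share in polls:
    deviations.setdefault(pollster, []).append(share - result)
house_effect = {p: sum(d) / len(d) for p, d in deviations.items()}

# an aggregator can subtract these offsets before averaging
adjusted = [share - house_effect[pollster] for pollster, share in polls]
```

In this toy example pollster A underestimates the party by about 3 percentage points, much like the Kantar pattern in the chart; after subtracting the offsets, the polls line up around the true result.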
As far as I know, poll aggregation is not done in Belgium. Prof. Dr. Tom Louwerse maintains Peilingwijzer, a poll aggregator for The Netherlands; in the US there are FiveThirtyEight (538), RealClearPolitics and a few others. Alexandre Andorra, the host of the excellent podcast Learning Bayesian Statistics, has done some work in that area for France. Maybe someone should attempt poll aggregation in Belgium?
