Nov 27 2012

Catalan elections 2012: evaluating electoral forecasting

Published at 9:08 am under Uncategorized

I have been tempted to use “The magnitude of the tragedy” as the title of this post, quoting Quim Monzó, a well-known Catalan writer, mostly because the perception is that nobody predicted such results.

So the elections have produced a new Catalan government with fewer seats for the previously winning party (CiU), but substantially more seats for the two left-wing parties that also support a referendum on Catalan sovereignty (ERC and ICV). The difference in the interpretation of the results between the Spanish and the international press is astonishing, with the Catalan press more aligned with the international interpretation. See Divisive Election in Spain’s Catalonia Gives Win to Separatist Parties at the New York Times, or this editorial from the Financial Times, Barcelona’s draw, which emphasizes that two thirds of the parliament supports a referendum.

Results

The following figure shows the results of the elections in vote share for each party (black dot and discontinuous black line), along with the prediction of the pooling the polls model and its uncertainty, as well as the point predictions and errors associated with each of the individual polls.

gr-partits-separats-periode_rellevant-resultats.png

It is worth mentioning that, except for CiU, every party has at least one poll that predicts its actual result within the margin of error. The model is also capable of capturing the trend of the polls.

An evaluation of the behaviour of the “pooling the polls” model

To assess how well the pooling the polls model behaves, I have made some comparisons using two different measures of the precision of the polls:

  • MND: Mean of the normalized differences (also, and better, known as the Mean Absolute Percentage Error, MAPE). The idea is that a party’s actual election result is compared with the value predicted by the poll, and a percentage difference is computed. A mean is then taken over all the parties in the poll. The result can therefore be interpreted as the mean percentage difference between the results and the predictions: a value of MND=0.2 means that the predictions of this poll diverge, on average, by 20 percent from the results. The MND is appropriate when some polls do not have predictions for all parties, since the missing values do not bias the result. However, the MND gives equal weight to four differences of 10% and to a single difference of 40% combined with three exact predictions.

  • SQD: Sum of the squares of the differences. Here the idea is to penalize larger prediction errors: the difference between the prediction and the actual value is squared, all those squared differences within a poll are added, and the square root is taken. The SQD is less appropriate when some parties lack a prediction, because it effectively assumes that the difference is zero. But it is a more fine-grained tool for measuring big divergences between predicted values and results, because it penalizes large differences in the distribution of support.

The functions in R to compute these two measures are:

# MND: mean of the absolute normalized (percentage) differences
mnd <- function(pred, res) mean(abs(1 - (pred / res)), na.rm = TRUE)
# SQD: square root of the sum of squared differences
sqd <- function(pred, res) sqrt(sum((res - pred)^2, na.rm = TRUE))
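To make the MND caveat above concrete, here is a quick sketch with made-up vote shares (the numbers are purely illustrative, not from any real poll): one hypothetical poll misses every party by 10%, while another predicts three parties exactly but misses one by 40%. The MND cannot tell them apart, whereas the SQD ranks the single big miss as worse.

```r
# MND and SQD as defined above
mnd <- function(pred, res) mean(abs(1 - (pred / res)), na.rm = TRUE)
sqd <- function(pred, res) sqrt(sum((res - pred)^2, na.rm = TRUE))

# Hypothetical true vote shares for four parties (in percentage points)
res    <- c(30, 20, 10, 10)
# Poll A: every prediction off by exactly 10%
pred_a <- c(33, 18, 11, 9)
# Poll B: three exact predictions, one off by 40%
pred_b <- c(30, 20, 10, 14)

mnd(pred_a, res)  # 0.1
mnd(pred_b, res)  # 0.1 -- identical: MND cannot distinguish the two polls
sqd(pred_a, res)  # sqrt(9 + 4 + 1 + 1) = sqrt(15), about 3.87
sqd(pred_b, res)  # sqrt(16) = 4 -- SQD ranks the single big miss as worse
```

Both polls have an MND of 0.1, but the SQD is larger for poll B, which is exactly the penalization of concentrated errors described above.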

The following figure shows the values of the mean of the normalized differences for all polls with a fieldwork in the last two months before the elections, as well as the relationships between the MND and the sample size of the poll and its date of fieldwork.

fig-mnd.png

The figure shows that, by this measure, the model does not do a very good job: it is only slightly better than a simple average of the polls. This is because the MND applies no penalization to larger differences.

When the sum of the squares of the differences is used instead, the results change substantially: the model is, by far, the best prediction. This is due to the penalization of larger differences, so the model seems to offer the best overall compromise across all parties.

fig-sqd.png

It is also worth mentioning that the two largest surveys (the CIS and the CEO, the official survey bodies of Spain and Catalonia, respectively) perform very differently. The CIS does a very good job (as is traditional), while the CEO performs really badly. This is only the third time that the CEO has produced an estimation of the final vote share, so its “cooking” of the raw results still needs substantial improvement.
It is also important to notice that, as the fieldwork date approaches the election date, the surveys tend to become more accurate. So, once again, the law that forbids publishing survey results during the last week before election day does not seem very reasonable.

Translation from vote share to seats

The translation of vote share into seats has proved to perform quite well, so my fears and doubts about it have diminished, though they have not entirely faded away.

To sum up

Taken together, the MND and the SQD suggest that the model performs quite well for big parties and less well for small ones, and that it is a clear improvement over a simple plain mean of the polls.

Let me emphasize this point again: pooling the polls does not work wizardry with the predictions. It is only a way of taking a sophisticated mean of the polls, and all its virtues rest on the quality of its sources: the polls themselves. If no poll had indicated less support for the winning party, no amount of model sophistication could have compensated for it.

4 Responses to “Catalan elections 2012: evaluating electoral forecasting”

  1. Congratulations. Is it just my impression, or could it be that your model would have been more accurate without the direct-vote data, especially for the CiU estimates?

    • xavierfim says:

      Thanks.

      I don’t think so, mostly because the last poll with direct-vote data is the February barometer of the CEO, which is too far from the election date to have much effect. For the last two CEO barometers I use their own estimation, with the aforementioned February barometer as an anchor for calibrating the old CEO series of direct vote against the new series of estimated vote.

  2. I second the congratulations! MND = Mean Absolute Percentage Error (MAPE)?
