Twitter has been abuzz today with discussion of a Sunday Times MRP model which reportedly showed that, if an election were held today, Boris Johnson would lose both his majority and his seat. It’s a striking result, but not totally inconsistent with recent national polls, which have shown Labour making gains. It caught my attention, though, because of the vast amount of discussion it was generating relative to normal polls. At least part of the reason is that this is an ‘MRP’ model.
MRP stands for Multilevel Regression and Poststratification. In short, the MRP method takes a large sample of poll respondents (in this case over 20,000) and uses a model (usually logistic regression) to predict how each individual respondent will vote, based on a combination of individual-level data (e.g. race, gender, age, education) and information about their local area (in this case, presumably, their parliamentary constituency). These levels are the ‘multilevel’ part of MRP. The final part of the method, poststratification, involves using this model to predict how each demographic group in each local area might vote. For example, the model might predict that a white, 50-year-old woman who went to university has a 30% chance of voting Labour.
In the poststratification stage, this percentage is predicted for every demographic group, then multiplied by that group’s population in an area; summing over all the groups in an area gives the party’s final predicted vote share there. Under electoral systems which base election results on geographic areas – such as in the UK and US – it is argued that MRP can successfully predict election results on an area-by-area basis. Part of MRP’s popularity stems from 2016, when it performed much better than state polls at predicting Donald Trump’s surge in some key swing states. Then, in 2017, YouGov’s MRP model was one of the few predictions to correctly suggest a hung parliament. However, MRP is still just a statistical model, and it relies on regular polling data.
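To make the two stages concrete, here is a minimal toy sketch in Python. This is emphatically not FocalData’s pipeline: a production MRP model would use a Bayesian multilevel model rather than plain logistic regression, and every name and number below is made up for illustration.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Stage 1: model individual vote choice from poll respondents, using both
# individual-level demographics and an area (constituency) identifier.
# (A real MRP model would be multilevel/Bayesian; plain logistic regression
# stands in for it here.)
respondents = pd.DataFrame({
    "age_band":     ["18-34", "35-54", "55+", "35-54", "55+", "18-34"],
    "degree":       [1, 1, 0, 0, 1, 0],
    "constituency": ["A", "A", "B", "B", "A", "B"],
    "votes_labour": [1, 0, 0, 1, 1, 0],
})
X = pd.get_dummies(respondents[["age_band", "degree", "constituency"]].astype(str))
model = LogisticRegression().fit(X, respondents["votes_labour"])

# Stage 2: poststratification. For every demographic cell in every area,
# predict P(votes Labour), then weight by that cell's share of the local
# population (in reality these weights come from census data).
cells = pd.DataFrame({
    "age_band":     ["18-34", "35-54", "55+"] * 2,
    "degree":       ["1", "0", "1", "0", "1", "0"],
    "constituency": ["A"] * 3 + ["B"] * 3,
    "pop_share":    [0.2, 0.5, 0.3, 0.1, 0.4, 0.5],  # sums to 1 within a seat
})
Xc = pd.get_dummies(cells[["age_band", "degree", "constituency"]])
Xc = Xc.reindex(columns=X.columns, fill_value=0)  # align dummy columns
cells["p_labour"] = model.predict_proba(Xc)[:, 1]

# Predicted Labour share per seat = population-weighted average over cells.
seat_share = cells.groupby("constituency").apply(
    lambda g: (g["p_labour"] * g["pop_share"]).sum()
)
print(seat_share)
```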
With that in mind, it’s easy to see why the Sunday Times’ MRP model, produced by FocalData, is getting a lot of attention. I am not a subscriber to the Sunday Times, so I could not see the full article, but helpfully FocalData released their constituency data publicly. It was when I started scrolling through these results that I began to have some doubts about their validity. The Liberal Democrats, for example, were predicted to get 8.7% nationwide – a fairly solid result for the party in post-2015 terms – but to lose all but two of their seats. Meanwhile, Labour appeared to be making huge gains in many seats it had never been competitive in before (including my home constituency – North Cornwall – where the FocalData model predicts Labour to go from 9% to 30%).
Closer inspection of the data reveals a worrying pattern. For all parties which the model predicts (Conservative, Labour, Liberal Democrat, Green, Brexit, SNP and Plaid), the predicted change in vote from 2019 appears to be massively dependent on the 2019 vote share itself. All parties are predicted to perform relatively worse in their best seats from 2019, and relatively better in their worst seats.
In the plots below, I use Pippa Norris’s past election results data to compare the FocalData prediction to the 2019 election results. In these plots, the x-axis is the 2019 election result and the y-axis is the FocalData predicted vote share. The black line represents x=y, as a visual aid: points above it are constituencies where parties improved on their 2019 vote share in the FocalData projection. For each plot, a linear trend line shows how the FocalData model predicts parties performing relatively worse in their best seats and relatively better in their worst seats (that is, the blue regression line is shallower than the black x=y line).

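For anyone who wants to play with this, here is a rough Python sketch of how one of these panels can be built. The data below is synthetic (generated to mimic the pattern in question), not the real results or predictions:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for the real data: share_2019 would be a party's 2019
# result in each seat, share_mrp the FocalData prediction for the same seat.
rng = np.random.default_rng(0)
share_2019 = rng.uniform(5, 60, 200)
share_mrp = 0.6 * share_2019 + 5 + rng.normal(0, 2, 200)

fig, ax = plt.subplots()
ax.scatter(share_2019, share_mrp, s=12, alpha=0.6)

# Black x = y reference line: points above it improved on their 2019 share.
lims = [0, 65]
ax.plot(lims, lims, color="black", label="x = y")

# Linear trend line: a slope below 1 means the party does relatively worse
# where it was strongest in 2019 and relatively better where it was weakest.
slope, intercept = np.polyfit(share_2019, share_mrp, 1)
xs = np.linspace(*lims, 100)
ax.plot(xs, slope * xs + intercept, label=f"trend (slope = {slope:.2f})")

ax.set_xlabel("2019 vote share (%)")
ax.set_ylabel("Predicted vote share (%)")
ax.legend()
plt.show()
```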
This is pretty unusual. Although there is often some correlation between a party’s result at one election and its swing at the next, it is usually much smaller. For comparison, the same chart is reproduced for the change between 2017 and 2019, without the Brexit Party, which stood no candidates in 2017:

Here you can see a much more typical relationship. Constituency results vary between the two elections, but this variance is fairly similar across the range of seats (from unwinnable to marginal to safe), as shown by the linear trend line having a gradient close to 1 (matching x=y).
To show this in another way, I performed a very simple linear regression for each of the Conservatives, Labour and the Liberal Democrats at the 2015, 2017 and 2019 elections (earlier elections are not comparable due to boundary changes), to see what proportion of the variance in vote share change is explained by the preceding election’s vote share (R²).
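For anyone wanting to replicate this, a minimal Python sketch of the calculation follows, with synthetic data illustrating the two regimes: a roughly uniform swing (low R²) versus a proportional change of the kind the FocalData numbers suggest (high R²). The real calculation, of course, runs on the actual constituency results:

```python
import numpy as np
from scipy.stats import linregress

def share_change_r2(prev_share, new_share):
    """R² from regressing the change in vote share on the previous share."""
    prev = np.asarray(prev_share)
    change = np.asarray(new_share) - prev
    return linregress(prev, change).rvalue ** 2

rng = np.random.default_rng(1)
prev = rng.uniform(5, 60, 400)  # previous vote shares, in points

# A roughly uniform swing: every seat moves ~-2 points, plus noise.
uniform_next = prev - 2 + rng.normal(0, 3, 400)
print(share_change_r2(prev, uniform_next))        # close to 0

# A proportional change: each seat keeps ~60% of its previous share.
proportional_next = 0.6 * prev + rng.normal(0, 2, 400)
print(share_change_r2(prev, proportional_next))   # close to 1
```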

Here we can see how different the FocalData predictions are to previous election results. The previous highest proportion of variation explained was for the Liberal Democrats in 2015. This has a simple explanation: the Liberal Democrats collapsed everywhere, so in places where they had higher vote shares, they had further to fall. Aside from that, vote share at the previous election typically explains under 10% of the variation in vote share change at the next. For the FocalData model, it explains over 70% for all three parties, and a staggering 95% for the Liberal Democrats.
By way of comparison, I repeated the regression, this time also including the estimated Leave vote of each constituency (from Chris Hanretty’s estimates). One might expect the Leave vote to explain a large share of vote share changes, given how much Brexit shapes our politics. Instead, the combined vote share and Brexit vote models yielded increases in R² of just 1% for the Conservatives, 5.3% for Labour and 0.3% for the Liberal Democrats.
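Extending the sketch above to the two-predictor model is straightforward; here is one way to do it with statsmodels (again with hypothetical arrays standing in for the real per-seat data):

```python
import numpy as np
import statsmodels.api as sm

def r2_gain(change, prev_share, leave_share):
    """R² of change ~ prev_share, and the gain from adding the Leave vote."""
    base = sm.OLS(change, sm.add_constant(prev_share)).fit()
    exog = sm.add_constant(np.column_stack([prev_share, leave_share]))
    both = sm.OLS(change, exog).fit()
    return base.rsquared, both.rsquared - base.rsquared

# Hypothetical per-seat arrays; the real ones would come from the results
# data and Chris Hanretty's constituency Leave estimates.
rng = np.random.default_rng(2)
prev_share = rng.uniform(5, 60, 400)
leave_share = rng.uniform(30, 70, 400)
change = -0.4 * prev_share - 0.05 * (leave_share - 50) + rng.normal(0, 2, 400)
print(r2_gain(change, prev_share, leave_share))
```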
What does this mean in practice?
In terms of the validity of these predictions, the relationships I have identified raise some pretty big questions. While it would theoretically be possible for all parties to perform relatively poorly in their best seats and well in their worst seats, this seems extremely unlikely. On top of this, there are so many constituency predictions which, on the face of it, seem wildly unrealistic that there must be some underlying issue.
One prominent example is Brighton Pavilion. Here, despite the Greens more than doubling their support nationwide in the FocalData model, Caroline Lucas is projected to have her majority cut by 15 points, from 34% to 19%. Although individual constituencies do not always move in the same direction as the country, what reason is there to think Brighton Pavilion would swing 15 points against the Greens during the party’s best election ever? Moreover, what demographic or local variables in the MRP model could cause this to happen?
Another is Twickenham, a Liberal Democrat stronghold. Here the Liberal Democrat vote is projected to fall by 24 points, while the party’s vote nationwide falls by only 2.9 points. This would be a bigger decrease for the Liberal Democrats in Twickenham than in their catastrophic 2015 defeat. Part of the reason the FocalData model produced only two Liberal Democrat seats is similarly huge declines across the seats they won in 2019. Meanwhile, the Liberal Democrat vote stays almost wholly intact across much of the rest of the country.
There are many more examples of huge swings, and these swings do not appear to correlate well with any demographic or political indicators.
Why might this have happened?
Why this might have happened is a question I have been stuck on. I am not an expert on MRP, although I have briefly looked at the method in the past. My original instinct was that the model used to predict voting intention was underfit. With a sample of 22,000 (a fifth of the sample in YouGov’s final MRP model in 2019), the average constituency had just 34 respondents. It might simply be that the model was not picking up constituency-level variation, and was flattening all constituencies towards the national picture (thereby making Labour seats less Labour, Conservative seats less Conservative, and so on). This may well be part of the story.
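To picture what that flattening would look like: a multilevel model partially pools each constituency’s estimate towards the national average, with a weight that depends on the local sample size. A toy sketch of the standard normal-normal shrinkage formula (all variance numbers invented for illustration):

```python
def pooled_estimate(local_mean, national_mean, n, var_within, var_between):
    """Partial pooling: shrink a local estimate towards the national mean.

    The weight on the local data grows with the local sample size n and
    with how much constituencies genuinely differ (var_between), and falls
    as individual responses get noisier (var_within).
    """
    w = var_between / (var_between + var_within / n)
    return w * local_mean + (1 - w) * national_mean

# With ~34 respondents per seat, a constituency polling 60% for a party is
# pulled a long way towards a 33% national figure (hypothetical numbers).
print(pooled_estimate(0.60, 0.33, n=34, var_within=0.25, var_between=0.01))
# ≈ 0.49, i.e. a 60% seat is reported as a ~49% seat
```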
However, if the model were underfit, one would also expect examples of large swings in the opposite of the predicted direction (i.e. there should be areas where demography overstates a party, as well as areas where it understates one). In fact, swing by constituency is extremely well predicted by previous vote share (a residual standard error of just 2.234), to the extent that it almost looks as if someone multiplied each constituency’s 2019 result by some factor, plus some random noise, to arrive at these predictions.
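That “factor plus noise” observation is easy to check: fit a straight line of predicted share on 2019 share and look at how small the leftover residuals are. A sketch, with synthetic arrays in place of the real data:

```python
import numpy as np
from scipy.stats import linregress

def residual_se(share_2019, share_mrp):
    """Residual standard error of a straight-line fit of prediction on 2019 share."""
    x = np.asarray(share_2019)
    y = np.asarray(share_mrp)
    fit = linregress(x, y)
    resid = y - (fit.intercept + fit.slope * x)
    return np.sqrt(np.sum(resid ** 2) / (len(x) - 2))  # n - 2 degrees of freedom

# If predictions really are "2019 share times a factor, plus a little noise",
# the residual standard error comes out close to the noise level.
rng = np.random.default_rng(3)
share_2019 = rng.uniform(5, 60, 400)
share_mrp = 0.6 * share_2019 + 5 + rng.normal(0, 2.2, 400)
print(residual_se(share_2019, share_mrp))  # ≈ 2.2
```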
Given this lack of variation, it might instead be that the model is overfit, though without the full methodology it is difficult to tell how. One approach which could have caused this is if FocalData used some form of auxiliary model to estimate the distribution of individual-level 2019 votes across demographic groups – that is, if they modelled current voting intention on top of another model of 2019 votes.
(EDIT: my conversation with FocalData’s CEO here suggests this might be the case)
This auxiliary model could mean that the variables predicting whether individuals change their vote are dominated by their 2019 vote, leading the model to predict that similar proportions of each party’s voters switch in every demographic group. For example, if the model predicted that 10% of Labour voters would vote for someone else, that would decrease the Labour vote by 0.5 points somewhere the party won 5% in 2019, but by 8 points somewhere it won 80% of the vote. This would explain why vote share change is so highly correlated with 2019 vote share.
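That arithmetic generalises: a constant churn rate applied to a party’s 2019 voters everywhere makes the predicted loss mechanically proportional to the 2019 share. A short demonstration:

```python
import numpy as np

rng = np.random.default_rng(4)
share_2019 = rng.uniform(5, 80, 400)  # a party's 2019 share per seat, in points

# Suppose 10% of the party's 2019 voters defect, in every seat alike.
predicted = share_2019 * (1 - 0.10)
change = predicted - share_2019  # exactly -0.1 * share_2019

# A 5% seat loses 0.5 points, an 80% seat loses 8 points, and the change
# correlates perfectly (negatively) with the 2019 share.
print(np.corrcoef(share_2019, change)[0, 1])  # ≈ -1.0
```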
In any case, since I started writing this post, I have noticed that The Guardian and The Daily Mail have also picked up the story on the projection. Clearly, the MRP label and the relatively large sample size give the model a credence not afforded to most conventional polls. In that case, it also ought to be carefully scrutinised.