Ask a meteorologist what the weather will be like in two weeks and they’ll simply shrug their shoulders and say it’s impossible to know. Forecasts could be made – based on wider atmospheric patterns, historical trends, satellite monitoring and so on – but the range of possibilities and margins of uncertainty will be unhelpfully wide due to an absence of hard data.
But as the date in question draws closer, bringing with it clearer and more reliable information, those margins narrow, allowing the experts to say with more confidence whether it will rain, snow or shine in the days to come.
Professor Mike Tildesley, one of the government’s many scientific advisers, uses the analogy to describe the complexities of modelling Covid-19. “In 10 days’ time, you might model that there’s a 10 per cent chance it will rain or be sunny, and only a 1 per cent chance it will snow. But of course, when you actually get to that day, only one of these things is going to happen.”
When it came to Omicron, and given the limited data that was available at the beginning of the wave, many scenarios for what might happen over Christmas were modelled by the experts. These varied enormously in scope and their actual likelihood of coming to pass. But none were definitive or predictive in nature. “A lot of people seem to think we should be dealing in certainties, but that’s not possible,” says Prof Tildesley.
So what did these scenarios outline? The very worst-case projection – the one “the newspapers tend to emphasise,” says Professor John Edmunds, a fellow government modeller – showed that 6,000 people could die a day at the peak of the Omicron wave, with tens of thousands of daily hospitalisations, if restrictions weren’t imposed. The best-case scenario showed up to 400 deaths a day and just under 3,000 daily hospitalisations.
In the same way that we accept the meteorological uncertainties at play when attempting to plan for the weekend after next, this broad range of possibilities was used by the government to prepare accordingly for the winter wave and decide whether it needed to reach for a small umbrella or invest in a new set of waterproofs.
On this occasion, No 10 went for the light-touch approach – one that thankfully paid off. Meanwhile, the worst-case scenario modelled by the experts – “which might only have had a 1 per cent chance of happening,” says Prof Tildesley – never materialised.
Yet along with other members of the Scientific Pandemic Influenza Group on Modelling (SPI-M), Prof Edmunds and Prof Tildesley have been heavily criticised for the models they helped to produce for ministers in early December, when fears were escalating over Omicron.
The government – and the public – wanted to know how many people would die, how many infections there were going to be, how high daily hospitalisations would surge, what restrictions needed to be imposed. A far cry from forecasting the weather.
The challenge at hand was complicated by a lack of data around the severity of Omicron and its immune-evading abilities. Without these details, the range of scenarios produced by the modelling was seemingly pushed to the very limits of reality.
“It was only shortly before Christmas that we had decent data that Omicron was actually less severe,” says Prof Edmunds. But by that point, the headlines had been set, with the critics already pointing out – somewhat obviously – that 6,000 people a day were not in fact dying from Omicron.
Another major misunderstanding is that the models were predictions – which is certainly not the case, says Professor Graham Medley, the chair of SPI-M. “We are illustrating possibilities for government,” he says. “The models are scenarios to help the decision-makers understand the implication of different policy choices.”
He also makes the point that by the time SPI-M’s modelling is released into the public domain, it’s already out of date. “We’re then left trying to explain something which is old news really, in the sense that it’s already been reviewed by the government and shaped policymaking.”
The experts will admit, though, that theirs is far from a perfect art – quite the opposite. All members of SPI-M who spoke with The Independent admitted that human behaviour is the most technically challenging part of the modelling puzzle.
When it comes to a rise in infections and the prospect of new restrictions – as we saw in December with Omicron – there are extreme levels of variation in how people respond.
Many will withdraw from society entirely, even before the government has recommended doing so. Others, believing themselves to be vaccinated to the hilt, will carry on as normal, until the law says otherwise. And some will ignore all forms of guidance regardless.
“It’s very difficult to know how these reactions vary in different parts of society as well,” says Prof Tildesley. “People in different age groups and communities might respond in different ways. So that’s something that’s challenging to incorporate into the models.”
It’s a reason why human behaviour isn’t integrated into many of the SPI-M models – different university teams produce different scenarios, which feed into the data escalated to ministers – and further explains the wide range of possibilities that were presented to ministers last month.
Without this piece of the puzzle, critics believe, the scenarios will always be wide of the mark – and there’s certainly an argument for incorporating it into the modelling, especially when there are a slew of international studies and mobility data from the last two years that demonstrate how behaviour has changed throughout the pandemic.
Indeed, at a recent Sage meeting, it was acknowledged that the anticipated increase in hospitalisations “has not been seen so far.” Scientists theorised that “this may be due to higher vaccine levels of protection against hospitalisation, slower waning of vaccine protection, or the impact of precautionary behaviours amongst the most vulnerable and those around them.”
However, this particular shortcoming does not indicate that the modelling is useless and untrustworthy. Instead, it is a reminder that the modelling is not omniscient. The experts would say as much themselves. “It’s not an exact science,” says Prof Tildesley.
But at a time of mass polarisation in society, the perceived failures of the Covid modelling have been exploited by those seeking to pick holes in the UK’s pandemic response and demand retribution for the restrictions we’ve faced.
One not particularly bright MP went as far as to invoke the words of Winston Churchill in his criticism of the modelling, saying “that never before has so much harm been done to so many by so few based on so little, questionable, potentially flawed data”.
One SPI-M member, who asked to remain anonymous, summed it up well. “In the end, it’s all political. And in a political argument, tarnishing one of the pieces of evidence that seems to be supporting the thing you don’t want to happen, it’s fair game.” Sadly, it’s not a game the scientists have ever sought to play.