
Creating Curves: The Uses and Abuses of Predictive Mathematical Modelling

Saltdean Beach, March 2020
How many people read the average academic journal paper? Poke about the internet and you’ll get a few answers: that the average paper is read by just 10 people, and that half are read only by their authors and reviewers. These stats have questionable sources, but it’s likely that academic and fiction publishing follow similar patterns – there’s a handful of Harry Potters and a very large amount of dusty paper.

Research funders have been concerned about this for some time. They don’t want to hand out six-figure research grants only to receive a few journal papers and a handful of receipts for academic conferences in sunny climes. In response they dreamed up the ‘impact’ agenda, encouraging academics to strike out beyond their ivory towers and engage with public policy and the business world. Most research reporting now requires you to list all of the wonderful things you’ve done to create impact.

One group who probably won’t need to write an impact statement is the Imperial College COVID-19 Response Team. If a pushy bureaucrat insisted, they could probably get away with a single line saying, “What were you up to over Easter 2020?”. Because the work behind their snappily titled paper ‘Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand’ is the reason why everyone is currently stuck in their homes.

This is the famous paper of 16th March that estimated the UK would experience over 250,000 coronavirus deaths unless immediate action was taken to suppress the epidemic. Faced with such a death toll, the Government turned away from a policy of mitigation (or herd immunity, as some put it) to a full-on lockdown.

The Imperial paper is an epidemiological modelling study, using complex maths to try and predict how the epidemic will spread. The model used is not the only one out there; many of the reports you’ll have read on coronavirus will be based on a model of one type or another. The models give us the predictions we’re now so used to hearing: how many people may get infected by the virus, how many may die and when the lockdown could conceivably end.

At its heart, modelling an epidemic relies on some pretty simple concepts. You have a number of people who are susceptible to a disease, they get infected at a certain rate and (if they don’t die) they recover and become immune. Producing a model of this gives the now famous curve that we’re all trying to flatten. You can produce a rudimentary curve from these concepts in five minutes with a spreadsheet – see my effort below.
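If spreadsheets aren’t your thing, a few lines of Python will produce the same shape. This is a minimal sketch of the classic susceptible–infected–recovered (SIR) setup – every parameter is invented for illustration, not calibrated to COVID-19 or anything else:

```python
# Minimal SIR model - invented parameters, for illustration only
population = 1_000_000
susceptible, infected, recovered = population - 10, 10.0, 0.0
beta, gamma = 0.3, 0.1           # daily infection and recovery rates (made up)

for day in range(200):
    new_infections = beta * susceptible * infected / population
    new_recoveries = gamma * infected
    susceptible -= new_infections
    infected += new_infections - new_recoveries
    recovered += new_recoveries
    print(day, round(infected))  # 'infected' traces out the famous curve
```

Plot the infected column and you get the familiar hump: a steep rise, a peak, then a decline as immunity builds up.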
Beneath this there are two main model categories:
  • Equation-based models use complex maths to predict how the epidemic will propagate across a population, say that of the UK. 
  • Agent-based models try to model the behaviour of small groups of people or individuals. Equations that predict the behaviour of each ‘agent’ are produced, the modeller hits the go button and the agents all run around interacting with (and infecting) each other for the time period of the model – a toy flavour of this is sketched just after this list.
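To give a flavour of the agent-based approach (nothing like the Imperial model, and with every number invented), here each ‘agent’ meets a handful of random strangers a day and recovers after a fixed illness:

```python
import random

# Toy agent-based epidemic - all parameters invented for illustration
random.seed(1)
N, CONTACTS, P_INFECT, DAYS_ILL = 2_000, 5, 0.05, 10

state = ['S'] * N                        # S/I/R status per agent
days_ill = [0] * N
for i in random.sample(range(N), 10):    # seed ten initial infections
    state[i] = 'I'

for day in range(100):
    infectious = [i for i in range(N) if state[i] == 'I']
    for i in infectious:
        for j in random.choices(range(N), k=CONTACTS):  # random encounters
            if state[j] == 'S' and random.random() < P_INFECT:
                state[j] = 'I'
    for i in infectious:                 # existing cases age and recover
        days_ill[i] += 1
        if days_ill[i] >= DAYS_ILL:
            state[i] = 'R'
    print(day, state.count('I'))         # infections rise, peak and fall
```

Real agent-based models layer on households, schools, workplaces and travel patterns – which is where the serious computing power goes.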
Both model types suffer from common drawbacks. The first is that no model is perfect. The underlying ‘physics’ of the model will be a simplification of known reality, which may introduce errors. Some key factors influencing the results will be unknown to the modellers – these range from Donald Rumsfeld’s ‘known unknowns’ to full-on Outside Context Problems.

The second is that a model is only as good as the data fed into it; the old ‘rubbish in, rubbish out’ problem. At the present point in the epidemic the data is sketchy at best – we don’t really know how many people have the virus or even how many are dying where it is the principal cause of death.

This was the point made by an Oxford University study in late March. Widely reported under the headline ‘Coronavirus may have infected half of UK population’, what the study actually said was that the number of deaths in the UK and Italy could be explained by a range of possible scenarios. These ranged from ones where not many people had the virus and lots of them died, to ones where a lot of people had the virus but very few of them died.

Even now, several weeks into the global epidemic, models still output very different results depending on what assumptions the modellers feed into them. The Conversation – an excellent site where scientists write for normal folk – has several articles on coronavirus modelling. One of these noted that ‘in none of the predictions for the UK do we see fewer than 14,000 fatalities or more than 260,000 (if no interventions were taken)’. This is intended as a compliment to the models, but quite obviously the lower number is a national tragedy and the upper a national disaster unparalleled in peacetime history.

With so much variation you might wonder why the UK and most other countries are relying on models to guide their response. The reason is quite simple. Say you are Boris Johnson (sorry) and you can hear from one of these two experts:
  • Expert A says that, based on her longstanding experience of epidemics, between 100,000 and 300,000 people are likely to die but that closing schools, shops and businesses will save many lives.
  • Expert B says that, based on a cutting-edge model, their central case is that 250,000 people will die. Closing schools, shops and businesses will reduce that to 30,000. If you have any other ideas they can run them through the model for you.
One provides an aura of accuracy and technological shininess, whilst the other seems to rely on woolly human intuition. Politicians understandably want hard numbers to guide policy decisions that affect millions of people and entail enormous expenditure. This is why similar predictive mathematical models now often form the primary evidence driving decision making in both the public and private sectors – when you're trying to predict the future hard numbers look good.

The only trouble is that the familiar problems of imperfect models and dodgy input data raise their heads in all but the simplest situations. Convinced? Then you can skip to the end of this essay. If you’re not, here are a few examples.

Air Quality

In the early 2000s the UK Government confidently predicted that poor air quality was going to be consigned to the history books. Based on a small number of monitoring sites, and knowledge of incoming tighter standards for vehicle emissions, models suggested that EU air quality limits would be met without the need for any further actions.

As most people know, this turned out to be wide of the mark and many UK towns and cities still suffer from unacceptable levels of air pollution. There are two reasons behind this. The first was spotlighted by ‘Dieselgate’ – vehicle manufacturers were building to pass emissions tests, not for real-world performance. Outside the lab, emissions were much higher than the laboratory test cycle suggested.

The second was that the real world turned out to be a lot different from model land. National models simply can’t take into account the complex reality of traffic flow and building layout in every town and city. A requirement in the Environment Act 1995 meant that Local Authorities went looking for air pollution hot spots, often armed with distinctly low-tech tools such as diffusion tubes. To many people’s surprise, they found these hot spots everywhere, frequently contradicting the national data reported back to the EU.

Whilst the dodgy data has since been refined, Government models continue to contradict observed reality. To choose one example close to home: based on one monitoring station in a park and a whole lot of modelling, the Government’s most recent report says that nitrogen dioxide levels in my home town of Brighton are A-OK. This must come as some surprise to the local authority, who have declared the whole city centre an ‘Air Quality Management Area’ for this pollutant, based on their own extensive monitoring and local modelling.

The Economy

OK, here’s an easy one. What were the three most significant economic events of the past 20 years, and how many economic models spotted them?

The answer is the 2001 terrorist attacks on the Twin Towers, the 2008 financial crisis and our current coronavirus pandemic. These events, and the responses to them from Governments all around the world, have largely driven the direction of the global economy. And how many models spotted them in advance? Well, none of them, obviously.

The reason is pretty simple when you think about it. If we were able to predict these events we’d do something about them: the terrorist plot would have been crushed, risky financial assets wound down and the pandemic nipped in the bud. They cannot be put into a predictive model as they’re unpredictable events that exist outside of the scope of the model.

Yet these models are used to guide action in both the public and private sectors. When the Chancellor confidently suggests he’s got room to increase spending in the future, he’s quoting from a model. When models can flip from feast to famine in an instant (such as the latest International Monetary Fund reports pre- and post-pandemic), you have to wonder how much better they are for planning purposes than, say, writing last year’s economic growth figure on the back of an envelope.

High Speed 2

In the credit crunch film ‘Margin Call’ a sacked banker tells his colleague that he once worked as an engineer and built a bridge. He reels off a series of figures on the amount of time the bridge saved people, concluding he saved them thousands of hours driving the long way round. The scene’s purpose is to show that the banker’s mathematical genius may have been better employed doing something socially useful, rather than pushing dodgy mortgage-based securities.

The banker’s speech could have gone on a bit longer though, expanding into how much money those time savings were worth compared to the cost of building the bridge. Because these days we don’t build transport infrastructure on an ‘if we build it, they will come’ basis. Instead, investment is driven by the economic case. Here cost-benefit analysis is used to compare the economic (and other) benefits with the cost of building the thing. For the rather expensive High Speed 2 (HS2) line this means working out the economic benefits of a fast, high-capacity railway line.
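To make that concrete, here’s a toy version of the sums – every figure is invented, and this is emphatically not the HS2 appraisal. Each year’s time savings is given a cash value, future years are discounted back to today’s money, and the total is compared with the build cost. The 3.5% discount rate is HM Treasury’s standard Green Book figure; everything else is plucked from the air:

```python
# Toy transport cost-benefit sums - invented figures, not the HS2 appraisal
build_cost = 50e9          # construction cost in pounds (invented)
annual_benefit = 2e9       # yearly value of time savings (invented)
discount_rate = 0.035      # HM Treasury Green Book headline rate
years = 60                 # appraisal period beyond opening

# Net present value: each future year's benefit is worth less today
npv_benefits = sum(annual_benefit / (1 + discount_rate) ** t
                   for t in range(1, years + 1))
print(f"Discounted benefits: £{npv_benefits / 1e9:.1f}bn")
print(f"Benefit-cost ratio:  {npv_benefits / build_cost:.2f}")
```

Notice how much of the answer hangs on guesses about benefits decades away – which is exactly the problem described below.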

The economic case for HS2 is based on an enormously long appraisal period, covering 60 years beyond the anticipated opening date of the railway (which was thought to be at least a decade away when the original economic case was put together). This is a little like a transport planner in the dying days of World War 2 trying to anticipate how many people would be using a railway today. Even with modern computers they’d have struggled to accurately estimate passenger numbers.

Passenger numbers are only one part of the equation though, as the economists need to calculate the (costed) benefit of the time everyone saves by using the speedy train. To go back to our 1940s transport planner, they’d probably assume that business travellers could do very little work on the train, as laptops and mobile phones weren’t on anyone’s radar back then. The HS2 business case also can’t take account of any technologies that arise between now and the end of the appraisal period. An additional criticism is that it barely takes account of current technologies and assumes that time on the train is (mostly) dead time.

Most of our railways were built in Victorian times, with railway companies and their backers taking a gamble that they’d be popular and make a good return. Many of those lines remain wonderfully useful over 100 years later, but others were either bad bets from the off or failed as cars and lorries exploded in popularity. For all the high-tech sheen of cost-benefit analysis HS2 represents a similar gamble today.

Climate Change

Climate change is one of the biggest threats facing the world. Even with current national commitments under the Paris Agreement, average temperature increases will range from 2.9°C to 3.4°C above pre-industrial times by 2100. To keep that rise below 1.5°C, carbon emissions need to be cut by 45% by 2030, and reach net zero by 2050.

Those figures come from a 2019 UN Environment Programme (UNEP) report, and are pretty much in line with most expectations of climate change. The numbers themselves come, of course, from a predictive model.

The most commonly used climate models are General Circulation Models (GCMs). They divide the atmosphere, oceans and cryosphere (ice sheets) up into a large number of three-dimensional boxes. Equations are used to simulate the processes in each box and the transfers between their neighbours. The modeller presses fast forward, and the model predicts where we’ll be decades in the future.
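To give a flavour of the ‘boxes and transfers’ idea – nothing remotely like a real GCM, and with invented numbers – here’s a one-dimensional toy where heat simply leaks between neighbouring boxes each time step:

```python
# Toy 'boxes and transfers' sketch - 1-D heat diffusion, not a real GCM
boxes = [30.0, 20.0, 10.0, 0.0, -10.0]  # temperatures, equator to pole (invented)
transfer = 0.1                          # fraction exchanged with each neighbour

for step in range(100):
    flows = [transfer * (boxes[i] - boxes[i + 1]) for i in range(len(boxes) - 1)]
    for i, f in enumerate(flows):       # heat flows from warmer box to cooler
        boxes[i] -= f
        boxes[i + 1] += f

print([round(t, 1) for t in boxes])     # temperatures even out over time
```

A real GCM does the same sort of neighbour-to-neighbour bookkeeping, but in three dimensions, with radiation, moisture, winds and ocean currents, across millions of boxes.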

Similar GCMs are also used for weather forecasting. As you’ll know from experience, weather models are now wonderfully accurate for the next day or two, but you wouldn’t use one to plan a BBQ two weeks in the future. This, apparently, is because tiny fluctuations in the atmosphere cause forecasts to veer away from reality after a few days – the so-called butterfly effect. Over the timescales used in climate models these fluctuations cancel each other out. Daily changes in rainfall, temperature and the like may be unpredictable, but the long-term trend in climate can be seen.

To ensure they’re accurate, climate models are calibrated and tested by getting them to model the climate of the past – feed in the data for (say) 1970 and see how the model shapes up against the actual measured data for the past 50 years. We can also check them against their peers, and sure enough most model predictions are in the same ballpark. We’ve also got to the point where predictions from older climate models can be checked against reality; so far they seem to have performed pretty well.

There are two criticisms you can level at climate models. The first is that climate modellers may have a tendency to cluster around the same result. If you develop a new climate model that comes to a significantly different result, you’ll receive an awful lot more scrutiny than if your model agreed with everyone else. Not all of this will be particularly friendly, as you’ll be suggesting that a number of heavy hitters have got their life’s work a bit wrong.

Modern models benefit from a much improved understanding of atmospheric processes and vastly greater computing power than their predecessors from 30-40 years ago, yet tend to come to pretty much the same conclusions as their forefathers. Climate modellers would say that’s because they nailed the physics early on, whilst sceptics would say that modellers are clustering around the ‘right’ result.

The second criticism is that the models may have got it right so far, but there’s no guarantee they will in the future. Factors that modellers are either unaware of, or aware of but can’t replicate in their models, may come and trip them up. As we’ve seen, this is certainly a feature of economic models – they predict the future reasonably well for a while, then become spectacularly wrong when an unexpected event strikes from outside of model land. As financial advisers are keen on saying, past performance gives no guarantee of future returns (hello, Neil Woodford).

I’m not suggesting here that the results of climate models should be discounted, but that they may provide false certainty that a certain level of emissions will lead to a defined amount of warming by a set date. The science of climate change is robust: we have a theory backed by observational data, and increasingly we’re able to see the impacts. The actual impact of continuing on our present course will be somewhere between ‘warmer but tolerable’ and Armageddon – the mere possibility of the more extreme scenarios should be enough to inspire action.

A number of studies have also attempted to meld together climate and economic models, most notably the lauded 2006 Stern Review on the Economics of Climate Change. These have come up with some impressive-sounding results, particularly on the cost of inaction versus the cost of action. However, as you might imagine, combining two models with questionable predictive power may not lead to the most accurate results – the uncertainty simply multiplies.

Where the wheels really come off this type of approach, though, is that in long-term economic models everything plays second fiddle to your anticipated level of economic growth. Economic models usually assume there are no natural limits to growth and that the global economy will grow exponentially over the modelling period. The modeller’s assumption of how quickly the economy grows (2% a year? 3%? 4%?) has a far greater influence on the predicted wealth of the world in 50 years’ time than even huge global threats such as unabated climate change.
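The arithmetic behind that claim is simple compounding, and worth seeing:

```python
# Compound a growth assumption over 50 years
for rate in (0.02, 0.03, 0.04):
    factor = (1 + rate) ** 50
    print(f"{rate:.0%} a year -> economy {factor:.1f}x bigger in 50 years")
# Prints roughly 2.7x, 4.4x and 7.1x - the growth assumption alone
# nearly triples the answer before climate damage even enters the model
```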

All Models Are Wrong, Some Are Useful

This essay isn’t trying to say that all predictive models are useless – they’re not. The old saying, usually attributed to the statistician George Box, is that all models are wrong, but some are useful. Since the phrase was coined we’ve seen tremendous advances in models and the computing power they rely upon, but the same rule holds true. No model of a complex system is ever going to be much use for accurate long-term predictions, but many are wonderfully useful in three main areas.

The first is short-term predictions. You don’t have to invoke the memory of Michael Fish to understand that weather forecasts have become far more accurate in the modern era. Forecasts are both more accurate and more localised, so if they predict it’ll rain tomorrow on your village fete they’ll more than likely be right. The drivers behind this improvement are satellite data and supercomputer models, although it’s worth noting that whilst computing power has increased exponentially, the improvement in forecasts has been more linear – accuracy has improved by roughly a day per decade (i.e. a three-day forecast today is as accurate as a two-day one ten years ago).

Second comes ‘what if’ analysis. In economics, for example, you might want to estimate the impact of increasing or decreasing a particular tax on economic growth and employment. A model gives you a way to do this, showing what the impact might be whilst highlighting unintended consequences elsewhere in the economy. The model might not be able to predict the future, but it can give an idea of what might change from business as usual, at least under a scenario where the model’s basic assumptions hold true.

Another, simpler, way of demonstrating this is retirement savings. A new graduate can model their income in retirement under a number of scenarios. The model won’t provide an accurate income prediction, as there are too many unknowns involved: how their career will progress, whether they’ll be unemployed for a significant period, whether they’ll have a family, and so on. But the model will tell them that they’re far more likely to be subsisting on a diet of porridge and baked beans in retirement if they fail to save, and far more likely to be comfortably off if they start paying into a pension at an early stage.
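A minimal sketch of that kind of scenario model, with every figure invented (salary, returns and contribution rates plucked from the air, and certainly not financial advice):

```python
# Toy pension scenarios - invented figures, for illustration only
salary, real_return, years = 30_000, 0.04, 40   # assumptions, not forecasts

for contribution_rate in (0.0, 0.05, 0.10, 0.15):
    pot = 0.0
    for _ in range(years):
        pot = pot * (1 + real_return) + salary * contribution_rate
    print(f"Save {contribution_rate:.0%} of salary -> pot of roughly £{pot:,.0f}")
```

The absolute numbers are meaningless, but the gap between saving nothing and saving something is the real output of the model.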

Finally, creating a model helps us to understand a system. Say you were a clever engineer but had no idea how an internal combustion engine works. You were allowed to observe a car in action (but no peeking below the bonnet). You’d see petrol being pumped in, the wheels turning and the exhaust burbling. You’d probably come up with a number of different theories of how the engine worked, most of which would be incorrect. Making a model of your engine(s) would help you test those theories and really understand what was going on under the bonnet.

Similarly, models help us understand complex systems such as climate. When scientists come up with new theories around climate systems, models can be used to see if they’re credible. As with ‘what if’ analysis, the model isn’t predicting the future but is helping us better understand what has, and could, happen.

A Diversity of Views

Nassim Nicholas Taleb’s 2001 book ‘Fooled by Randomness’ imagines a conversation between a mathematician and a gangster in a casino. A die has been rolled 20 times, delivering a 6 on every occasion, and the pair are asked whether the next roll will also be a 6. The mathematician says that each roll is independent of the last, so the chances are 1 in 6. The gangster replies that it will, without a shadow of a doubt, be a 6, as the die is very obviously loaded. Sometimes it’s better to get opinions from a range of different viewpoints.

This essay isn’t suggesting we scrap the use of predictive modelling in policy making, but that modellers should be one voice amongst a diversity of expert opinions. Those opinions should not be relegated to a second tier because an expert lacks a shiny model that outputs hard numbers. For policy or investment decisions that might take decades to bear fruit we should call them what they are – an informed punt. We’re really not that good at predicting the future, and models do not help if they provide a false sense of certainty.

Finally, models should be continuously verified. If reality starts to veer away from the model (as it did with air pollution) we need to understand why as soon as possible. This process should be equally open to the possibility that the model is wrong or that the observational data is wrong, rather than assuming one or the other from the outset.

For climate change this might involve abandoning the predictive model-based notion that 430 parts per million of CO2 in the atmosphere will create a 2°C rise in temperature, and instead taking a risk-based approach. A committee of experts, including modellers, could suggest pathways of action carrying high, medium and low risks of significant climate damage. Society could then decide what level of expenditure it was willing to make in view of the anticipated risk.

Will Anything Change?

The 2008 financial crisis shone a harsh light on the complex models used to underpin key parts of the financial system. Models suggested that credit-crunch-style events should only happen once every few thousand years, whilst in reality the house of cards collapsed rather sooner. At the same time, many experts suggested that significant unknown (and therefore un-modellable) ‘Black Swan’ events tend to come along rather more frequently than we might think.

The lesson we seemed to take away from this, though, was that those particular models were wrong – not that all long-term predictive models of complex systems are inherently wrong. Policy makers and business leaders have remained wedded to mathematical modelling for the same reason they always have been: it provides an aura of scientific certainty in an uncertain world.

Coronavirus is shining a similar light on the world of predictive modelling, this time on the epidemiological models predicting the health impact of the virus and the economic models forecasting the financial damage. As ever, those models will be wrong to varying degrees.

The degree of attention they’ll receive when we inevitably pick over the ashes will depend upon how wrong they are. If the Imperial model that sent us into lockdown differs significantly from reality then expect a lot of questions to be asked.

The chances of such a scenario sparking a significant shift away from predictive mathematical modelling of complex systems, though, are low. As with a virus, once decision makers are infected with the artificial certainty that predictive models provide, it may be pretty hard to shift.
