Black swans could be the new white

Just how accurate are your predictive marketing models? When it comes to deciding where to deploy hard-won budgets, it is increasingly – and encouragingly – the case that some form of forecasting gets done first. That often means running “what-if” scenarios through the customer insight and analytics team.

You would imagine, therefore, that decisions about how to run the UK economy would also be subject to the same process. They are – but here is the rub. The chances of the Government’s forecasts for GDP being accurate to within 1 per cent are just 40 per cent.

When you think that a 1 per cent variance in output could be the difference between growth and recession, that seems remarkable. Even more so when decisions that affect every one of us are being made at odds of 3:2 against being right. It makes you realise just how bold politicians are being when they stand up to explain how they think the world is going to look.

The economy is a tremendously complex thing to try to forecast, of course, with huge numbers of dependent and independent variables. External factors can also blow a prediction completely off course. How many public sector statisticians factored the failure of Lehman Brothers into their 2008 models?

No wonder the concept of the “black swan event” has gained such traction across the forecasting and wider business community. There have been so many previously unforeseeable events over the last two years that producing any model with real confidence has become challenging.

So what can analytics teams do to cope with this level of uncertainty and still support their lines of business with an evidential basis for their decisions? The first is to feed as much data as possible into every model built, even if it is a response prediction for a small-scale email campaign. Processing power is no longer the constraint, and a larger sample generally makes for a more reliable model.
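
As a rough illustration of that point, the sketch below (using synthetic data and scikit-learn, with every figure invented for the example rather than drawn from any real campaign) fits the same response model on progressively larger training samples and scores each one against a fixed holdout set; the gain from extra rows is usually plain to see.

```python
# A minimal sketch, assuming synthetic data and scikit-learn: holdout performance
# tends to improve and stabilise as the training sample grows.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated email-campaign data: 20 customer attributes, a rare binary response flag.
X, y = make_classification(n_samples=50_000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (500, 5_000, 35_000):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"training rows: {n:>6}  holdout AUC: {auc:.3f}")
```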

The second is to build more models as part of answering each question. A business decision is rarely a single entity: it can often be broken down into multiple components, each of which can be modelled separately and the resulting models interlinked.
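
A minimal sketch of that decomposition might look like the following, assuming a made-up email funnel of open, click and conversion stages with invented customer attributes (none of this comes from a real dataset). Each stage gets its own model, trained only on the contacts who reached it, and the stage probabilities are then multiplied together to answer the original question.

```python
# A sketch of breaking one decision ("will this contact convert?") into
# linked component models. Column names and stages are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
customers = pd.DataFrame({
    "recency_days": rng.integers(1, 365, n),
    "past_orders": rng.poisson(2, n),
})

# Simulated outcomes for each funnel stage (in practice these come from history).
opened = (rng.random(n) < 0.3).astype(int)
clicked = opened * (rng.random(n) < 0.4).astype(int)
converted = clicked * (rng.random(n) < 0.2).astype(int)

X = customers.values

# One model per stage, each trained only on the population that reached that stage.
open_model = LogisticRegression().fit(X, opened)
click_model = LogisticRegression().fit(X[opened == 1], clicked[opened == 1])
conv_model = LogisticRegression().fit(X[clicked == 1], converted[clicked == 1])

# Interlink the components: overall conversion probability per customer is the
# product of P(open), P(click | open) and P(convert | click).
p_convert = (open_model.predict_proba(X)[:, 1]
             * click_model.predict_proba(X)[:, 1]
             * conv_model.predict_proba(X)[:, 1])
print("expected conversions if everyone is mailed:", p_convert.sum().round(1))
```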

The third is more of a challenge to the usual approach. Data is often normalised before being fed into a model, with outliers stripped away. What we have now realised is that those data points at the margins may be the most powerful indicators of a step change ahead.
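
One way of acting on that, sketched below with invented spend figures, is to keep every record and carry an explicit outlier flag as an extra feature, rather than trimming everything beyond three standard deviations out of the data altogether.

```python
# A sketch of the alternative to stripping outliers: flag them and keep them,
# so a model can learn from the margins. The spend figures are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
spend = pd.Series(np.concatenate([rng.normal(50, 10, 995),    # typical customers
                                  rng.normal(400, 50, 5)]))   # a handful at the margin

z = (spend - spend.mean()) / spend.std()

# Common practice: drop anything beyond 3 standard deviations.
trimmed = spend[z.abs() <= 3]

# Alternative: keep every row and carry the outlier signal as a feature.
features = pd.DataFrame({
    "spend": spend,
    "is_outlier": (z.abs() > 3).astype(int),
})
print(f"rows dropped by trimming: {len(spend) - len(trimmed)}")
print(f"rows kept with an outlier flag: {len(features)}")
```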

Like the chaos-theory notion of a butterfly’s wings setting off a tornado on the other side of the world, data from recent history that sits outside expected norms could indicate that the prevailing paradigm is flawed and needs to be rethought. Big insights like that are what analysts live to produce.