It’s a truth universally recognised that what gets measured gets managed. But when we nod our heads sagely in agreement, what exactly are we signing up to?
Measures and measurement are wonderful things. So wonderful that they obsess many modern organisations to an unhealthy degree, as an Advanced Institute of Management (AIM) Research and ABA Research seminar last week underlined.
We all know, for example, that every measure is an artefact of the processes/definitions used to create it. “Market share” is a classic: how big your market share is depends on how you define the market. One senior executive at a large European packaged goods company recently admitted that his marketers were a decade late waking up to the threat posed by Aldi partly because their retail share figures hadn’t included it: Aldi had been defined as being in a different market. A patient dying on a trolley in an NHS hospital accident and emergency department is not counted as dying in the hospital’s care. So hospitals wanting to reduce their “death in care” measures can simply leave people to die on trolleys.
Also, as Peter Hutton of BrandEnergy points out, every measure expresses its own implicit model of what makes things tick. A speciality retailer measured profitability by store and rewarded store managers on this basis. In doing so, it implicitly accepted that overall profitability was a product of individual store profitability. Result: individual stores from the same company entered into cut-throat competition with each other, including undercutting each other's prices to win the same customer's business.
What we ask measures to do is also important. For example, there’s a strong tendency in organisations to turn performance measures into targets, notes AIM Research’s Professor Andy Neely. The trouble is, most performance measures-turned-targets quickly succumb to Goodhart’s Law: that “any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” (In plain English, “when a measure becomes a target, it ceases to be a good measure.”)
Similarly, says Neely, it's crucial to distinguish between measures used for reporting and measures used for learning. Reporting tells you how well you've done, but the reported number doesn't necessarily give you any insight into how to do things better. Organisations that use measures to learn – to understand how things really tick – end up working very differently to those that use measures to put "pressure" on reality (to use Goodhart's excellent word). Think of that speciality retailer: how many performance-related pay and bonus schemes do you know that have perverse behavioural effects?
Then there’s the psychology of measurement. On the surface, it’s all about objectivity. So why do people get so passionate about measures? Because underneath, they trigger all sorts of primordial emotions, including a sense of being in control, of power (exercising control over others), of security (if I’m surrounded by measures, I won’t get unpleasant surprises) plus high politics: “cover my back” protection, argument winning, and so on.
We shouldn’t forget the intrinsic nature of measurement either. To measure something you have to be able to count it. This means you have to be able to separate it from its surroundings, and each separated part has to be similar in some vital way. This excludes qualities such as beauty and truth and things that are intrinsically connected such as “meaning” and “culture”. If it matters, it probably can’t be measured.
In addition, measuring devices measure only what they are designed to measure. We don’t measure sound by putting it on kitchen scales; we can’t measure human motivation by looking at financial numbers. We humans have five different senses to capture five different dimensions of reality. Many organisations measure just one or two dimensions, say finances and operations. Over-reliance on these measures renders them effectively blind to other dimensions, such as the human factors that underpin much of their performance.
Does your organisation distinguish clearly between which measures are for milestone reporting, targets, learning and for monitoring and control (such as dial-turning)? If not, try it as an exercise. Has it made each measure’s implicit model explicit? Does it recognise and adjust for the limits of both measures and measuring devices? If not, somewhere along the line, its measures are probably causing some damage. How much damage precisely? Well, nobody has measured that. But here’s some anecdotal evidence.
At last week’s seminar, two senior departmental heads from household name companies (one public sector, one private sector) admitted they routinely make up the numbers on some regular reports because these reports are irrelevant to their “real” jobs. If they didn’t, they would have to waste time chasing after the data. Then they would have to waste more time justifying negative variances, proposing remedial action and so on. One manager said he’d been inventing figures for five years in two separate jobs: no one had ever noticed.
How many companies measure the cost of their measurement systems? Recently, Ford estimated its formal reporting procedures cost it $1.2bn (£626m) annually. And an IBM Business Consulting Services survey of marketing managers in 2002 found that 89 per cent of them didn't understand how their measures aligned to business objectives, and that on average they spent 5.4 hours a week on measurement and reporting.
Multiply those 5.4 hours by their hourly salary and you get a feel for what measuring and reporting metrics can cost a company. And for what returns? According to Neely, when it comes to measuring the value of measurement, there is scant empirical evidence that most measurement systems do anything to actually improve organisational performance.
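That multiplication is easy to sketch. The 5.4 hours a week is the survey figure; the salary, headcount and working weeks below are purely illustrative assumptions, not numbers from the survey:

```python
# Back-of-envelope estimate of annual measurement-and-reporting cost.
# Only hours_per_week comes from the 2002 IBM survey; everything else
# is an assumed, illustrative figure.
hours_per_week = 5.4    # IBM survey average per marketing manager
weeks_per_year = 48     # assumed working weeks
hourly_cost = 40        # assumed fully loaded hourly cost, in pounds
managers = 200          # assumed number of managers reporting

annual_cost = hours_per_week * weeks_per_year * hourly_cost * managers
print(f"~£{annual_cost:,.0f} a year")  # ~£2,073,600 on these assumptions
```

Even with modest assumptions, the bill for a few hundred managers quickly runs into millions of pounds a year.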
A more important cost is flawed decision-making. Speaking at the seminar, Alison Bond of ABA Research warned that many companies’ measurement systems had become “parallel universes”. Managers live in these parallel universes, oblivious of the pitfalls just discussed. Invariably, decisions made in this parallel universe only add to the problems faced by operational staff dealing with day-to-day realities. Just look at the frustration created by public sector targets today.
Neely called this Plato’s Cave management – based on Plato’s mythical cave dwellers who, sitting by the fire and seeing shadows dancing on the wall, came to believe that by changing the shape of the shadows they could actually change reality.
In the worst cases, managers fall under the sway of a cult superstition: the superstition of rationality, believing that numbers are more "objective" than other forms of evidence, and that making decisions on the basis of a set of numbers is the most rational way to behave. What's more, they are emotionally committed to this belief even though it's no more rational than sacrificing a chicken and examining its entrails.
Measurement is crucial to learning, which in turn is the basis of better performance. But right now, are your managers sitting in Plato's Cave? Are you using measures to manage? Or are your measures managing you?
Alan Mitchell, email@example.com