Unintended consequences face all of us if we treat data without caution. Research is an obvious example. Conclusions drawn from sampling of a given population can all too easily be skewed, exaggerated or simply only applicable in a specific set of circumstances. Just ask Coca-Cola, the victim of the most famous mis-reading of a research finding ever.
Suppression data has suffered from a problem similar to the Coke v New Coke taste-off, except with the results in reverse. Sometime in the mid-1990s, Royal Mail carried out an exercise in which it mailed a sample of households which it had flagged as goneaways. The mailing got a response rate which any commercial marketing campaign would have found acceptable.
Since then, data users have taken this as evidence of the risk of over-suppression. Yet the Royal Mail research should have told us something very valuable – that before deciding whether to suppress, you first need to decide on the definition of a goneaway and also whether the status of the individual, rather than the household, really matters to you.
Ever since the suppression data market got going in the mid-1990s, there has been a hotly contested argument over postal versus actual goneaways. The first of these is simply an item on which somebody has written "Return to Sender". The second is a confirmed customer telling a supplier that they have moved.
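The distinction can be made concrete in data terms. A minimal sketch, assuming each goneaway record carries a source flag (the field and source names here are illustrative, not from any real suppression file):

```python
from dataclasses import dataclass

# Hypothetical record layout for illustration only.
@dataclass
class GoneawayRecord:
    name: str
    address: str
    source: str  # e.g. "rts" (postal return) or "customer_notified"

def is_actual_goneaway(record: GoneawayRecord) -> bool:
    """An 'actual' goneaway: the customer has confirmed the move to a supplier.
    A 'postal' goneaway ('Return to Sender') only tells us the item came back,
    not that the named individual has actually gone."""
    return record.source == "customer_notified"

records = [
    GoneawayRecord("A Smith", "1 High St", "rts"),
    GoneawayRecord("B Jones", "2 Low Rd", "customer_notified"),
]
# Only confirmed moves survive this stricter definition.
confirmed = [r for r in records if is_actual_goneaway(r)]
```

Under the stricter definition only the customer-notified record counts, which is exactly why the choice of definition changes how much a file "over-suppresses".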
So the source of goneaway data is critical, yet often obscured in multi-source suppression files. Deceased data ought to be more absolute, but can be prone to misreporting, even if to a lesser degree. Whether the data user cares about this level of accuracy is a more challenging issue. Nearly a third of all direct mail activity is still not applying basic suppression, according to some studies. Those companies would argue that they do not want to remove individuals who might still be in residence or alive (no doubt citing the Royal Mail findings from its goneaway mailing as proof).
This can be positioned as putting the interests of those individuals first. It is in fact the opposite – giving response rates preference over best practice. So how can suppression data make the case that it should not only be used, but applied with discretion?
THE SUPPLIER’S VIEW
At the risk of sounding like a complete heretic, I'm no great fan of the "S word" – suppression. Perhaps the new Dr Who can travel back to the early Nineties and rename the process as something a little more, well… positive. To me suppression has always been a means to leveraging the greatest insight and ROI possible from data – not just removing records.
First, let me tackle that hoary old chestnut: why suppress? Yes, the Royal Mail experience of fifteen years ago which David cites did see an unsuppressed mailing to goneaways generate a commercially viable response rate. But what was the nature of the offer? And what was the volume? If I had sufficient budget, I could simply carpet-bomb entire regions and, if my offer was sufficiently attractive, probably achieve an effective response rate. But along the way, I'd have needlessly spent several hundred thousand quid.
Sound familiar? That was what dumb, unsuppressed and, dare I say, unsophisticated direct mail was like. But then the public became alarmed at the industry senselessly using the equivalent of four million trees per year, and nigh on every major brand scrambled to appear green. The estimated £50 million that misaddressed and discarded items cost UK businesses annually began to receive attention. Then finally, and most recently, recession has left everyone grappling with new rules and game plans.
Replacing this "bigger, faster, more" ethos are client retention strategies, brand loyalty, response rates and ROI. But powering these activities is data – the cleaner, more accurate and more secure, the better. And suppression remains customer and prospect data's best friend.
Look at the issue of over-suppression. In the worst case scenario, otherwise marketable name and address records can be inadvertently suppressed. So preclean data segmentation is incredibly important. Similar problems can also arise when using deceased suppression products which contain a sizable amount of unverified information.
To my mind, the four most important criteria to look at when choosing suppression data are accuracy, recency, coverage and price. Wherever possible, try to ensure that all of the suppression files you’re using contain only verified and non-assumed data. Recency is also becoming an
important selection criterion.
Suppression files which take months to compile and update may impair your response rates. Don’t constrain your marcoms strategy by using suppression files with anything less than the maximum coverage available. Otherwise you won’t be playing with a full data deck, so to speak.
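The recency criterion lends itself to a simple rule. A sketch, assuming each suppression record carries the date it was last verified (the field layout, threshold and dates are assumptions for illustration):

```python
from datetime import date, timedelta

# Illustrative data: (record_id, date_last_verified). Not a real file format.
suppression_file = [
    ("rec1", date(2024, 1, 10)),
    ("rec2", date(2022, 6, 1)),
]

def recent_enough(verified_on: date, max_age_days: int = 180,
                  today: date = date(2024, 3, 1)) -> bool:
    """Treat a suppression record as usable only if it was verified recently;
    stale records risk suppressing people whose circumstances have changed again."""
    return (today - verified_on) <= timedelta(days=max_age_days)

# Records verified within the window are kept; stale ones are set aside
# for re-verification rather than applied blindly.
usable = [rid for rid, verified in suppression_file if recent_enough(verified)]
```

The 180-day window is arbitrary here; the point is that a file which takes months to compile can fail this test before it even reaches you.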
And finally, there’s that old suppression devil – price. Once again, “pick ‘n’ mix” files may look like cheaper, viable options. But the additional costs you’ll incur by needlessly marketing to customers who have moved, died or simply aren’t interested in your offer will cost you infinitely more in terms of brand damage and/or lost sales.
THE CLIENT’S VIEW
Head of campaign services, Nectar
Data quality and supply manager, Nectar
Data accuracy is fundamental to everything we do. The ability to understand consumer behaviour and direct highly-targeted communications to customers has become an important mainstay of B2C marketing over the past ten years. It is difficult to imagine trying to manage a nationwide loyalty scheme like Nectar without effective data suppression.
Long gone is the age of generic, household-level offers. In order to achieve the highest response rates and ROI, campaigns need to be personalised and driven by accurate, up-to-date data. There are over six million different coupon offer combinations accompanying Nectar’s
quarterly point statements, for example. The words "Dear Householder" just aren't in our vocabulary.
An unsuppressed campaign would be inconceivable today from a brand management perspective. Delivering a commercially-viable response rate as cost-effectively as possible is always paramount. We’re in the loyalty business and trust has to underpin the millions of mutually-beneficial relationships between programme partners and consumers. Trust that we’re keeping personal information up-to-date, secure and accurate, that we’re awarding points correctly, and that offers will be relevant.
As for over-suppression, it enters the equation only when marketers clean data on an occasional or ad-hoc basis. This ill-advised practice may be a carryover from the early days when suppression was seen as an IT function by some marketers. Today, we must rely on keeping all of our datasets as up-to-date as possible. We regularly use commercially available suppression files to ensure that cardholders’ Nectar experience is as seamless as possible.
The accuracy of all suppression data we apply is incredibly important. Verified data trumps assumed data every time for us. This means that
"Return to Sender" isn't the sole indicator we'd rely upon. Our matching and testing routines are such that we cross-reference files against various data sources to determine a record's status. If in doubt, we match and test again.
“Assume nothing” is our default mode when it comes to suppression. We only wish the estimated 30 per cent or so of UK direct mailers that aren’t using suppression would do so and help eliminate the term ‘junk mail’.
So in our opinion, giving response rates preference over suppression/data management best practice simply isn't viable. It can damage brand image, annoy customers and won't comply with the requirements of regulators such as the ICO and ASA. If client retention and profit maximisation are the goals, then it makes sense to suppress.
Turn down the volume, turn up the value. That has been the primary strategy of companies who continue to use direct mail. Keen to leverage the strengths of a tangible contact, they have focused on ways to keep costs down while still achieving response and conversion objectives.
In doing so, many organisations continue to maintain the divide between targeting and suppression. Willingness to spend on good quality data for positive selection, combined with predictive modelling, has typified those brands still in market with their mailings. Conversely, suppression data is seen as a separate line of cost and even an unnecessary one.
The retrenchment to customer retention marketing is partly to blame for this. Organisations assume that they have better knowledge of their own customers than any third party. That assumption is false: many customers barely recognise that they have a relationship with the company, let alone think to tell it when they have moved.
Shifting towards hot leads as the basis for positive targeting is only serving to reinforce this distinction on the prospecting side of things. Marketers assume that an individual who indicates that they are in market for a product or service can be found at the address they have just provided. Lead generation data providers do nothing to contradict this view.
Yet it is surely self-evident that many of the products and services which bring consumers to market relate to house moves. How many respondents who indicate they are in market for consumer durables or financial products will give the address they are moving to, rather than the one they are moving from?
Even if the buyer of a hot lead acts quickly, they could find a promising contact cuts out halfway through the sales cycle. So suppression needs to be applied even to the most apparently recent of data sets in order to maintain their responsiveness and ROI.
With budgets more constrained than ever and marketing performance under close scrutiny, any wasted effort has become unacceptable. So why continue to overlook the one clear process that could strip out wastage and optimise results? It is time for data managers to unify the living and the dead.