Missing links in the data chain

Results from our online Data Analytics Survey show that customer databases are widely held, but suffer from process and quality issues

Possession may be nine-tenths of the law, but it is only when you make use of those possessions that you start to acquire some rights. Owning a car is meaningful only if you take it out onto the roads for a drive, for example.

So it is with customer databases – just having a compiled set of data is not the same as carrying out data-driven marketing or evidence-based decision making. At worst, it could just mean an electronic mailing or telemarketing list.

Since the early 1990s when I first started writing about data, the proportion of companies claiming to be in possession of a customer database has hovered around nine out of ten. The results of the latest Data Strategy online survey are no different, with 94 per cent claiming this asset.

The extent to which it is being put to use for value-adding processes, such as data mining and analytics, is less extensive, however. Three out of ten companies do not have this resource in their organisation. If they did, it is possible that some of the actions they took as a result might be skewed, given that nearly four out of ten do not have data quality programmes in place.

This gap between holding data and using it is not really a reflection of scale. Among the 95 responders to the survey, 15 per cent are operating very large scale databases of five million-plus records, with 17 per cent holding between one and five million. But at the other end of the scale, 20 per cent have under 50,000 records and 12 per cent between 50,000 and 100,000.

Lower volume customer databases are almost certainly run by companies operating in business-to-business markets and these comprised a bare majority (52 per cent) of responders to the survey. With a spread across virtually every industry sector, it is clear that database practices are at very different levels with no obvious pattern.

A useful indicator of these differences can be found in the information being held on the customer database. Demographics and contact data are pretty standard – although there is a surprising gap in the population of evening telephone numbers, which are held by only 36 per cent. Even allowing for B2B records, where this would not be useful, this leaves a quarter of B2C organisations unable to call customers during prime contact periods.

Email addresses have been captured by 81 per cent, which is encouraging but still leaves an important gap in contact data. An email address without the right permissions does not get you very far, however. The survey did not ask about opt-in rates – how many would have been able to answer accurately?

Where customer databases are being less well exploited is in the application of segmentation and transactional indicators. Only just over half (52 per cent) hold a segment code, while 56 per cent have purchase history by value and 62 per cent purchase history by product. Leveraging these dimensions is probably the difference between using data to drive the business and just holding data within the business.

One explanation for these gaps can be found in the sources from which information in customer databases is captured. Sales records dominate, closely followed by operational system extracts, online data capture and email responses. Customer services records are also used.

Yet all bar the first of these provide feeds to fewer than half of databases, suggesting gaps in the process for building customer information. Or perhaps it is a disconnect between the business and the database – while marketing has access to the database in 86 per cent of companies, only 52 per cent have a customer insight team. The board only sees customer data at one third (35 per cent) of organisations, clearly those with a more advanced business culture.

That is probably why customer databases are primarily being used for straightforward functions like marketing campaigns (82 per cent), marketing planning (73 per cent) or customer management (66 per cent). While it is encouraging that 59 per cent use customer databases for predictive analytics, only 47 per cent use them in business planning and just 17 per cent for risk management.

Even so, customer databases are still very much in favour. Six out of ten companies say they are extremely important to their business. That explains why 39 per cent plan to invest more in this asset this year, while 29 per cent will maintain existing levels of spending.

Of greater concern is whether this investment is being directed towards the right areas. Nearly four out of ten (38 per cent) of respondents said that their organisation did not have a data quality programme in place. Given the turbulence in both business and consumer markets at present, simply capturing data is not enough – it needs to be maintained, validated and enhanced.
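The maintenance and validation the survey points to can be partly automated. A minimal sketch of a record-level quality check, assuming a simple list-of-dicts store – the field names, thresholds and sample records here are illustrative, not drawn from the survey:

```python
import re
from datetime import date

# Hypothetical customer records; field names are assumptions for illustration.
customers = [
    {"name": "A. Smith", "email": "a.smith@example.com", "last_updated": date(2024, 3, 1)},
    {"name": "B. Jones", "email": "not-an-email", "last_updated": date(2020, 1, 15)},
    {"name": "", "email": "c.brown@example.com", "last_updated": date(2024, 6, 10)},
]

# Deliberately loose pattern: just checks for one "@" and a dot in the domain.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def quality_issues(record, stale_after_days=730, today=date(2024, 7, 1)):
    """Return a list of data quality problems found in one record."""
    issues = []
    if not record["name"].strip():
        issues.append("missing name")
    if not EMAIL_RE.match(record["email"]):
        issues.append("invalid email")
    if (today - record["last_updated"]).days > stale_after_days:
        issues.append("stale record")
    return issues

# One pass over the database yields a simple quality report per record.
report = {r["email"]: quality_issues(r) for r in customers}
```

A real programme would go further – deduplication, postal address validation, suppression file screening – but even a loop like this surfaces the decay that undermines the investment.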

This could explain why satisfaction levels for the information held on customer databases are in the balance. Only 11 per cent said they were very satisfied and 39 per cent that they were reasonably satisfied. Against this, 24 per cent are somewhat unsatisfied and 2 per cent very unsatisfied. The neutral position held by 15 per cent speaks of potential quality issues that have yet to surface.

Where data quality is being addressed, the primary drivers are highly tactical – to improve the performance of marketing, customer relationships, the business overall or sales. More strategic reasons lag behind – compliance was named by 61 per cent of those companies with a data quality programme in place, but only 20 per cent named their environmental policy or CSR strategy.

Linking the tactical-level need for data to be accurate and up-to-date with the strategic corporate desire to demonstrate best practice is the moment when the status of data fundamentally changes. It is no coincidence that data governance programmes exist in those organisations who take a more holistic view of every action.

Viewed in terms of their maturity, it is clear that data quality management currently lacks a common standard and set of approaches. Equal numbers of respondents reported that their own programmes were standalone projects by individual functions, run by functions in line with corporate strategy, run centrally by a data governance function or run centrally by IT.

Responsibility for data quality is also widely distributed among job titles. Database managers are given the task in 58 per cent of companies with a data quality programme, with 46 per cent leaving the task to a database analyst. A further 8 per cent rely on their data manager to run the programme.

Specific data quality roles are beginning to emerge – 17 per cent of companies with a programme have appointed a data quality analyst, while 19 per cent have a data quality manager. The most strategic role – data steward – is only found in 8 per cent of companies specifically addressing data quality.

Among the four out of ten businesses who have yet to tackle the issue, there is a recognition that it may have an impact. Principal concerns are that poor data quality may drive up marketing costs, damage customer relationships or lead to a potential loss of sales. Brand reputation is also a concern.

More strategic problems receive less attention. Inaccurate business reporting worries half of those companies with no data quality programme, but compliance failures and fraud are only just beginning to be identified as problem areas. Typically, it is the immediate consequences of poor data quality that get noticed – email bouncebacks, postal returns and customer complaints.

The main obstacles to introducing a data quality strategy are not tactical, but strategic, however. Company culture is the primary barrier, together with a lack of internal processes and human resources. Cost is a barrier, but so too is the lack of executive sponsorship, which was named by 28 per cent of those organisations who have not implemented a data quality strategy.

As the use of data mining and analytics continues to grow, the absence of data quality projects could become a submerged problem whose impact is felt elsewhere, without being identified as a cause. With seven out of ten companies using analytics, the threat is that the outputs from these techniques get skewed by flaws in the underlying data sets.

Looking at what data mining and analytics are used for makes these potential errors evident. Customer segmentation (63 per cent) and customer profiling (58 per cent) are the major outputs. Each of these can provide a faulty view of the customer base and business potential if there are missing or inaccurate variables in the data being profiled and clustered.

Response analysis is in place at 41 per cent of all companies who responded, suggesting either that the remaining six out of ten are not concerned about how their marketing performs or that they do not have the measurement loops in place. Sophisticated modelling and analysis is the preserve of an elite group who make up one-third of the total sample.

These are the businesses who are running response and conversion propensity models, LTV and RFM analyses. In understanding where best practice sits in the total market for customer databases, this gap is as suggestive as that between companies with a data quality programme and those without.
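The RFM analysis mentioned above reduces to three figures per customer: how recently they bought, how often, and how much. A minimal sketch, assuming a transaction log of (customer, date, amount) tuples – the sample data is invented, and real implementations would go on to bucket these raw figures into quintile scores:

```python
from collections import defaultdict
from datetime import date

# Hypothetical transaction log: (customer_id, purchase_date, amount).
transactions = [
    ("c1", date(2024, 6, 20), 120.0),
    ("c1", date(2024, 5, 2), 80.0),
    ("c2", date(2023, 1, 10), 30.0),
    ("c1", date(2024, 1, 5), 200.0),
    ("c2", date(2023, 3, 1), 45.0),
]

def rfm_figures(txns, today=date(2024, 7, 1)):
    """Compute raw recency/frequency/monetary figures per customer."""
    per_customer = defaultdict(list)
    for cust, when, amount in txns:
        per_customer[cust].append((when, amount))
    figures = {}
    for cust, rows in per_customer.items():
        last_purchase = max(when for when, _ in rows)
        figures[cust] = {
            "recency_days": (today - last_purchase).days,  # lower is better
            "frequency": len(rows),                        # purchase count
            "monetary": sum(amount for _, amount in rows), # total spend
        }
    return figures
```

Even in this raw form, the output separates a recent, frequent, high-value customer from a lapsed, low-value one – which is exactly the insight the one-third elite is acting on.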

If your organisation is assembling customer data and then just counting variables or grouping customers in rudimentary ways, then it is not leveraging the real power of the information. Worse, it is ceding marketing opportunities and business potential to those who have a deep insight into whether customers are likely to purchase and what their future value might be.

Investing in customer databases is not the same as investing in data analytics. In fact, it is a two-stage process, with the database coming first and the analytical platform following on. That is why only 32 per cent of the sample have a dedicated analytical database, since it is an expensive, if productive, resource to create.

The rest rely on more temporary measures, such as a regular extract of the full data set (29 per cent) or an ad-hoc full data extract (24 per cent). Some have to make do with sample extracts either on a regular basis (12 per cent) or ad-hoc (25 per cent). Many use multiple ways to build an analytical data set depending on the business need.

But there is still room for improvement in data mining and analytics. Among those who are doing it, only 31 per cent are extremely or very satisfied, whereas half are only reasonably satisfied. A small proportion (11 per cent) are not very satisfied, but none are completely disappointed.

The root cause of dissatisfaction with performance could be accuracy. Just over one third of those practising analytics had a positive view of its accuracy, but nearly half are only reasonably satisfied.

This only serves to underline the interconnected nature of data inputs and outputs. Building a customer database is a necessary first step towards improving all aspects of a company’s activities, from its operational effectiveness and marketing performance to strategic planning and compliance.

From this initial stage, the information being gathered has to be managed and maintained. Unless it is kept clean and accurate, the investment will degrade. That has a knock-on effect on data mining and analytics – assuming that the right environment and resources to support this have been put in place.

Customer data does have an element of the Forth Bridge about it – no sooner have you finished painting it than you have to start all over again. What makes that worthwhile is the strategic links that are created as a result. And there are no shortcuts.
