New tools of the trade

The new breed of tools in the latest systems is enabling those in the know to slice and dice data in ways that will be of direct use.

In 1955, the novelist and poet Sylvia Townsend Warner listened to the General Election results on the radio and described hearing an “electric computator, purring and being fed with statistics, like a great cat”. It was, she said, the size of a small bedroom.

Forty years later, we don’t seem to have moved very far. Computers may no longer be the size of a small bedroom, but they still need looking after, coaxing into action and feeding with statistics. We all know, courtesy of our laptops, that technology is getting faster and cheaper, but don’t be deceived. To the backroom boys of IT, size is everything – they just want to cram it into a smaller space.

The effect on marketers has been profound. The appeal of newer, faster, more intelligent technologies is that they will facilitate much smarter, cheaper, more targeted campaigns. It’s amazing how such a sophisticated bunch as marketers will fall for this hokum.

In fact, as always, any technological developments that will be of real use to marketers tend to be complicated, expensive and require massive injections of time, effort and cash. Some companies know this and don’t mind – after all, every product has to have early adopters – but don’t say you haven’t been warned.

One of the initial problems is the tendency of the IT industry to come up with even more meaningless acronyms and jargon than the marketing industry, so when the two get together, there’s a huge and creative opportunity for hype and obfuscation. What you really want to know, of course, is whether there’s anything useful, in hardcore everyday marketing life, beneath the babble about fuzzy logic, neural networks and intelligent databases.

The main impetus behind the latest developments is the miniaturisation of all IT components. Making computers smaller and cheaper not only results in smaller, cheaper computers, but also enables the technicians to look at the possibilities of linking small, cheap computers to produce slightly larger, expensive computers. This is quite different from the idea, now completely discredited, of building very large, expensive mainframe computers.

Today’s expensive computers are pricy not so much because of the hardware involved but because they can run more complicated software, and sit within more complex networks.

What this means in real terms is that there’s an awful lot of information being stored all over the place. What will cost the money, and the effort, is trying to get to it, and use it to do all those things marketers want to do, like stop wasting half the direct mail budget on people who aren’t the least bit interested in the product.

The church of IT is a broad one, and there are several different factions approaching this problem, for the simple reason that solving it is going to sell much more hardware, software and consultancy services. Advances are being made on both the hardware and the software front, in the bid to build bigger and better systems.

On the hardware side, much of the effort is being devoted to linking together more and more computers, to end up with vast repositories of data. Depending on the approach taken, these are either symmetrical multi-processing (SMP) machines or massively parallel processing (MPP) machines. The differences are arcane – suffice it to say that SMP systems, in which a modest number of processors share the same memory, have proved easier to use with existing commercial operating systems and application software than MPP systems, which, as their name suggests, harness large numbers of processors, each with its own memory, working in parallel. Some of the really big data crunchers, such as Walmart in the US, have started buying MPP systems to handle huge amounts of information about customers and buying patterns.
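
For the curious, the flavour of the MPP idea can be caught in a few lines of Python – a toy sketch with invented customer records, not a picture of any vendor’s machine: each “node” owns its own slice of the data, scans it independently, and the answers are merged at the end.

```python
# Toy illustration of the MPP idea: each "node" owns its own slice of the
# customer data and scans it independently, and the answers are then merged.
# The records are invented; this is not a picture of any vendor's machine.
from multiprocessing import Pool

CUSTOMERS = [(i, (i * 37) % 500) for i in range(100_000)]  # (customer_id, spend)

def scan_partition(partition):
    """Count the big spenders in the slice of data this worker 'owns'."""
    return sum(1 for _, spend in partition if spend > 400)

def parallel_count(records, nodes=4):
    # Split the data into one partition per node, as an MPP machine would.
    partitions = [records[i::nodes] for i in range(nodes)]
    with Pool(nodes) as pool:
        return sum(pool.map(scan_partition, partitions))

if __name__ == "__main__":
    print(parallel_count(CUSTOMERS))
```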

On the software side, two main things are needed. First, the ability to make the system itself act in a smarter way, and second, the ability to fish out the relevant data, and only the relevant data.

Developments like neural networks aim to put more intelligence inside the system itself. If a database sometimes seems like a huge cardboard box, rather than a well-organised filing system, then it would be useful, argue the proponents of such technologies, if there’s an intelligence at work within the system, helping to sort things out.

Paul Gregory is managing director of Recognition Systems, a Manchester software house specialising in neural networking. Gregory’s thesis is that most of the data – 80 per cent, he reckons – collected by organisations about their customers is never turned into useful information.

One of the problems with existing systems, he says, is that they rely on linear logic. What’s needed is something that will be able to identify the non-linear patterns that give vital clues to customer behaviour. Step forward, neural networking.

“We’re beginning to see the benefits of neural networking over existing methods, particularly for companies – say, in the finance sector – that have to build lots of relationship models.”

He quotes the example of a book club mailshot. Traditional computing, says Gregory, will prompt the database to send someone regular mailshots about gardening books because they bought a couple of books three years ago. Neural techniques, he says, would stop further mailings because they might, for instance, recognise the correlation between the purchase of gardening books and moving house.
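
To see what “non-linear” means here, consider a deliberately tiny sketch in Python. The features, the responses and the network are all invented for illustration – this is not Recognition Systems’ software – but the point stands: no single linear weighting of the two inputs can fit the pattern, while a small two-layer network learns it.

```python
# A deliberately tiny two-layer network learning an XOR-style interaction --
# the kind of "non-linear pattern" a single linear score cannot capture.
# Data, features and network are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Invented features: column 1 = bought gardening books lately,
# column 2 = recently moved house. The "respond" column is 1 only when the
# two disagree -- an interaction no straight weighting of the inputs can fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)              # hidden layer
    out = sigmoid(h @ W2 + b2)            # predicted response
    d_out = (out - y) * out * (1 - out)   # gradient of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0, keepdims=True)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
# should end up close to [[0], [1], [1], [0]]
```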

According to Gregory, Sun Alliance Insurance has used neural technology to improve its direct marketing campaigns, using a system from his company that recognises which customers are most likely to respond. Sun Alliance, he says, has found that by mailing only 20 per cent of its target database, it could achieve the same level of response as mailing 63 per cent.
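
The arithmetic behind that sort of claim is worth seeing once. The Python sketch below uses invented scores and responses, so it will not reproduce Sun Alliance’s figures, but it shows the mechanics: rank customers by the model’s predicted response, mail only the best-scored slice, and count how many of the eventual responders that slice would have reached.

```python
# Toy gains calculation with invented scores and responses -- it will not
# reproduce Sun Alliance's figures, only the mechanics of the claim.
import random

random.seed(1)
customers = []
for _ in range(10_000):
    score = random.random()                    # model's predicted chance of responding
    responded = random.random() < score * 0.1  # actual response, correlated with the score
    customers.append((score, responded))

# Rank by score and see what mailing only the best fraction would capture.
customers.sort(key=lambda c: c[0], reverse=True)
total_responders = sum(responded for _, responded in customers)

for fraction in (0.2, 0.63, 1.0):
    mailed = customers[: int(len(customers) * fraction)]
    captured = sum(responded for _, responded in mailed)
    print(f"mail {fraction:.0%} of the file -> {captured / total_responders:.0%} of responders")
```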

The drawback of neural systems is that they must be fed with huge amounts of data to give them the necessary examples from which they then draw conclusions. As yet, therefore, they tend to be confined to the top-end companies which are able to spend a great deal of money and effort getting the data into the system in the first place.

What most of the database companies are trying to do, on the other hand, is come up with smarter ways of manipulating the data that already exists within their databases. Peter Weston, business development manager, retail, at database company Informix, says the drive of IT development reflects the move from mass-marketing to segmentation. “It’s moved away from those Vosene ads of the Seventies, where this was the shampoo for everyone in the family, to a situation where there are now six different shampoos in the bathroom, and the information is needed to support that,” he says.

Like all the major relational database suppliers, Informix has kept a close eye on the development of parallel hardware, which needs parallel software to exploit its full potential. The example always given is of a parallel-based system as 20 filing cabinets; making best use of it requires the ability to file information in all 20 simultaneously. Informix’s version of this is called the Online Dynamic Server. It is designed for SMP systems, although there is a souped-up version – the XPS – for MPP systems.
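
The filing-cabinet analogy translates almost directly into code. The toy Python sketch below – emphatically not how the Online Dynamic Server or the XPS actually works – hash-partitions records across 20 “cabinets” so that 20 workers could file and fetch at once rather than queueing at a single drawer.

```python
# The 20 filing cabinets, in toy form: records are hash-partitioned across
# 20 buckets so that 20 workers could file or fetch simultaneously.
# Illustrative only -- not how Online Dynamic Server or XPS actually works.
CABINETS = 20

def cabinet_for(customer_id: int) -> int:
    """Decide which of the 20 'cabinets' a record is filed in."""
    return hash(customer_id) % CABINETS

cabinets = {i: [] for i in range(CABINETS)}
for customer_id in range(1000):
    cabinets[cabinet_for(customer_id)].append(customer_id)

# Each cabinet ends up with roughly a twentieth of the records, so the work
# of filing and retrieving can be spread across all of them at once.
print(sorted(len(rows) for rows in cabinets.values()))
```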

As Weston points out, however, it’s not the ability to file the stuff away that matters, it’s the ability to get it out and use it. “The main difference between these latest systems and some of the previous decision support systems lies in the amount of data that can be stored in an ad-hoc way and then analysed using the new breed of relationship tools.”

Relationship tools, though they sound ominous, are the knives and forks of these new systems, enabling those in the know – and it is important to point out that these are not simple PC tools – to slice and dice all that information in a range of ways that will be of direct use to the marketing department.

One high-street electrical goods chain, for instance, is using database technology and tools to produce better-targeted mailshots on its extended warranty offers, ensuring, for instance, that a mailing goes out a month before the customer’s manufacturer’s warranty expires.
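
Stripped of the jargon, the selection behind such a mailing is straightforward. Here is a sketch in Python with made-up customer records and a made-up warranty_expiry helper, picking out everyone whose cover runs out roughly a month from a chosen date.

```python
# Sketch of the selection behind such a mailshot, with made-up customer
# records and a made-up warranty_expiry() helper: pick everyone whose
# warranty runs out roughly a month from a chosen date.
from datetime import date, timedelta

customers = [
    {"name": "A. Smith", "purchase": date(1994, 7, 20), "warranty_months": 12},
    {"name": "B. Jones", "purchase": date(1995, 1, 3),  "warranty_months": 12},
    {"name": "C. Patel", "purchase": date(1993, 6, 20), "warranty_months": 24},
]

def warranty_expiry(c):
    # Rough month arithmetic is good enough for choosing a mailing window.
    return c["purchase"] + timedelta(days=30 * c["warranty_months"])

today = date(1995, 6, 15)  # a fixed "today" so the example is repeatable
window_start = today + timedelta(days=28)
window_end = today + timedelta(days=42)

to_mail = [c["name"] for c in customers
           if window_start <= warranty_expiry(c) <= window_end]
print(to_mail)  # -> ['A. Smith'] with these made-up records
```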

Ray McGinley, product consultant at Computer Associates, says the latest databases allow stores to move their processing out from a centralised department and into local branches, to enable much quicker analysis of product sales.

But there are still problems to be solved. If you move processing to a local branch, for instance, head office will still want much of that data, and information in all databases must be correct and up-to-date. McGinley also points out that marketing departments aren’t going to sit down, build and run their own massive database. “I don’t see people building systems just for marketing,” he says. “It tends to fall out of the other systems.”

Despite this, there is a big drive to provide marketers with the information they need, and the tools to use it. It is said that one white goods manufacturer is asking its field service staff to try to make a note of people’s other appliances when out doing a repair, for feeding back into their census of brand market penetration.

On the other hand, marketers have been doing this sort of thing for ages. Technology merely allows them to do it a bit faster.