The very public problems of Diane Modahl and the Brent Spar oil rig are, surprisingly, connected. Both are high-profile cases where the results of chemical analysis were disputed. In Modahl’s case, her urine sample was kept in the wrong conditions before it was tested. As for Greenpeace, its sampling error over-estimated hydrocarbon content by a factor of 100.
These are the cases that make the headlines. The story behind the headlines is even more fascinating, and concerns the inherent difficulty of producing reliable results in analytical science.
Studies have shown that two laboratories can analyse the same sample and produce very different findings, sometimes with damaging consequences, including losses of millions of pounds every year. Unreliable analysis affects the efficiency of British industry, our concern for the environment, our quality of life and our ability to trade and compete internationally.
The effects could be felt by any one of us. We have all given a blood sample, for example, and been happy to hear that we’re in good shape. I don’t suppose any of us has stopped to think that the analyst might have got it wrong.
Aspen Business Communications has started working on a campaign for a product called Valid Analytical Measurement, or VAM.
For the past two years we have been working with the DTI, researching the market and preparing for a campaign to market this new initiative, which the DTI believes will reduce problems of unreliable chemical analysis.
But why the problem in the first place? After all, analytical chemists and physicists are highly trained and dedicated professionals.
The reason they sometimes make mistakes is simply that analysis is very difficult and certainly not an exact science. Measurement is always an estimate. New technology and ever more demanding customers further challenge the powers and skills of the analyst. It’s not surprising that errors can and do creep in. What the DTI has done is to identify six areas of laboratory practice where failure can occur and has developed a proven methodology to eradicate error.
It is this methodology which is called VAM. Its aim is to increase the reliability and comparability of results, which was pretty much the platform on which VAM was marketed to the analytical community until we were appointed.
Given the scale of the problem as we understood it, we thought laboratories would be falling over themselves to adopt the initiative but we soon found out it wasn’t going to be as simple as that.
The problem lay in the positioning and proposition for VAM. Laboratory owners and directors had already been bombarded by quality initiatives, ISO 9000 and science-specific accreditation schemes, all aimed at improving reliability. The task facing the DTI was to define VAM’s role in the scheme of things. Why should laboratories want to adopt VAM?
Working with the DTI’s contractors, the National Physical Laboratory and the Laboratory of the Government Chemist, we set about researching the marketplace to define VAM as a product and establish a selling proposition. We used two research techniques: the first was in-depth, face-to-face interviews with quality managers; the second, 400 telephone interviews using NOP’s CATI – computer-assisted telephone interviewing.
We chose CATI because it enables an interviewer to follow very complex routing through a questionnaire. The marketing programme for VAM is on a tight budget and we needed this research to give us all the answers, so we constructed a number of hypotheses which we wanted to test. What respondents said about barriers to adoption, and whether they were from the private or public sector, would alter the direction of the interview. CATI is powerful enough to cope with this.
What we found changed the direction of the marketing programme. Although the DTI was marketing to scientists, whose professional pride in the quality of their work is renowned, it was the commercial realities facing their businesses that would determine their reaction to VAM. VAM is a non-audited system, so there is nothing to show a potential customer that you are a quality outfit – there is no marketing advantage as such. Laboratories have to derive a direct benefit, and it has to reflect commercial necessities.
The research was conclusive. VAM would only make sense if we could show that any organisation would enjoy a financial benefit from investing in yet another quality programme – and one, moreover, that we could not claim was better than the others, because it is needed in addition to them.
Knowledge of the product combined with the research results led to a whole new USP – that VAM would reduce the costs and risks associated with unreliable measurements.
In the direct marketing and product literature, all we talk about now are the direct costs – such as analysts repeating measurements – and the hidden costs – such as litigation and compensation.
The case studies and testimonials are the same – all centred on the financial. Wherever possible we try to put some figures to it all – we have to prove to the potential customers that VAM really can save more money than it takes to implement it. There can be no hollow claims in this market. The scientific community is not easily sold to.
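The shape of that financial argument is simple enough to put into figures. Every number below is a hypothetical assumption, invented purely to illustrate the calculation a laboratory would make – none of them are real VAM case-study figures.

```python
# Hypothetical cost model for a single laboratory (all figures assumed).
measurements_per_year = 10_000
repeat_rate_before = 0.05          # assumed: 5% of measurements repeated
repeat_rate_after = 0.01           # assumed: 1% repeated after adopting VAM
cost_per_measurement = 40.0        # assumed: GBP per analysis

vam_implementation_cost = 8_000.0  # assumed one-off adoption cost

# Direct saving from fewer repeated measurements.
saving = (repeat_rate_before - repeat_rate_after) \
    * measurements_per_year * cost_per_measurement
net_benefit = saving - vam_implementation_cost

print(f"Annual saving on repeats: GBP {saving:,.0f}")
print(f"Net benefit in year one:  GBP {net_benefit:,.0f}")
```

On these assumed figures the saving on repeated measurements alone outstrips the adoption cost in the first year, and that is before any hidden costs such as litigation are counted – which is precisely the case the literature has to make.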
The research was completed earlier this year and in June our test mailing yielded close to four per cent response within a sample that had little exposure to VAM and an over-exposure to other schemes.
It is probably too early to tell whether we have the right positioning for VAM for the entire market but we will put our trust in research again. After the next mailing we are planning some further interviews, but on a much simpler scale.
By using CATI the first time round we have identified the parameters by which VAM will be judged by the market. All we have to do now is find out which are the most acceptable, credible and powerful.