Optimising campaign effectiveness in 2010

Ken Parnham, managing director of Adtech UK & Ireland, offers his advice on how to get the best results from online marketing campaigns.


The phrases “survival of the fittest” and “getting more with less” have never been truer. The online advertising market is a tough place to make margin and, in a challenging economy, it has become even harder. Looking ahead to 2010, publishers need to focus on maximising inventory, seeking out ways to increase cost per mille (CPM) margins and to minimise false click-through rates (CTRs).

There are many “ad optimisation” services and providers operating in the market now, but there’s actually a lot publishers and agencies can do simply by re-evaluating their existing ad serving solution.

Many such technologies already have several optimisation features for campaigns within them, but it’s something users might not be fully aware of or have experience with.

An example of such a feature is eCPM (effective CPM) prioritisation, which enables the campaign generating the most revenue to be delivered with the highest priority. The ad server checks the “weight” of each campaign (defined by the revenue it generates), compares that weight with competing campaigns and steers priorities so that the highest possible revenue is reached.
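As a rough illustration only (not any particular ad server's implementation, and with invented campaign figures), eCPM-based steering can be pictured as weighting each campaign by the revenue it returns per thousand impressions and serving the highest-weighted eligible campaign first:

```python
# Illustrative eCPM sketch; campaign names, pricing models and numbers are hypothetical.

def ecpm(campaign):
    """Effective CPM: revenue generated per 1,000 impressions delivered."""
    if campaign["impressions"] == 0:
        return 0.0
    return campaign["revenue"] / campaign["impressions"] * 1000

campaigns = [
    {"name": "CPM deal", "revenue": 450.0, "impressions": 150_000},  # 3.00 eCPM
    {"name": "CPC deal", "revenue": 620.0, "impressions": 180_000},  # ~3.44 eCPM
    {"name": "CPA deal", "revenue": 300.0, "impressions": 200_000},  # 1.50 eCPM
]

# Steer delivery priority towards the campaign with the highest calculated weight.
for c in sorted(campaigns, key=ecpm, reverse=True):
    print(f'{c["name"]}: eCPM = {ecpm(c):.2f}')
```

Whatever the pricing model (CPM, CPC or CPA), converting everything to an effective CPM gives a single comparable weight per campaign.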

To continue the theme of getting more out of existing technology, publishers should also take a look at newly emerging features like re-targeting. This enables publishers to target users who have previously seen a certain campaign, clicked it or made a transaction – making campaign delivery penetrate to a deeper level. In short, there’s a lot of opportunity to maximise what you’ve already got – or to move to a solution which gives you that flexibility.
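A minimal sketch of the re-targeting idea, with hypothetical user IDs and event names: the ad server keeps a per-user history of which campaigns a visitor has seen, clicked or converted on, and later requests are matched against those segments.

```python
# Hypothetical re-targeting check; segment names and event types are illustrative.

user_history = {
    "user-123": [("campaign-42", "impression"), ("campaign-42", "click")],
    "user-456": [("campaign-42", "impression")],
}

def in_segment(user_id, campaign_id, event):
    """True if the user previously generated the given event for the campaign."""
    return (campaign_id, event) in user_history.get(user_id, [])

# Re-target only visitors who clicked campaign-42 but have not yet made a transaction.
for uid in user_history:
    if in_segment(uid, "campaign-42", "click") and not in_segment(uid, "campaign-42", "transaction"):
        print(f"{uid}: eligible for re-targeting")
```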

Aside from ad optimisation, publishers and agencies will have to be more clued up on making sure that click counts are clear and accurate – to minimise discrepancies and ensure campaign effectiveness!

It’s something I take very seriously as the online ad market has thrived based on its transparency and ability to provide metrics on campaign success. The Internet offers far more specific figures than television or radio – through supplied banners, clicks and duration of stay – but this is a source of lively debate.

Publishers, media agencies, advertisers and ad-serving providers all track ad delivery, yet they invariably come up with different results. This is completely normal and stems from a number of causes, including pop-up blockers, sluggish servers and incorrectly programmed advertising media. What aggrieves the industry more, though, are high counting discrepancies.

Deviations of below five per cent are regarded as exemplary. Figures in the higher single-digit range are also accepted. However, discrepancies in excess of ten per cent necessitate close examination.
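As a simple worked example (the impression counts are invented), a discrepancy can be expressed as the percentage difference between, say, the publisher's and the agency's figures and checked against these thresholds:

```python
# Illustrative discrepancy check; the counts below are made up.

def discrepancy(count_a, count_b):
    """Percentage difference between two counts, relative to the larger one."""
    return abs(count_a - count_b) / max(count_a, count_b) * 100

publisher_count = 1_000_000
agency_count = 940_000

d = discrepancy(publisher_count, agency_count)
if d < 5:
    verdict = "exemplary"
elif d <= 10:
    verdict = "acceptable"
else:
    verdict = "needs close examination"

print(f"Discrepancy: {d:.1f}% ({verdict})")  # 6.0% -> acceptable
```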

The best way to reduce discrepancies is to employ standardised counting methods. The IAB has defined globally applicable measurement guidelines and their implementation will ensure reliable values when recording ad impressions. These measurement guidelines recommend the following measures:

Cache busting
Temporary storage of advertising media that have already been saved to the browser or proxy cache can give rise to counting discrepancies. This can be avoided through a variety of techniques, e.g. adding a random number or a timestamp to each ad request. The browser then requests the advertising media from the ad server again whenever the page is loaded, rather than retrieving it from the cache.
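A minimal sketch of the idea (the ad server hostname and parameters are hypothetical): appending a random number or timestamp to each ad request makes every URL unique, so the browser cannot satisfy it from its cache.

```python
# Cache-busting sketch; the hostname and query parameters are hypothetical.
import random

def ad_request_url(placement_id):
    """Build an ad request whose URL changes on every page load."""
    cache_buster = random.randint(0, 10**9)  # a millisecond timestamp works equally well
    return (f"https://adserver.example.com/adrequest"
            f"?placement={placement_id}&cb={cache_buster}")

print(ad_request_url("homepage_leaderboard"))
print(ad_request_url("homepage_leaderboard"))  # different URL, so no cache hit
```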

Filtering
Page views by software programmes such as robots, spiders or web crawlers (non-human activity) may artificially force up the number of ad requests. Providers should filter out these impressions on the basis of blacklists and robot lists issued by the IAB, so that they are not included in the report. This also applies to bad requests that are only partially received by the adserver.
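A rough sketch of that filtering step, assuming the robot list has been loaded as a set of user-agent patterns (the patterns and log entries here are invented; in practice the IAB publishes the authoritative robot and spider lists):

```python
# Bot-filtering sketch; user-agent patterns and requests are illustrative only.

ROBOT_PATTERNS = {"bot", "spider", "crawler"}  # placeholder for the IAB robot list

def is_robot(user_agent):
    ua = user_agent.lower()
    return any(pattern in ua for pattern in ROBOT_PATTERNS)

ad_requests = [
    {"user_agent": "Mozilla/5.0 (Windows NT 6.1) Firefox/3.6", "complete": True},
    {"user_agent": "Googlebot/2.1 (+http://www.google.com/bot.html)", "complete": True},
    {"user_agent": "Mozilla/5.0 (Macintosh) Safari/531.9", "complete": False},  # bad request
]

# Count only complete requests from human traffic.
billable = [r for r in ad_requests if r["complete"] and not is_robot(r["user_agent"])]
print(f"{len(billable)} of {len(ad_requests)} requests counted")
```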

Viewcount
Unlike reporting ad impressions, viewcount logs users’ adviews. The procedure gauges whether a banner was successfully displayed in the browser.
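One way to picture the difference (the event names below are hypothetical): an ad impression is counted when the ad server delivers the creative, whereas an adview is only logged once the browser confirms the banner has actually rendered, typically by requesting a tracking pixel after the creative loads.

```python
# Viewcount sketch; the event log is invented for illustration.
# "impression" = ad server delivered the creative;
# "adview"     = browser confirmed the banner rendered successfully.

events = [
    {"request_id": "r1", "type": "impression"},
    {"request_id": "r1", "type": "adview"},      # banner rendered successfully
    {"request_id": "r2", "type": "impression"},  # delivered, but never rendered
]

impressions = sum(1 for e in events if e["type"] == "impression")
adviews = sum(1 for e in events if e["type"] == "adview")

print(f"Ad impressions: {impressions}, adviews: {adviews}")  # 2 vs 1
```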

The measures described are aimed at reducing counting discrepancies, although they will never eliminate them completely. Occasional differences in figures will still occur because, as it were, apples are being compared with pears.

Certification and standardisation are the best way of avoiding excessively high counting discrepancies and the subsequent search for causes. It is our responsibility, and the industry's, to minimise click fraud so that clients have confidence and trust in providers.

Ad servers need to be able to analyse any suspicious data and have a robust method of viewing the data to confirm or deny any potential click fraud. It’s a function every publisher/advertiser should be asking for when looking for a trustworthy solution.

Finally, businesses should always look for the IAB certificate – a seal of quality, reviewed annually, which confirms the highest degree of transparency and precision in ad serving and in the reporting of display ads.

Our stance on this drive is absolutely clear – any kind of standardisation will serve to increase our industry's credibility, and that can only be to our benefit and our customers'.
