The biggest problem facing users of Web search engines today is the quality of the results they get back. While the results are often amusing and expand users’ horizons, they are often frustrating and consume precious time.
How true of the search experience today, yet these words were written in 1997 – in a paper by two Stanford University PhD students, Sergey Brin and Larry Page, outlining their new search engine, which they called Google. To read their paper for yourself, search for “The Anatomy of a Large-Scale Hypertextual Web Search Engine” using any of the major search engines.
Today, with the explosion in internet use worldwide and the huge volume of information that is being added to the Web on a daily basis, quality is even more of an issue than it was in 1997.
Users equate quality with relevance. They get annoyed when they get results that are, as far as they are concerned, irrelevant and time-wasting.
Paul Frampton, head of digital at Media Contacts, says: “What I search for is exactly what I want to get back, and in an ideal world should address 100 per cent of my needs. A search engine’s lifeblood is its quality.”
Most search engines rely on complicated mathematical algorithms to generate their lists of results. That makes them very fast at searching the Web and finding information that appears, on the surface, to be relevant. In reality, though, the results may be totally off-topic.
In natural search, that might happen because keywords have more than one meaning. Alternatively, a website may be so poorly designed that while a search engine’s indexing software, or “spiders”, can see it easily, human users can’t.
A third possibility is that the site has been designed to pull in as much Web traffic as possible through various underhand methods, such as deliberately hiding well-used keywords in parts of the site structure that the spiders will see but people won’t.
Computer programs have no imagination and no discretion, and – as yet – do not understand the importance of context.
As Bill Staples, head of search at Sapient Interactive and former senior manager for UK search at Ask.com, says: “If you asked a policeman in London the question ‘Paddington?’ you may expect a response of how to get there, but probably not a history of the station or an offer of a stuffed toy.”
On the other hand, in pay-per-click or paid-for search, you might get irrelevant results because someone has been deliberately buying keywords that do not match the information on their site or the products they sell, just to attract traffic.
State of mind
Even when the results, strictly speaking, answer the search terms, they may not match users’ state of mind. If an individual is in the research stage of the buying process, rather than looking to make an immediate purchase, pages full of hard-sell advertising copy will be irrelevant.
Laurent Boninfante, head of search at OMD Digital, observes: “The challenge for advertisers and agencies is to engage the consumer and not the search query. In a direct response context, where maximising return on investment is paramount, the temptation is to emphasise in a sponsored result how good a deal the advertiser is offering. That can alienate a consumer in a research mindset.”
Gavin Ailes, business director of finance and multichannel at search agency The Search Works, points out: “There is a misconception that paid search results are less relevant than natural listings.”
That is not true any more, he adds, because as the advertiser has the chance to control both the creative message and the landing page, the user can be directed to the most relevant place on the site with little or no additional navigation needed.
In the early days of the Web, search engines could get around issues of quality and relevance by incorporating actual human intelligence. What they did was to use real people as researchers – either answering all the queries, or editing results generated by software programs to weed out the irrelevant ones.
The switch to purely software-based search engines was driven by the explosive growth in the size of the Web, and the almost exponential growth in the volume of data that was accessible.
However, new search engines are being developed that intend to revive this human-based model, by coupling it with social networking. For example, Jimmy Wales, the man credited with founding Wikipedia, has just launched his “open source” search engine, Wikia Search.
In an interview promoting the launch, Wales said: “Good quality search is becoming a commodity item. The search quality of Google, Yahoo! and Ask is actually very similar. So the idea that Google is some kind of technological powerhouse is no longer true.”
Wales’ idea is that the people who use his new search engine will themselves help with the searching, replacing some, if not all, of the mathematical formulae other engines rely on with human intelligence.
Even Google has had to revisit the idea of the human searcher. As Nilhan Jayasinghe, head of natural search at agency Spannerworks, says: “Google has always tried to do everything with algorithms, but last year it admitted to adding manual reviewers for certain areas.”
Aleksandra Bosnjak, an analyst with research company StrategyEye, points out that when it comes to incorporating the human element into search “big search engines such as Google, Yahoo! and MSN appear to be less innovative than small user-generated content sites and targeted search engines”.
She points to a user-generated content aggregator as an example of how social search can work. “Users post news stories, which are then promoted to the front page of the website through algorithms that manage a ranking system.
“The ranking system and the promotion of articles are prompted by the users of the community who can ‘dig’ the article, meaning they find the article worthy of being promoted to a higher rank.”
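The mechanism Bosnjak describes, community votes feeding a ranking system, can be sketched in a few lines. The model below is purely illustrative and not the site’s actual algorithm: each story’s score combines its vote count with its age, so fresh, heavily promoted stories rise to the front page.

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    submitted_at: float  # Unix timestamp of submission
    votes: int = 0       # each "dig" from a community member adds one vote

def score(story: Story, now: float, gravity: float = 1.5) -> float:
    """Hypothetical ranking: votes are discounted as the story ages.

    Newer stories with many votes outrank older ones, which is the
    general shape of vote-driven front pages; the weights here are
    illustrative, as the real formula is not public.
    """
    age_hours = (now - story.submitted_at) / 3600
    return story.votes / (age_hours + 2) ** gravity

def front_page(stories: list[Story], now: float, limit: int = 10) -> list[Story]:
    """Return the highest-scoring stories, best first."""
    return sorted(stories, key=lambda s: score(s, now), reverse=True)[:limit]
```

With these illustrative weights, a one-hour-old story with 30 votes outranks a ten-hour-old story with 100, which is the freshness-versus-popularity trade-off such sites tune.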
Bosnjak adds: “Social search engines leverage the power of humans to filter search results and return more relevant results. The development of hybrid search technologies, which combine crawling and algorithmic search with human input, has seen a significant rise. The search results are influenced by an editorial process as a result of the input of company editors, the search community, or contact groups in some way. Ultimately, this intends to help users take short cuts to relevant content.”
Examples she lists include human-powered search engines, such as Mahalo, ChaCha and Bessed; search engines that use the search community for rating the results in some way, such as NosyJoe, Sproose and Yoople!; and search engines that leverage bookmarking and social networks, such as Nsight, Gravee and Searchles.
Be that as it may, the search market is likely to be dominated by the likes of Google, Yahoo! Search and MSN Search for some time. So the major engines are looking for ways to drive up the quality of the searches they deliver.
At the moment, the focus seems to be on analysing the user experience. If a user clicks through to a site that ranks highly for their search but almost immediately returns to the search engine, or moves to a completely different site, the assumption will be that the result was not relevant. That behaviour will be factored into future searches, so sites that fail to hold users’ attention will be penalised in some way.
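That bounce-back signal can be sketched as a simple model. Nothing below reflects any engine’s real formula: the threshold, names and weighting are assumptions, but they capture the idea of discounting results that users abandon almost immediately.

```python
from collections import defaultdict

BOUNCE_THRESHOLD_SECONDS = 10  # illustrative cut-off for "almost immediately"

class BounceTracker:
    """Hypothetical relevance signal: penalise results users abandon quickly."""

    def __init__(self) -> None:
        self.visits = defaultdict(int)   # url -> total clicks from results page
        self.bounces = defaultdict(int)  # url -> clicks followed by a quick return

    def record_visit(self, url: str, dwell_seconds: float) -> None:
        """Log one click-through and whether it ended in a bounce."""
        self.visits[url] += 1
        if dwell_seconds < BOUNCE_THRESHOLD_SECONDS:
            self.bounces[url] += 1

    def ranking_weight(self, url: str) -> float:
        """1.0 for a site that holds attention, shrinking toward 0 as bounces mount."""
        if self.visits[url] == 0:
            return 1.0  # no evidence yet, so no penalty
        return 1.0 - self.bounces[url] / self.visits[url]
```

A site whose visitors bounce half the time would see its weight halved in future rankings under this toy scheme.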
In paid-for search marketing, one of Google’s early innovations was the Quality Score. Matt Mills, search director at digital media agency EquiMedia, says: “The Quality Score is based on relevance of the site content, ad copy and keywords. A higher Quality Score means lower costs for keywords and a higher ad position.”
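Mills’ description matches the simplified ad-auction relationship Google has historically published: ad position is determined by the advertiser’s bid multiplied by its Quality Score. A minimal sketch with illustrative numbers (the live auction includes further factors):

```python
def ad_rank(max_cpc_bid: float, quality_score: float) -> float:
    """Simplified Ad Rank: bid multiplied by Quality Score.

    This mirrors the relationship Mills describes: a higher Quality
    Score lets an advertiser win a higher position at a lower cost.
    (Google's live formula includes further factors; this is the
    classic simplified form.)
    """
    return max_cpc_bid * quality_score

# Illustrative auction: the lower bidder wins on quality.
high_quality = ad_rank(max_cpc_bid=1.00, quality_score=8)  # rank 8.0
low_quality = ad_rank(max_cpc_bid=1.50, quality_score=4)   # rank 6.0
```

Here the advertiser bidding £1.00 with a Quality Score of 8 outranks the rival bidding £1.50 with a score of 4, which is exactly the incentive to improve site and ad relevance that Mills points to.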
He adds: “The search engines are really pushing advertisers to improve the quality of their sites and improve users’ experience. After all, the engines are themselves competing for traffic and gaining consumer trust in their search results.”
The experts agree, though, that there seems to be little point in marketers deliberately setting out to deliver search results that fail to meet users’ expectations. While that may have worked when the Web was young, today’s users are more sophisticated and far more savvy.
As Tino Nombro, managing director of search agency Ambergreen, observes: “If they are asking, then they want to know and, more than likely, they want to buy. There’s no point in trying to fool somebody and get found for something you don’t sell. What you should be doing is taking people who are likely to become your customers to the information they want as quickly as possible.”
Amanda Davie, head of search at agency i-Level, puts it even more forcefully: “As always, content is king. You can do more damage to your brand by not having high quality content on your landing page – by not delivering exactly what the consumer is searching for and therefore by disappointing or delivering a poor user experience – than by not bothering to market to them in the first place and letting your competitors capture them.”