Online research: the crack cocaine of media evaluation

The low cost, fast turnaround and ease of online research have turned it into the crack cocaine of media evaluation: we know it is bad for us, but it is addictive and gives us an instant high.

So a big thumbs up and round of applause should go to the IAB in the USA. They have just released an independent review of the internet-based methods used to measure online advertising’s effectiveness.

This was a very brave move indeed by the IAB, given that these ‘surveys’ consistently claim that online advertising spend is significantly more effective than spend on established media. The IAB across the Atlantic took aim at many of its members’ own feet.

I doubt there was the sound of champagne corks hitting the ceiling when the results came in. Conducted by one of the leading research specialists in the USA, the review concluded that much online effectiveness research is seriously undermined by extremely low response rates, problems of survey design and a lack of evidence that the data are weighted to account for the inherent biases in the system.

Most of these surveys work on an ‘intercept’ approach, which means that respondents are invited to take part in a survey via web pages that are serving the online ads of the brands being evaluated. It is a bit like asking people sitting in Burger King eating Whoppers whether they prefer Burger King and Whoppers to McDonald’s and Big Macs.

Talking of whoppers, I am regularly shocked by how many people in our industry take these studies’ findings seriously. I was at the MRG Conference in London when one such online study was presented. It demonstrated that expenditure on a series of banner ads had been around twice as effective as spend on TV. In a moment of frustration, I asked the media agency presenting the research the following question:

“If, twenty years ago, I had presented research selling the effectiveness of newspaper advertising by saying we had recruited a sample of readers of a newspaper, they had responded to an invitation to take part in a survey that was on the same page as the ad being evaluated, and they had completed the survey in their newspaper before sending it off by post, and the research then concluded that newspaper advertising was by far the most effective for that brand, would I have been taken seriously?”

I never got a satisfactory answer.

Research into advertising effectiveness needs to be scrupulously fair. It needs to be unbiased and comprehensive. We cannot restrict our questions to online panels, as they only represent the 70-odd per cent of the population that is regularly online, and they also skew towards heavier online users. We cannot recruit respondents via the pages on which the advertising being evaluated sits, as that introduces another layer of bias. And we should not even be asking them to complete the survey online, as the context of the questions will add yet another bias towards online.

In short, and in line with the results of the IAB’s investigation, there are far too many biases for the research to be even remotely viable. It is flawed before it starts – and that is before we factor in additional failings such as the short-term nature of the research (some media channels, most notably television, carry on delivering value many months after a campaign ends), or the fact that a single exposure to the online creative is given the same weighting as multiple exposures to other media channels.
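To see how quickly these sampling biases compound, here is a minimal simulation sketch in Python. Every number in it – the share of heavy users, the exposure rates, the intercept rates and the true advertising lift – is a hypothetical assumption chosen purely for illustration, not a real research finding.

```python
import random

random.seed(42)

# Hypothetical population and survey parameters -- illustrative only.
POPULATION = 100_000
HEAVY_USER_SHARE = 0.2   # assumed share of heavy online users
BASE_RECALL = 0.10       # assumed ad recall among the unexposed
TRUE_LIFT = 0.05         # assumed true lift from exposure, same for everyone

def simulate_person():
    heavy = random.random() < HEAVY_USER_SHARE
    # Heavy users view far more pages, so they are more likely both to be
    # exposed to the banner and to be served the survey invitation.
    exposed = random.random() < (0.9 if heavy else 0.2)
    recalls = random.random() < BASE_RECALL + (TRUE_LIFT if exposed else 0.0)
    intercepted = random.random() < (0.05 if heavy else 0.005)
    return recalls, intercepted

people = [simulate_person() for _ in range(POPULATION)]

def recall_rate(sample):
    return sum(recalls for recalls, _ in sample) / len(sample)

intercept_sample = [p for p in people if p[1]]

print(f"Recall in the whole population:  {recall_rate(people):.3f}")
print(f"Recall in the intercept sample:  {recall_rate(intercept_sample):.3f}")
```

Under these assumed numbers, the intercept sample reports a noticeably higher recall figure than the population it claims to represent, purely because of who was invited – and that is before a single leading question has been asked.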

This is an issue that Ipsos has already raised in the UK. Studies that previously always demanded intellectual rigour and methodological discipline have been dumbed down, their commissioners seduced by the instant ‘hit’ of data showing the results that were wanted in the first place. In the area of advertising effectiveness, which should surely be the most rigorous and scientific of all advertising research activities, we have developed an approach that offers plenty of data but very little insight, and that is fundamentally wrong.

But it is crack cocaine, so it is hard to wean people off it. So, well done to the Stateside IAB for tackling this issue – as it puts much of its own supporters’ data under the spotlight – and for offering rehab. Media research relies on mutual trust between the commissioner of that research and its audience, and it is only by taking a leadership role, as the IAB has done in the States, that we can ensure the many advantages of online research are not misused and that we have a set of insights we can trust and use.


  • http://www.adalyser.co.uk Sam Mikkelsen

    Interesting stuff Dave. I think it’s natural for industries to fight their own corner and fly the flag for themselves, but you are right: there needs to be an unbiased look and a neutral perspective on things. There are a lot of digital agencies out there with great methods and experience, who are brilliant at knowing where and how to engage people, but reliance on one media type for any campaign is dangerous; in my humble opinion you need a good mix, depending on the company and the product.

    All media types work to a certain degree, but there need to be different touchpoints to properly engage with the customer; it can’t all come from the web alone!

  • Tom Sainsbury

    Good article Dave. Totally agree that online surveys never fail to show how well online advertising has worked. However, there are so many inconsistencies in data analysis and research analysis across media that it seems a little biased for a TV man to focus only on online’s failures.

    When you look at POSTAR in outdoor, BARB in TV and the claims made by all forms of media using research panels made up of only a few hundred people, you start to realise that the real problem lies in a dependence on detailed data and results that are all too often impossible to measure accurately. When you start looking at the margins of error in most pieces of research, you realise very quickly that going beyond the most basic facts will generally result in inaccuracy and wildly inflated claims.

  • http://www.thinkbox.tv David Brennan

    Thanks Sam & Tom for your considered comments. I think there is a real difference when media evaluation research uses one of the media being evaluated as both recruiter and research environment, regardless of the other biases I have mentioned. The problem is that we have never before had to worry about the media channel also being the research channel. That is not something that affects the media currencies such as POSTAR or BARB, by the way, which actually put a lot of effort into being both representative and media-neutral. I also think there is a secondary issue regarding how little rigour we put into assessing online research methodologies, maybe because there is so much more of it to assess these days…
