
Why Classic A/B Testing Fails

This article was authored by Andres Corrada-Emmanuel, Lead Optimization Engineer at Nanigans, where he concentrates on optimizing the budgeting, bidding, and pacing algorithms for Nanigans’ Ad Engine platform. For 15 years, Andres has researched and developed patented technologies in the fields of speech recognition, information retrieval, machine learning, and computer vision. Andres comes to Nanigans from DataXu, a Demand Side Platform (DSP) company, where he was also Lead Optimization Engineer. Prior to DataXu, Andres was an original member of the Dragon Systems team, which developed the first commercial continuous speech recognizer, technology used in Apple’s iPhone 5. Andres attended Harvard University for his undergraduate degree and the University of Massachusetts at Amherst for his master’s and PhD.

Social media platforms like Facebook are now being asked the perennial advertising question: how effective are my ads? Initial hype and seemingly obvious synergies are giving way to hard questions about how, exactly, advertising on Facebook generates value. According to a New York Times opinion piece, “Can Social Media Sell Soap?”, some are not impressed. The online advertising industry as a whole needs to step up to this challenge and construct new ways to answer this basic question.

Traditional ways to measure ad lift or efficacy are not always up to the task. The classic approach is the A/B test: split your audience into two groups; the A group is exposed to your ads, while the B group sees public service ads (PSAs). Measure the difference in response between the two groups and calculate the “lift” from your ads. Sounds simple. In practice, it is not always that easy.
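For concreteness, here is a minimal sketch, in Python, of the lift arithmetic the A/B approach relies on. The group sizes and conversion counts are hypothetical, purely for illustration.

```python
# A minimal sketch of classic A/B lift measurement. All numbers below are
# hypothetical, chosen only to show the arithmetic.

def conversion_rate(conversions: int, audience_size: int) -> float:
    """Fraction of the group that converted."""
    return conversions / audience_size

def lift(exposed_rate: float, control_rate: float) -> float:
    """Relative lift of the exposed (A) group over the PSA control (B) group."""
    return (exposed_rate - control_rate) / control_rate

# Hypothetical campaign: 100,000 users per group.
rate_a = conversion_rate(conversions=1_200, audience_size=100_000)  # saw brand ads
rate_b = conversion_rate(conversions=1_000, audience_size=100_000)  # saw PSAs

print(f"lift: {lift(rate_a, rate_b):.1%}")  # -> lift: 20.0%
```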

We don’t drive cars in order to measure the speed of different routes. Advertising is the same: we want results, and getting to our destination matters more than measuring how we got there. Traditional A/B testing does not play well with this focus, and many of its assumptions are themselves untestable. Take, for example, the assumption that we can have an “unexposed” audience on Facebook. Really? Users don’t stop seeing ads in other media. What we need is to expand the testing toolkit that marketers can use to understand the role of Facebook advertising in the context of their whole marketing plan.

Additional challenges include finding a split that is balanced with respect to the other characteristics, both market and user characteristics, that affect ad campaign effectiveness; the multitude of problems caused by increasingly sparse data as we move down the conversion funnel; and controlling for possible differences in bid and pacing dynamics across the A/B split.
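As a hedged illustration of the first challenge, here is one common balance check: the standardized mean difference (SMD) of a user characteristic across the two groups. The covariate and its values are hypothetical; in practice you would run such a check for every characteristic you believe affects campaign effectiveness.

```python
# A sketch of one balance check across an A/B split: the standardized mean
# difference (SMD) of a covariate. Values above roughly 0.1 are commonly
# read as meaningful imbalance. The covariate and data are hypothetical.
import statistics

def standardized_mean_difference(a: list[float], b: list[float]) -> float:
    """SMD = (mean_a - mean_b) / pooled standard deviation."""
    pooled_sd = ((statistics.variance(a) + statistics.variance(b)) / 2) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Hypothetical covariate: days since each user's last purchase.
group_a = [3.0, 10.0, 7.0, 21.0, 5.0, 14.0]
group_b = [4.0, 9.0, 30.0, 2.0, 18.0, 25.0]

print(f"SMD: {standardized_mean_difference(group_a, group_b):.2f}")
```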

At Nanigans, we are developing new ways to measure effectiveness. Take, again, the assumption that we need to split audiences at all. Why not show brand and PSA ads to users in the same audience, and measure not lift but the effect the ads have on speeding up users’ actions, based on which ads they happen to see? Some users will first see a PSA ad and then organically convert sometime later. Others will see a brand ad and also convert afterwards. Do the brand ads make users convert faster? This is a measurement methodology that complements the reality of marketing campaigns: it measures how the Facebook ad helped drive value within the context of the overall campaign, without taking sole credit for driving the action.
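To make the idea concrete, here is a minimal sketch of that comparison. The hours-to-conversion data and the choice of a one-sided Mann-Whitney U test are illustrative assumptions, not a description of Nanigans’ actual methodology.

```python
# Does seeing a brand ad speed up conversion? For each converter we record
# hours from first ad impression to conversion, split by which creative the
# user happened to see first. All users are in the SAME audience; no split.
# Data and test choice are illustrative assumptions.
from scipy.stats import mannwhitneyu

# Hypothetical hours-to-conversion for converters in one audience.
saw_brand_ad = [12.0, 30.0, 8.0, 22.0, 15.0, 40.0, 18.0]
saw_psa_ad = [35.0, 50.0, 28.0, 60.0, 44.0, 33.0, 70.0]

# One-sided test: are brand-ad conversion times stochastically smaller?
stat, p_value = mannwhitneyu(saw_brand_ad, saw_psa_ad, alternative="less")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Brand ads are associated with faster conversions in this sample.")
```

Note that this compares speed of conversion, not whether conversion happened at all, which is exactly the shift in question the paragraph above describes.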

Interested in finding out how Nanigans’ ROI-based advertising solution for Facebook can work for your company? Contact us today!
