
Why Bake-offs Are Ruining Ad-Tech

Written by: Cheryl Morris, VP Marketing

As a marketing leader, have you ever been tasked with selecting a new ad-tech vendor or platform? If so, you’ve gone through the process of identifying a few different options and then considered testing them against each other to see which performs best. Typically, the testing period, or “bake-off,” pits multiple competitors against each other at minimal budgets over a short period of time. Whichever company performs best is ultimately the one chosen by you, the customer. You’ve probably even said something similar to the following to B2B sales reps:

“We’re currently evaluating a few vendors, so we’d like to test your performance against Company X and Company Y. Whichever platform performs the best is the one we’ll select.”

While the concept of a bake-off is sound, there are so many moving parts to consider that the traditional bake-off has become obsolete. Not buying it? Let’s consider the state of ad-tech bake-offs.

1.) Short-term Test: Most bake-offs last for just a few weeks. Short-term campaigns are understandable for event-based marketing, perhaps, but the vision is often too shortsighted. Advertisers need to think bigger and longer-term, focusing on stability, scale, and the direction of downstream revenue/value over time. In a one-month test, for example, there’s a greater emphasis on acquisition early in the month rather than at the end.

The problem is that a customer acquired on day 1 of a month (or test) will continue to buy during the rest of the month, completing additional purchases that positively impact their lifetime value. If that same customer is acquired on the last day of a month (or test), then their measured value is simply that of a single day. Ideally, advertisers should focus on a proper rolling measurement window across all acquired customers and on revenue-optimized campaigns.
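The effect of the measurement window can be sketched in a few lines of Python. This is a hypothetical illustration: the purchase amounts, dates, and function names are invented for the example, not taken from real campaign data.

```python
from datetime import date, timedelta

def value_in_fixed_window(purchases, window_end):
    """Revenue credited when every customer is cut off at the test's end date."""
    return sum(amount for day, amount in purchases if day <= window_end)

def value_in_rolling_window(purchases, acquired, window_days=30):
    """Revenue credited in a fixed-length window after each customer's own
    acquisition date, regardless of when the test ends."""
    cutoff = acquired + timedelta(days=window_days)
    return sum(amount for day, amount in purchases if day <= cutoff)

def purchase_stream(acquired):
    """Hypothetical customer behavior: $50 on the acquisition day,
    then $20 in each of the next four weeks."""
    return [(acquired, 50.0)] + [
        (acquired + timedelta(weeks=w), 20.0) for w in range(1, 5)
    ]

test_end = date(2015, 6, 30)
early = purchase_stream(date(2015, 6, 1))   # acquired on day 1 of the test
late = purchase_stream(date(2015, 6, 30))   # acquired on the last day

# Fixed test window: the late customer looks far less valuable.
print(value_in_fixed_window(early, test_end))   # 130.0
print(value_in_fixed_window(late, test_end))    # 50.0

# Rolling 30-day window: identical behavior yields identical value.
print(value_in_rolling_window(early, date(2015, 6, 1)))    # 130.0
print(value_in_rolling_window(late, date(2015, 6, 30)))    # 130.0
```

With a fixed test window, the day-30 customer appears to be worth $50 while the day-1 customer appears to be worth $130, even though their behavior is identical; a rolling 30-day window credits both with $130.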

2.) Deceptive Results: During a short-term bake-off, performance can appear better than it actually is if any of the following tactics are in play.

A.) Retargeting: While retargeting is a wonderfully efficient form of customer acquisition, it cannot scale an ad campaign on its own. During a bake-off, however, with minimal test dollars, it’s possible to run the majority of a test campaign with retargeting and show exceptionally efficient results. If the business is then won, and the test budget accounted for only a small percentage of the actual budget (which is probable), then you can be sure that future cost metrics will suffer, because retargeting won’t be able to generate the volume that’s consistently needed.

B.) CPA: The majority of ad-tech bake-offs are evaluated on CPA performance. While this is understandable given the short-term testing period, CPA can often lead to arbitrage, which is one of the core problems with ad-tech performance measurement today. Here at Nanigans, we’ve officially pronounced CPA dead and renamed it Customer Permitted Arbitrage. In short, arbitrage is a way for vendors to show strong short-term performance that benefits them while producing dismal performance for the customer over time.

C.) Pricing & Profitability: One of the more unfortunate realities of the ad-tech industry concerns pricing during a bake-off. As a customer, make sure you know with 100% certainty how pricing works before, during, and after the bake-off, and understand where your vendor is making their money. This is nothing new, unfortunately, but the fact that it persists today is why we felt compelled to highlight these tactics. Both AdAge and AdExchanger recently published articles citing practices in which vendors forgo profits during a bake-off and then, once they win the business, “adjust” their way of doing business to increase their revenues. With a CPA-based model, it’s quite simple for vendors to make their profit margins back by acquiring cheaper conversions at the expense of the customer.

Advertisers should be thinking about the lifetime value of their return on investment and what the compounding value of their ad spend looks like over time. By measuring on a short-term CPA basis, advertisers open themselves up to enormous risk, as CPA results can easily be manipulated in the best interest of the vendor, not the customer.

3.) Competitive Evaluation: When testing multiple vendors during a bake-off, the playing field is often not level for all parties involved. It’s imperative that all vendors in a bake-off have the same opportunity to perform. If not, how can you tell who truly performed “the best”? Questions to consider include:

A.) Was the exact same detail and opportunity given to each vendor involved?

B.) Does each vendor have the same opportunity for data integration?

C.) Are all vendors truly starting on a level plane?

In the spirit of competition and transparency, we here at Nanigans always prefer that the vendors we compete with be given the same opportunities we are; otherwise, we don’t consider the test worth the time invested for any of the parties involved.

4.) Budgets: When it comes to bake-off budgets, a “minimum” is often assigned, perhaps $10k (a common test budget). The reasoning behind this budget is usually a combination of risk management and arbitrary allocation. While this mindset can help guard budgets in the short term, the way to test with long-term vision and scale in mind is to identify the specific segments to test based on pre-selection practices (which can vary by channel and buyer). A few practices to consider are as follows:

A.) Avoid Misleading Data: Smaller budgets can be misleading. If you are truly testing an optimization platform, it’s essential to focus on scale. If you’re testing a small, precisely targeted segment and it performs well, that doesn’t mean it will successfully scale. If you spend $100 on a segment, for example, and receive 2 acquisitions, will this number of acquisitions increase linearly as spend increases? Is the sample size truly representative of the entire inventory and targeting set?

B.) Test Affinity Models: Building affinity models (section 2) around the best-performing segments will help give you an understanding of performance at scale. In addition, affinity models will help with market sizing at the macro level and illuminate the size of the market that will perform within your goal.

C.) Establish Statistical Significance: The simplest way to determine statistical significance is to use a binomial test built around the performance lift you want to detect between positive and negative segments. For example, estimate a CTR and a post-click conversion rate; the resulting calculation will show you the exact number of impressions you need to serve in order to reach significance. By incorporating cost metrics (CPM or CPC) into the equation, you can also derive the budget needed to run an accurate test. (Nanigans can help you with this part, and we also built a calculator for prospective customers.)
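As a rough sketch of that calculation, here is one common way to size such a test in Python, using the standard two-proportion normal approximation to the binomial test. This is an illustrative implementation of the general technique, not the Nanigans calculator; the example rates and the $5 CPM are assumptions.

```python
import math
from statistics import NormalDist

def required_impressions(ctr, cvr, lift, alpha=0.05, power=0.80):
    """Impressions needed per vendor/segment to detect a relative `lift`
    in conversions-per-impression, using the two-sample normal
    approximation to the binomial test."""
    p1 = ctr * cvr           # baseline conversions per impression
    p2 = p1 * (1 + lift)     # rate we want to be able to distinguish
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2
    n = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

def required_budget(impressions, cpm):
    """Translate the impression count into dollars at a given CPM."""
    return impressions / 1000 * cpm
```

For instance, detecting a 25% lift at a 1% CTR and a 2% post-click conversion rate (`required_impressions(0.01, 0.02, 0.25)`) calls for on the order of a million impressions per segment, so it’s worth checking that figure against your CPM before agreeing to a fixed test budget.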

So are traditional bake-offs fundamentally broken? We think so, and we feel it’s time for a change in favor of the customer. Next time you’re involved in a bake-off (or even as you think about the last bake-off you were involved in), in addition to the preceding points in this article, ask yourself questions such as:

A.) Are you thinking about long-term integration and scale?

B.) Are you measuring the proper performance metric for your business or are you being deceived by CPA?

C.) Are you using the right window to measure success?

It’s the little questions you ask yourself that will ultimately determine the long-term success of your marketing department. Seemingly small details in ad-tech are like shifting tectonic plates: without the proper methodology, strategy, and execution, it doesn’t take much to produce cataclysmic shifts in performance over time.
