In modern advertising automation software, Stop Loss protection is a feature that monitors each individual ad in your campaign and automatically pauses ads that exhibit unacceptable performance. The concept is simple; the benefit is clear. This is a classic example of relying on a computer to do what computers do better than people: apply a logical rule, over and over again, exactly the same way, all day and night, regardless of what’s on TV or what your cat Chipper just knocked over.
Here’s the problem. Sometimes applying a logical rule, over and over again, exactly the same way, no matter what the circumstances, has a disastrous result. This is why human judgment is often beneficial. At Nanigans, we build advertising automation software that has good judgment. We use techniques from Artificial Intelligence and Applied Statistics to give our customers the best of both worlds.
In this two-part series, we will show how our software applies statistical reasoning to the important task of automated Stop Loss protection. Rather than relying on formal derivations and equations made of Greek alphabet soup, we will demonstrate the key concepts via simple experiments with visual results.
Let’s put the discussion of automated Stop Loss into a realistic business context.
Performance marketer Prudence Marshal has a goal of generating registrations for her company’s new trunk club service at $10 per registration. In her plan, Prudence will use the new Custom Audience targeting segments, based on known buyers from the company’s traditional ecommerce site. She will then configure the ad automation software to maximize the number of registrations it can get at an average cost per action (CPA) of $10. Additionally, she will use the Stop Loss feature in the automation software.
Even though the software is already set up to optimize toward a $10 CPA goal, Prudence learned her lesson the last time the developers at the Seattle office accidentally broke the tracking pixels on their site – in the middle of the night.
After formulating her plan, Prudence has a quick strategy session with her colleague Bob Smith. Prudence suggests setting the daily Stop Loss at $13, as a safe backstop that won’t interfere with the primary optimization goals. Bob takes a slow, audible, deep breath through his nose, while leaning his head slightly to one side.
Here comes the mansplaining, Prudence thinks to herself.
“I’m not sure $13 Stop Loss is the best idea. Why take on all that extra risk of high CPAs? You know you don’t want to spend over $10 per reg. So, just set the Stop Loss to $10 too. It’s, like, twice as good…theoretically,” says Bob.
So, who’s right, Prudence or Bob…theoretically?
In statistics, it’s sometimes easier to answer a question by analyzing data rather than by deriving a formula. Furthermore, instead of going through the effort to find, then clean, and then validate a relevant data set, it’s sometimes better to just make up your own perfect data. To support the following analysis, we’re going to generate synthetic click and conversion data with the following assumptions:

- The ad receives a steady 1,000 clicks per hour.
- The cost per click (CPC) is a constant $0.50.
- The ad’s “true” conversion rate is 5%.

Note: This results in daily totals of 24,000 clicks, 1,200 registrations, and $12,000 ad spend.
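A day of this synthetic data can be generated in a few lines of Python. This is a minimal sketch using NumPy; the variable names, the random seed, and the choice of one binomial draw per hour (each click converting independently at the true rate) are our own modeling choices, not a description of our production system:

```python
import numpy as np

rng = np.random.default_rng(42)

CLICKS_PER_HOUR = 1_000   # 24 hours x 1,000 = 24,000 clicks per day
TRUE_CONV_RATE = 0.05     # the ad's "true" conversion rate
CPC = 0.50                # constant cost per click -> $12,000 per day

# One binomial draw per hour: each click converts independently
# with probability TRUE_CONV_RATE.
clicks = np.full(24, CLICKS_PER_HOUR)
conversions = rng.binomial(clicks, TRUE_CONV_RATE)
spend = clicks * CPC

# Hourly CPA, and cumulative CPA over the hours of the day.
hourly_cpa = spend / conversions
cumulative_cpa = spend.cumsum() / conversions.cumsum()
```

Re-running this with different seeds produces different "days," which is exactly the hour-to-hour variation discussed below.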
Based on these assumptions – plus several other assumptions that a significant fraction of statistics professors would support – the hourly conversion rate and cost per conversion data for a whole day might look like this: In both graphs we show hourly metrics plus cumulative metrics over the hours in the day. For illustrative purposes, assumed CPA thresholds of $10 and $13 are also shown on the CPA graph.
First, the conversion rate graphs show that even when the average conversion rate of an ad is truly known to be 5%, individual hours – each with plenty of clicks – can have a wide range of effective conversion rates. Most digital marketers are familiar with the statistical concept of variance, but it is easy to forget just how large its impact on real performance results can be. Even the daily cumulative conversion rate is fairly noisy for most of the day. To reiterate, this is “perfect” data. In the real world, there is no “true” conversion rate for an ad. The average conversion rate is just an estimate that fluctuates over time due to seasonality, time-of-day effects, click reporting lag, click bots, and so on.
Turning to the CPA graphs, the fluctuations look similar to those in the conversion rate graphs, but “flipped.” That’s because CPA is effectively cost per click divided by conversion rate. In this experiment, CPC is treated as constant; if CPC were treated as noisy, as it is in reality, the fluctuations in the CPA graphs would be even greater. Also worth noting is how the CPA threshold lines relate to the cumulative CPA line. Since daily Stop Loss can be described as “pause an ad if its daily average CPA exceeds the threshold,” the comparison of these lines is the key point for the current analysis. In this particular trial, a daily Stop Loss with a $10 threshold would have triggered in the second hour of the day, causing the ad to be paused. That seems overzealous, considering that if the ad had been left active for the rest of the day, its overall CPA would have landed within the campaign goal of $10. This behavior is sometimes called a false alarm, or a false positive.
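The daily Stop Loss rule just described – pause the ad the first time its cumulative daily CPA crosses the threshold – can be sketched as a simple check over cumulative hourly totals. The function name and the zero-conversion handling below are our own illustrative choices:

```python
import numpy as np

def first_stop_loss_hour(spend, conversions, threshold):
    """Return the first hour (0-23) at which the cumulative daily CPA
    exceeds `threshold`, or None if the ad survives the whole day.

    Hours with zero cumulative conversions are skipped, since CPA is
    undefined there.
    """
    cum_spend = np.cumsum(spend)
    cum_conv = np.cumsum(conversions)
    for hour in range(len(spend)):
        if cum_conv[hour] > 0 and cum_spend[hour] / cum_conv[hour] > threshold:
            return hour
    return None

# A hypothetical day with a slow first hour: $500 spent per hour,
# 45 conversions in hour one, then 52 per hour afterwards.
spend = np.full(24, 500.0)
conversions = np.array([45] + [52] * 23)

first_stop_loss_hour(spend, conversions, 10.0)   # fires in hour 0 (CPA $11.11)
first_stop_loss_hour(spend, conversions, 13.0)   # returns None: never fires
```

Note how the same day of data trips a $10 threshold immediately but survives a $13 threshold untouched – a concrete instance of the false alarm described above.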
Although we are avoiding formal equations in this series, it is worth mentioning that one can directly compute the expected rate of false alarms for a given expected conversion rate and threshold.
In the second part of this series, we will continue our experimental approach. We will return to the subject of false alarms as a way to figure out who has better judgment, Prudence or Bob.