
A cornerstone of agile business practices and iterative optimization, A/B testing allows digital publishers to compare two different versions of any given campaign or setup element. 

By tweaking one variable at a time and moving incrementally toward better performance, A/B testing makes it possible to run experiments in a perfectly controlled manner and remove the guesswork from the equation.

As McKinsey explains, "If an organization can atomize a single process into its smallest parts and implement advances where possible, the payoffs can be profound. And if an organization can systematically combine small improvements across bigger, multiple processes, the payoff can be exponential."

At least that's the theory. With complete control and perfect oversight, it's argued that even tiny differences can be serially optimized to deliver a tremendous cumulative impact. The reality, however, is seldom so pristinely orchestrated or cleanly cut.

Lamenting A/B Testing

In contrast to the theory, the traditional approach to A/B testing can be imprecise and slow. Imprecise for a few reasons:

  • Complexities in managing the split for even and equal distribution
  • Difficulties involved in normalizing result values for 1:1 analysis
  • Over-focus on the setup and under-focus on assessment
  • Challenges involved in accounting for nuance
  • Complications in measuring small differences

And slow for all the same reasons. As a result, A/B testers typically take a very scientific approach to how they set up their experiments and a very improvised approach to how they monitor, manage, and learn from them. 

But that imprecision and slowness come at a tremendous cost. You're running those tests to improve performance and ultimately pad the bottom line. Every wrong conclusion takes money out of your pocket. And every day that passes without declaring a winner and implementing changes accordingly is another day you won't see those gains.

Fortunately, the next generation of A/B testing has arrived, promising to overcome these challenges and take optimization to the next level.

oolo's Fix and How It Ticks

oolo monitors all active A/B tests and alerts users whenever a test is ready for review, directing them to the conclusion analytics portal. The goals are several:

  1. Spare you from reviewing tests that lack a sound basis for conclusion — saving time & effort
  2. Leverage statistical modeling to help draw conclusions faster — accelerating improvements
  3. Apply advanced analytics to ensure accurate winner selection — saving you money
  4. Provide a single view through which to compare all KPIs and drill down according to relevant test dimensions (e.g. country) — easing follow-up

Supporting those goals, oolo's A/B test monitoring not only identifies the winning version but also highlights the confidence score and estimated impact. In this way, oolo helps keep A/B testing scientific from start to finish.

If the test has not yet matured and the sample size is not sufficiently robust or representative, it is labeled "TBD" to indicate that results are still pending. If the difference between A and B is statistically insignificant, it is labeled "no clear winner" to indicate that results were inconclusive or non-meaningful. Most of the time, though, results will be conclusive and you'll see one version, A or B, labeled "winner".

Winner determinations account for factors such as:

  • The cumulative difference between A and B
  • The consistency of differing results
  • Test length and sample size
  • The direction and extent of changes
  • The likelihood of bottom-line conclusions changing with additional run time
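
To make that a little more concrete, here is a deliberately simplified sketch of how such a labeling rule could look. The maturity threshold, the significance level, and the use of a Welch's t-test below are illustrative stand-ins, not the production model.

  # Illustrative labeling rule only; the threshold, alpha, and test choice are assumptions
  from scipy import stats

  MIN_SAMPLES = 1000   # hypothetical maturity threshold per version
  ALPHA = 0.05         # hypothetical significance level

  def label_test(revenue_a, revenue_b):
      """revenue_a / revenue_b: per-impression (or per-user) revenue samples."""
      # Test length and sample size: not enough data yet
      if min(len(revenue_a), len(revenue_b)) < MIN_SAMPLES:
          return "TBD"
      # Cumulative difference and its consistency, checked with Welch's t-test
      result = stats.ttest_ind(revenue_a, revenue_b, equal_var=False)
      if result.pvalue >= ALPHA:
          return "no clear winner"
      mean_a = sum(revenue_a) / len(revenue_a)
      mean_b = sum(revenue_b) / len(revenue_b)
      return "winner: A" if mean_a > mean_b else "winner: B"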

oolo leverages a combination of statistical modeling, sequential probability ratio tests (SPRT), and proprietary heuristics (validated and refined over time) to help publishers make better-informed decisions faster, increasing monetization in the process.
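
For readers curious about the sequential piece, here is a textbook Wald SPRT sketched for a simple yes/no metric (say, whether an impression converted). The baseline rate, target uplift, and error levels below are illustrative, not the production settings.

  # Textbook Wald SPRT; p0, p1, alpha, and beta are illustrative assumptions
  import math

  def sprt(observations, p0=0.020, p1=0.025, alpha=0.05, beta=0.05):
      """observations: a stream of 0/1 outcomes. Returns (decision, samples used)."""
      upper = math.log((1 - beta) / alpha)   # crossing this accepts H1 (uplift)
      lower = math.log(beta / (1 - alpha))   # crossing this accepts H0 (no uplift)
      llr = 0.0
      for n, x in enumerate(observations, start=1):
          # update the log-likelihood ratio with each new observation
          llr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
          if llr >= upper:
              return "accept H1 (uplift)", n
          if llr <= lower:
              return "accept H0 (no uplift)", n
      return "continue (TBD)", len(observations)

The point of the sequential approach is that the test can stop as soon as the evidence crosses either boundary, rather than waiting for a fixed sample size, which is what makes earlier conclusions possible.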

The system even accounts for data complexities and nuances in distribution. For example, instead of comparing tested versions on raw, head-to-head revenue, oolo automatically generates a "normalized revenue" metric to compare versions that didn't receive even play.
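
As a simplified illustration (not the exact formula the system uses), imagine normalizing revenue by the traffic each version actually received, eCPM-style:

  # Hypothetical normalization: revenue per 1,000 impressions, so an uneven
  # traffic split doesn't distort the comparison
  def normalized_revenue(revenue, impressions, per=1000):
      return revenue / impressions * per

  # e.g. A earns $900 on 60,000 impressions and B earns $700 on 40,000:
  # raw revenue favors A, but normalized revenue favors B
  norm_a = normalized_revenue(900, 60_000)   # 15.0
  norm_b = normalized_revenue(700, 40_000)   # 17.5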

Putting The Shine On the Bottom Line

oolo’s automated A/B test monitoring allows for faster iteration cycles and more accurate conclusions, helping publishers identify test winners even when results look comparable to the naked eye.

But the value of this new tool in the publisher arsenal goes beyond that. Properly monitoring and managing a single A/B test may seem like a fairly straightforward, even simple, thing to do, but when you're running 5+ concurrent tests, it can feel almost impossible to maintain order. And it becomes almost inevitable that things will fall through the cracks.

For large, mature operations running a multitude of tests at any given moment, it's critical to have an early alert system notifying you whenever an experiment is ready to review. With oolo, you can feel confident that nothing will be overlooked and that no test will run a second longer than necessary. And that makes a very big difference.

Continuing a test after there is already a clear winner means that potential improvements (revenue uplift) are lost. You're leaving more money on the table with each and every day you keep tests open after they've already told their story. 

To put the concept into context, have a look at this recent example (pictured) from one of our customers. Here the alert pertains to a long-running test on US Interstitials. oolo has identified group B as the clear winner.

As soon as the user applies version B to the entire test base, the business stands to gain an extra $2K per month. Here, oolo was able to draw this conclusion with a 99% confidence score after only 5 days. Prior to using our A/B test monitoring, this customer would run tests for a little over 15 days, on average. With oolo, the average test result is delivered within 8 days. In other words, they've essentially cut test durations in half, translating to considerable revenue uplift.

In this case, if we conservatively assume that the customer would have taken 10 days to draw the same conclusion, they'd have forfeited $360. Multiply that by the 20-some tests this client runs every month and you can appreciate the direct financial benefits of oolo's A/B test monitoring (the rough math is sketched below). But not all the benefits are financial.
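
Here's the rough structure of that back-of-the-envelope math (the figures above are rounded, so the exact dollar amounts shift a little with the assumed month length):

  # Rough structure of the example; inputs are the rounded figures quoted above
  monthly_uplift = 2_000                            # extra revenue per month once B is applied
  daily_uplift = monthly_uplift / 30                # roughly $67 per day
  days_saved = 10 - 5                               # conservative 10-day baseline vs. 5 days with oolo
  forfeited_per_test = daily_uplift * days_saved    # in the ballpark of the $360 cited
  tests_per_month = 20
  potential_monthly_cost = forfeited_per_test * tests_per_month   # several thousand dollars per month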

oolo's A/B alerts streamline the entire test monitoring process, saving valuable time and preventing a lot of unnecessary hassle. In effect, they make overseeing experiments error-proof and nearly effortless, so you don't have to refer back to your notebook, run through a hundred filters, or make any assumptions.

The Situation In Summation

A/B tests are a major avenue of revenue growth for digital publishers, but they're still subject to a lot of delays and too much eyeball-based assessment. They can also be the source of plenty of headaches and frustrations. That is, until now.

With oolo, you automatically get an alert whenever a winner emerges. And since we use SPRT and other proprietary analytical techniques, we deliver conclusions at the earliest possible point and with superior accuracy. Ultimately, that allows you to spend less time running tests and more time benefiting from their lessons. 

Alerts include a confidence score, a view of affected metrics, and the estimated monthly revenue impact, with results adjusted for the relevant testing dimension. The upshot is faster, more impactful iteration cycles, so you can approach your experiments with confidence and calm.

It’s just another way that oolo helps improve the speed and quality of decision-making.


Want to learn more about oolo's smart A/B test monitoring?

 
