Content may be king, but data is queen. And as anyone who plays chess knows, the queen holds the power. From marketing to monetization and user experience, from retention to engagement and content creation — data drives decisions and takes the business in tow.

Today, digital publishers generate more data than ever before. In fact, they generate so much data that it can easily overwhelm operations teams and sabotage the best efforts to extract and enact insights within an addressable timeframe.

Whereas the main challenge of the past may have been collecting data and ensuring access, today the problem has shifted to data observability. Even with so many great data collection and reporting solutions available, it's estimated that most businesses actually observe only 5-10% of their data.

The fact is, no matter how a dashboard slices or dices your data, you'll still only get human eyes on a small fraction of it. Which is why automated alerts are so mission-critical.

Much the Same Whether Home-Built or BI Add-On

It's no secret that effective growth and monetization relies on exacting oversight and continuous fine-tuning. To wit, digital publishers must inspect, dissect, and operationalize data at a breakneck pace. Unfortunately, humans just aren't capable of thoroughly and accurately health-checking thousands upon thousands of daily data points in view of seasonality and normal variance.

For an operation of any reasonable size, data oversight will almost always become a real struggle. In fact, pursued through traditional means, it's all but impossible to ensure timely, reliable, and comprehensive data monitoring at all times. And the more that oversight loosens, the more things slip through the cracks.

It's obvious that things can't be fixed or optimized when they pass altogether undetected. Less obvious but even more significant is the fact that speed matters. Even if you're sure to catch the issue in the end, finding it quicker and with more of the relevant context can make a world of difference.

And detection is just the tip of the iceberg. To maximize revenue, you need to be able to investigate and intervene as quickly and effectively as possible.

It should come as no surprise then that businesses today invest heavily in sophisticated business intelligence (BI) tools and whole departments dedicated to the task of analyzing their business data.
When it comes to alerting, most companies will either customize the built-in alerting capabilities of their BI solutions, or they'll build their own solutions entirely.

Generally, data alerting systems rely on if-then rules engines and thresholds. It's a smart way to get a handle on data mountains you couldn’t possibly work through stone by stone and grain by grain. At the same time, it’s far from a perfect system. It's noisy and near-sighted.
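As a sketch, such a rules engine boils down to a list of hand-set thresholds checked against incoming values. The metric names and cutoffs below are hypothetical, not drawn from any particular BI product:

```python
# A bare-bones if-then rules engine. Metric names and thresholds are
# hypothetical, for illustration only.

def check_rules(metrics: dict) -> list:
    """Fire an alert for every metric that crosses its fixed threshold."""
    rules = {
        "ecpm": lambda v: v < 1.50,           # alert if eCPM drops below $1.50
        "fill_rate": lambda v: v < 0.80,      # alert if fill rate falls under 80%
        "impressions": lambda v: v < 100_000, # alert if daily volume collapses
    }
    alerts = []
    for metric, violated in rules.items():
        if metric in metrics and violated(metrics[metric]):
            alerts.append(f"ALERT: {metric} = {metrics[metric]}")
    return alerts

# A normal seasonal dip and a genuine problem both trip the same blunt wires.
print(check_rules({"ecpm": 1.20, "fill_rate": 0.91, "impressions": 85_000}))
```

Every check is independent and every threshold is static, which is exactly where the noise and near-sightedness come from.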

You’ll miss things that don't appear in the expected places. And even when you do catch things, you'll get none of the important context you need to make sense of and sort out the situation. For that you'll still need to dig. Manually. 

Messy and Maladroit Data Alerting Systems

The problem with in-house and standard BI alerting systems is that they're inherently messy. They rely on rigid if-then rules and thresholds that crumble in the face of complexity and nuance. And when you have a large number of operational and technological interdependencies, competing internal interests, and no small amount of outside factors, nuance is the name of the game.

As it's adapted to new scenarios, rule logic grows more complicated and unruly. That leads to unintended knock-on effects only discovered later down the line. In an effort to achieve end-to-end coverage, the convoluted logic can easily become self-defeating.

And even when the logic is sound, it's rigid. It cannot muster the flexibility of real intelligence and it cannot look even an inch beyond its predefined purview. When reality is so dynamic, such a rigid dragnet will never be capable of providing truly comprehensive coverage.

Take seasonality, for example. It's a familiar case to any experienced data analyst. When most people think of seasonal factors, they think of things relating to winter, summer, spring or fall. Or maybe they think of sports seasons. But for data operations teams, it's those things and more. It can be more localized and it can be much more nuanced.


For example, there may be a cyclical pattern in how performance ebbs following a flow, regardless of the flow's specific circumstances. Or there can be regular trends in relation to the days of the week, the months of the year, lunar holidays that don't hold a fixed place within the Gregorian calendar, political events, current events, etc. There can be all of those things at once or some combination of them, each nesting within — offsetting and compounding — the other.

To account for such factors, you'd not only have to look at your data in terms of its placement in the sweep of the year or season, but consider the sweep of all the different micro and macro ecosystems that it's entangled with.

Understanding seasonality is important since, when we look at data, we're looking for indications of problems or opportunities; the types of things that betray their existence through abnormal data points. When you don't have a firm grasp on things like seasonality, you can't properly set expectations or define what would be normal.

Now, imagine trying to account for rich real-world seasonality using only rule-based logic. You can take a floating time-frame approach, comparing current numbers to recent norms or historical averages. But such configurations — well reasoned though they may be — still leave the system blind to monthly and event-based seasonality patterns, leading to missed issues and false alarms.
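To make the blindspot concrete, here's a toy floating-window check using hypothetical revenue figures. It flags a perfectly normal weekend dip as an anomaly, yet misses a genuine weekday drop once weekend values inflate the window's variance:

```python
# Toy floating time-frame check: compare today's value to the mean and
# standard deviation of a trailing window. All figures are hypothetical.
import statistics

def floating_window_alert(history, today, k=3.0):
    """Alert when today deviates more than k sigmas from the window mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(today - mean) > k * stdev

# Mon-Fri revenue in some unit; weekends routinely dip to ~60.
weekdays_only = [100, 102, 98, 101, 99]
print(floating_window_alert(weekdays_only, 60))   # Saturday's normal dip -> false alarm

# Widen the window to two full weeks and weekend noise swamps the signal.
two_weeks = [100, 102, 98, 101, 99, 60, 58] * 2
print(floating_window_alert(two_weeks, 80))       # real 20% weekday drop -> missed
```

Tuning `k` or the window length just trades one failure mode for the other; neither setting knows what day it is.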

Of course, the logic could always be upgraded to add more sophistication and account for additional situations, but it's still going to be fundamentally flawed in the sense that it lacks contextual intelligence and fluidity. 


That type of non-dynamic logic quickly weakens in the face of real-world business complexities. And the more you try to build context-awareness into those rules and thresholds, the more the system begins to show its limitations and break down.

And that's all aside from the fact that managing and maintaining such a complex and growing logical mesh is a time-consuming nightmare. Keeping the system humming and properly calibrated requires an enormous amount of ongoing attention. And still issues are missed.

As a workaround to this complexity and to mitigate the risk of missing things, many opt to simply configure their alerting in a trigger-happy manner. It's an approach that typically backfires, generating a flood of low-quality alerts and desensitized users.

There Is An Alternative

Alerts are so important that we willingly ignore all the ways that our data alerting systems fail us. But that's just not good enough. Leaving aside the false alarms and missed issues, even at their best, incumbent alerting systems only give you a springboard for open-ended investigations. That might be of some use, but it's just as likely to send you chasing your tail after something that isn't actionable or isn't impactful.

What we really need is trusted and timely detection along with the context needed for correction/optimization.

What separates oolo from other data alerting systems is its ability to deliver the scalability, speed, and automatability of AI-powered analytics together with the context-awareness, situational intelligence, and operational expertise of an experienced professional. It's anomaly detection with a built-in "business brain".

That unique combination allows oolo to avoid false alarms, never miss an issue, and pull the story from the statistics to output clear-cut opportunities for improvement.


Crude vs. Clever Data Alerting Systems: Key Differentiators

For digital publishers, moving from comparatively crude alerting to something like oolo can be transformative. It's a change that breaks down along 4 main lines of differentiation.

1.  Rule violation vs. Anomaly detection

Comparing in-house data alerting to oolo, the thing that sticks out most prominently is the difference between the trigger mechanisms. While traditional alerting architectures are designed to generate alerts when rules are violated, oolo generates alerts when anomalies are detected. That may sound like a trivial difference, but it's really not.

One approach is fixed and the other is flexible. One is manually defined and the other is situationally defined. One takes a snapshot and the other gives a continuous relay. One is myopic and one is panoptic. One falters in the face of seasonality and natural fluctuations while the other thrives on them. One is liable to miss things and the other ensures that nothing ever slips by unseen.

Rule-based alerts only cover the data expressions and relationships you tell them to. And since most data alerting systems take an isolated approach — measuring each metric independently — they cannot properly unpack data events in the context of their interconnectedness. As a result, they produce too many false positives, too many irrelevant alerts, and nothing approaching any sort of root cause insight or prescriptive guidance.

In the world of analytics, alerts take 3 distinct forms — descriptive, predictive, and prescriptive. Descriptive alerts are considered the most basic, with predictive alerts bringing an additional layer of utility and sophistication. Prescriptive alerts, meanwhile, represent the ideal.

Measured according to this yardstick, incumbent data alerting systems barely register at all. They are more detective than descriptive and even by that standard they frequently fall short.


Not so oolo, which continuously examines the entire datascape from end to end — with an understanding of what things should look like and the ability to compare actual data points to expectations. oolo looks at everything and surfaces irregularities worth your attention — regardless of whether you'd think to look at them or suspect them of representing potential performance inflection points.

oolo also understands how different data permutations and metrics are inter-related, allowing it to look at everything — every single data point — in view of all cross-correlated data points. Powered by machine learning, oolo forecasts millions of precise data values that are converted into predictive ranges through linear regression models. By comparing the predictive range to live results, oolo is able to instantly pinpoint any irregular data permutation in view of seasonality, historical performance, recent performance, external influences, and factor entanglement.
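The mechanics of that forecast-and-compare loop can be sketched in miniature. This is a toy stand-in (oolo's actual models aren't public) that fits a trend plus day-of-week seasonality with ordinary least squares and flags live values falling outside the resulting predictive range:

```python
# Toy forecast-and-compare loop on synthetic data: weekday revenue ~100,
# weekend revenue ~60, slight upward trend, mild noise. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(28)
seasonal = np.where(days % 7 >= 5, 60.0, 100.0)          # weekend dip
history = seasonal + 0.5 * days + rng.normal(0, 2, 28)   # trend + noise

# Design matrix: intercept, linear trend, one-hot day-of-week (one day dropped)
X = np.column_stack([np.ones(28), days] +
                    [(days % 7 == d).astype(float) for d in range(6)])
coef, *_ = np.linalg.lstsq(X, history, rcond=None)
resid_std = np.std(history - X @ coef)

def predictive_range(t, k=3.0):
    """Forecast day t and widen the point forecast into a k-sigma range."""
    x = np.array([1.0, t] + [float(t % 7 == d) for d in range(6)])
    pred = x @ coef
    return pred - k * resid_std, pred + k * resid_std

lo, hi = predictive_range(28)      # next Monday: expected well above 100
print(lo <= 60 <= hi)              # a weekday reading of 60 falls far outside
```

The point of the sketch: once the model carries the seasonality, the same reading that fooled the floating window (60 on a Saturday is fine, 60 on a Monday is not) is resolved correctly without any hand-set threshold.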

By contrast, since rule-based alerting systems cannot muster that type of holistic perspective or dynamic oversight, they come tuned to the most common problems in their most common expressions. It's a reasonable enough approach, but it also creates significant blindspots and — perhaps even more dangerously — a false sense of security.

2.  Post-validation vs. Pre-validation

Whether it's an in-house build or a dashboard customization, conventional data alerting systems all work in pretty much the same way. When incoming data satisfies pre-configured parameters, an alert is generated and pushed to the user. With oolo though, that's not quite how it works. There's an extra step between when an alert is generated and when it's pushed: validation.

The validation stage is where oolo's "business brain" and alert dispatcher come into play. And it's where oolo creates tons of added value.


Of course, traditional data alerting systems also have a validation stage. Critically though, those systems validate only after the alert is sent and by way of the human on the other side of the screen. The user has to assess the operational relevance, business impact, and fixability. That can be a time-consuming task, and with many assessments resulting in a no relevance or negligible relevance determination, it can feel like time poorly spent.

Given how stretched human resources are within most monetization and growth teams, that's a luxury publishers can ill-afford. 

It's part and parcel of the flooding and desensitization problem. When alerting systems work in this manner, they fail to serve their purpose of lightening the load. To the contrary, they rely on manual workflows that are exploratory by nature and can stretch out for extended periods of time. It's wildly inefficient and it even hurts morale. While the team runs down leads and works to cobble together an explanation for the issue, they’re not doing any of the dozen other tasks required of them.

3.  Alerting on data points vs. Alerting on incidents

Traditionally, alerting systems will notify the user of each individual instance where the defined parameters are met. That might make sense at first glance, but in practice it's a problem. It creates a really noisy monitoring ecosystem that adds confusion and slows things down.

Since so many key metrics are operationally inter-related, any given issue will show up in the data trail multiple times. Every addressable problem or opportunity might correlate to a dozen separate anomalies — resulting in a barrage of notifications. But you don't really care about each individual anomalous expression. What you really want to know is why it matters, what caused it, and what you can do about it. And when all the different pieces of the puzzle are scattered and interspersed with irrelevant items, it'll take you a lot longer to put the picture together.

This is why oolo takes a different approach. Instead of alerting you to each instance in which alerting conditions are satisfied, it groups all related anomalies into a single alert story. Then from within the alert, you can explore all the impacted metrics, dimensions, and permutations.
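A toy version of that grouping step looks like the following. The metric names and the dependency map are hypothetical; the idea is simply to walk each anomalous metric back to its upstream driver and bundle everything sharing a root into one incident:

```python
# Toy incident grouping: collapse related anomalies into one alert story
# per root cause. The dependency map below is hypothetical.
from collections import defaultdict

# child metric -> its upstream driver
DRIVERS = {
    "revenue": "ecpm",
    "ecpm": "fill_rate",
    "arpdau": "revenue",
}

def root_of(metric):
    """Walk the dependency chain back to the first domino."""
    while metric in DRIVERS:
        metric = DRIVERS[metric]
    return metric

def group_anomalies(anomalies):
    """Bundle anomalous metrics into one incident per root cause."""
    incidents = defaultdict(list)
    for metric in anomalies:
        incidents[root_of(metric)].append(metric)
    return dict(incidents)

print(group_anomalies(["revenue", "ecpm", "arpdau", "retention"]))
# four anomalies collapse into two incidents: one rooted at fill_rate,
# one standalone rooted at retention
```

Four raw notifications become two stories, and the user starts from the fill_rate domino rather than from the revenue symptom.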


To add even more context, oolo labels each alert according to the applicable use case and category, whether it's about performance, discrepancy, user acquisition, A/B testing, setup, versioning, or what have you.

In this way, oolo is able to keep the monitoring environment pristine and keep your attention on what truly matters. Since data relationships are thoroughly mapped, oolo is not only able to group anomalies into coherent stories, but trace each incident back to the fall of the very first domino in the chain.

That context-awareness allows us to unify as many as 50 different anomalies into a single alert and ensure that you have all the information you need (Setup, Traffic, Delivery) in order to follow-up. It's also what allows oolo to avoid confusion from:

  • Incomplete data
  • Traffic-driven anomalies
  • Natural cannibalization
  • Incongruent metric importance

4.  High maintenance vs. No maintenance

Because oolo is powered by machine learning and built specifically for publisher monetization and growth, it's able to deliver powerful functionality right out of the box.

Integration is fast and painless. With all data mapping and processing done on oolo’s side, the system connects to your data platforms (mediation, MMP, campaign manager, S3 bucket, etc.) to model and monitor performance from day one.

oolo is designed to bring clarity, control, and comfort to monetization and growth efforts. The comfort piece of that equation starts with deployment and onboarding but extends to user experience and maintenance. It's why oolo requires no cumbersome integrations and no rule-setting. It's also why oolo fits your workflows like a glove and is so easy-to-use. And it's the same reason why you'll never need to adjust thresholds or tweak the system's sensitivity. 

oolo is built on highly scalable infrastructure to ensure minimal costs together with maximal effectiveness and efficiency, regardless of your size. It doesn't require human data review at impossible speed and scale. Nor does it require the mapping or maintenance of a complicated and convoluted logical mesh.

And best of all, it's subject to none of the blindspots or blunders that plague homemade and built-in data alerting systems. Instead, it gives you a context-aware God Mode view of the operation surveying traffic, inventory, and performance around the clock.

Crossing the Rubicon

Smart decisions are based on an intelligent view of the data. But with such a vast amount of data being generated, it can be challenging to catch and contextualize business-impacting anomalies within a viable response window. Which is exactly why data alerting systems have become so vital.

The problem is that the alerting systems digital publishers rely on — whether they're built in-house or on top of BI dashboards — are woefully inadequate. They miss issues, they generate way too many false positives, they give no context, and they require a lot of ongoing maintenance.

It's a problem that most businesses see no solution to and therefore accept as the cost of doing business. But there is a solution, and companies like oolo are showing the market the way forward. oolo offers an automated alerting and analysis engine that actually understands the data and clearly points out revenue-bolstering intervention opportunities. It looks at each data point in view of the related monetization factors and the whole ad stack to deliver always-on data insights that are not only descriptive, but diagnostic and prescriptive.

At its core, oolo is end-to-end anomaly detection with an embedded "business brain". oolo makes sense of data sets too vast, fast, and entangled for humans to effectively oversee. And whenever an anomaly is detected, the system automatically validates the issue and traces it to its point of origin.

Knowing that there's an alternative to awkward and unwieldy alerting systems, there will be no going back. Say goodbye to oversights, goodbye to irrelevant alerts, and goodbye to open-ended investigations. It's time to say hello to clarity, hello to control, and hello to comfort.
