Understanding the Datatecture Part 2: Operations Deep-Dive


In this second post of the series, we dig into the deeper layers of the Streaming Video Datatecture's Operations category, defining many of the individual sub-categories and explaining their purpose in the broader workflow.

As a reminder from the first post of this series, the Datatecture is organized into three main categories: Operations, Infrastructure, and Workflow. Within these categories are myriad sub-categories, some of which branch into even more specific groups. This structure isn't intended as a parent-child hierarchy; rather, it illustrates relationships between specific components and categories of functionality. For example, many systems and technologies within analytics don't compete against each other because they handle different sets of data, from video player metrics to customer behavior.

What is Operations?

As discussed in the initial blog post, Operations refers to systems involved in running the streaming service. Many of these systems, like dashboards, video analytics, video quality assurance, and multi-CDN solutions, are part of the Network Operations Center (NOC), where operations and customer support engineers keep careful track of what's happening within the streaming video technology stack. But because operating a streaming platform extends beyond traffic and network management, many other systems, such as customer and product analytics and ad viewability tools, are used by non-engineering employees.

Analytics, Monitoring, and Configuration Management

Within the Operations category are the three primary sub-categories outlined in the first post of this blog series. Let's dig past those and go deeper into Operations to understand the individual systems involved in this area of the Datatecture.

Analytics

Analytics is a core function within the streaming technology stack. As such, many systems (gathered here into separate categories) address activities ranging from quality assurance to ad viewability.

  • Ad Viewability and Verification. One of the biggest issues with delivering digital advertising is ensuring that advertising impressions are legitimate and meet advertiser requirements, such as how much time constitutes a view. Some of these systems are also involved in fraud and bot detection. The systems in this category are critical to any streaming operator whose business model includes advertising.
  • Identity and Attribution. Understanding the impact of marketing campaigns and other subscriber touchpoints is crucial to maximizing viewer engagement, which can have a positive impact on advertiser and subscriber revenue. The platforms in this subcategory enable streaming operators to deeply understand each user touchpoint and maximize revenue opportunities.
  • Customer and Product Analytics. While operations engineers are busy looking at performance data, others in the business are focused on activity within the streaming platform, trying to answer such questions as: What features are users engaging with the most? How easy is it for users to find what they need? What are the most visited parts of the interface? Answering these can be important to maximizing engagement and revenue. The service providers in this subcategory offer platforms that help product managers and product developers better understand how users interact with platform features.
  • Video Quality Assurance. One of the biggest challenges in delivering a great viewing experience is ensuring the high visual quality of the content. There are points within the workflow, such as encoding, transcoding, and delivery, where the quality of the content can degrade. The systems in this group of the Datatecture analyze content visually, identifying areas where degradation has occurred (such as blocking) so that it can be remedied before the content reaches the viewer.
  • Audience Measurement. An important dataset for the business of streaming is audience measurement. In short, this data provides an understanding of what viewers are watching, which can be instrumental in influencing future content investments. Well-known providers in this space, such as Comscore and Nielsen, can supply invaluable data about the popularity of, and engagement with, content.
  • Video Analytics. Much like in broadcast, understanding Quality of Experience (QoE) is crucial to ensuring a great viewing experience. This means gathering data about bitrates, buffering, start time, and more from the player itself (a minimal sketch of player-side collection follows this list). The providers in this subcategory offer specialized services to help both engineers and business-focused employees understand what the viewer is experiencing. Although many of these providers offer data via an API, they also provide proprietary visualization tools.
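
To make the player-side idea concrete, here is a minimal sketch of client-side QoE collection, assuming an HTML5 video element. It tracks only startup time and rebuffering; the reporting endpoint and payload shape are hypothetical, and the commercial providers in this subcategory capture far more.

```typescript
// A minimal sketch of client-side QoE collection from an HTML5 video element.
// The reporting endpoint and payload shape are hypothetical.

interface QoEMetrics {
  startupTimeMs: number | null; // time from script start to first 'playing'
  rebufferCount: number;        // how many times playback stalled
  rebufferTimeMs: number;       // total time spent stalled
}

function collectQoE(video: HTMLVideoElement, reportUrl: string): void {
  const metrics: QoEMetrics = { startupTimeMs: null, rebufferCount: 0, rebufferTimeMs: 0 };
  const loadStart = performance.now();
  let stallStart: number | null = null;

  // The first 'playing' event marks the end of startup; later ones end stalls.
  video.addEventListener('playing', () => {
    if (metrics.startupTimeMs === null) {
      metrics.startupTimeMs = performance.now() - loadStart;
    }
    if (stallStart !== null) {
      metrics.rebufferTimeMs += performance.now() - stallStart;
      stallStart = null;
    }
  });

  // 'waiting' fires when the player stalls to rebuffer.
  video.addEventListener('waiting', () => {
    metrics.rebufferCount += 1;
    stallStart = performance.now();
  });

  // Ship the metrics when the viewer leaves the page.
  window.addEventListener('pagehide', () => {
    navigator.sendBeacon(reportUrl, JSON.stringify(metrics));
  });
}
```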

Monitoring

Unlike analytics, which can involve detailed, in-depth exploration of datasets, monitoring is about watching streams of data, such as performance data, most often in a dashboard or visualization tool.

  • Synthetic Monitoring and Testing. It can be difficult to understand the impact of sudden scale on streaming platform features because it isn't feasible to recruit a million or more real users. Synthetic testing can simulate those users and provide valuable data about the real-world impact of scale. In addition, these same monitors can continually track operation and performance throughout the video stack, including on-premise systems, cloud-based systems, and even third-party services like CDNs, to provide a holistic view of the workflow (see the probe sketch after this list).
  • Visualization and Dashboards. The picture of streaming operations is always the same: screens on the walls of the Network Operations Center displaying content and a myriad of dashboards. That's because without visualization it would be impossible to understand what is happening; there is simply too much data arriving too quickly to make sense of raw numbers. Dashboards and visualization tools give operations engineers visibility into performance issues, KPIs, and other data thresholds without having to dig into the numbers themselves.
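
As a rough illustration of the synthetic monitoring idea, the sketch below periodically probes an HLS manifest and logs availability and latency. The manifest URL and probe interval are hypothetical; a production monitor would probe from many regions and push results into a time-series store rather than logging to the console.

```typescript
// A minimal sketch of a synthetic monitor: periodically fetch an HLS manifest
// and record availability and latency. URL and interval are hypothetical.

const MANIFEST_URL = 'https://cdn.example.com/live/master.m3u8';
const PROBE_INTERVAL_MS = 60_000; // one probe per minute

async function probe(url: string): Promise<void> {
  const start = Date.now();
  try {
    const res = await fetch(url);
    const latencyMs = Date.now() - start;
    // A production monitor would push this to a time-series store.
    console.log(`${new Date().toISOString()} status=${res.status} latency=${latencyMs}ms`);
  } catch (err) {
    console.error(`${new Date().toISOString()} probe failed:`, err);
  }
}

setInterval(() => probe(MANIFEST_URL), PROBE_INTERVAL_MS);
```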

Configuration Management

This subcategory within Operations addresses systems that are deeply involved in how the streaming platform functions, from managing the data that is collected to how CDNs are used to deliver streams.

  • Data Management Platforms. Streaming is not just about content; it's about data. Unlike broadcast, the content delivered to viewers through streaming platforms is all bits and bytes. Moreover, each component within the technology stack throws off data: CDNs have logs, video players have metrics, and so on. All of this data must be managed. The companies in this subcategory offer technologies and Software-as-a-Service products that give streaming operators more control over the data behind their business.
  • Multi-CDN Solutions. As streaming platforms have gone global, it has become necessary to use multiple CDNs, as no single CDN has the best performance in every region. Using multi-CDN services, like those offered by the providers in this Datatecture group, streaming operators can quickly and easily move between CDNs to ensure that content is always delivered on the CDN that meets the operator's requirements, whether those are based on performance or price (a simple selection sketch follows this list).
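
To illustrate the multi-CDN idea, here is a minimal, hypothetical selection function that scores candidate CDNs on recent measured latency and per-GB price, then blends the two with a configurable weight. The CDN hosts, metrics, and weighting are invented for illustration; real multi-CDN switchers draw on much richer real-time data.

```typescript
// A minimal sketch of multi-CDN selection: score each candidate CDN on recent
// measured latency and per-GB price, then pick the lowest blended score.
// CDN hosts, metrics, and the weighting are invented for illustration.

interface CdnCandidate {
  host: string;
  avgLatencyMs: number; // from recent real-user or synthetic measurements
  pricePerGb: number;   // contracted delivery cost
}

function pickCdn(candidates: CdnCandidate[], latencyWeight = 0.7): CdnCandidate {
  // Normalize each dimension to 0..1 (lower is better), then blend.
  const maxLatency = Math.max(...candidates.map(c => c.avgLatencyMs));
  const maxPrice = Math.max(...candidates.map(c => c.pricePerGb));
  const score = (c: CdnCandidate) =>
    latencyWeight * (c.avgLatencyMs / maxLatency) +
    (1 - latencyWeight) * (c.pricePerGb / maxPrice);
  return candidates.reduce((best, c) => (score(c) < score(best) ? c : best));
}

// Example: route the next session through the winning CDN's hostname.
const cdn = pickCdn([
  { host: 'cdn-a.example.net', avgLatencyMs: 42, pricePerGb: 0.03 },
  { host: 'cdn-b.example.net', avgLatencyMs: 35, pricePerGb: 0.05 },
]);
console.log(`Serving from https://${cdn.host}/live/master.m3u8`);
```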

Customer Data Platform (CDP)

Sitting outside the other subcategories within Operations is a class of systems very important to subscription-based streaming services: Customer Data Platforms. These platforms enable streaming operators to leverage their first-party data to understand their subscribers more deeply. From that understanding, insights can be derived that are critical to the success of marketing campaigns and other targeted communications with subscribers.

Separate, But Not Alone

Although these Operations systems sit in discrete, separate groups, they aren't independent. Many of them provide data that can be used by other systems. For example, some of the platforms have their own dashboards, but with programmatic access, that data can be pulled into more flexible visualization tools, such as Looker. By doing so, both operations engineers and business leaders can exchange simple analysis and monitoring for observability: with all of the data in one place, it is easier to see patterns across all of the sources (of course, it helps when that data is standardized, such as through a Datazoom Data Dictionary).

To learn more, visit and explore the Datatecture site. In the next blog post, we will explore the groups within the Infrastructure category.
