You probably already have visualization tools, dashboards, and data lakes which form the foundation of your streaming video operations. The Datazoom Platform includes a library of connectors to many of the systems you might already use, so you can deliver your data where it needs to be.
Get Data When And Where You Need It
There are many challenges associated with getting data from endpoints, like video players, audio players, and web/native apps, but none greater than delivery. When operators rely on third-party providers for analytics, monitoring, and other data tools, they can become locked into systems that don’t provide the delivery configuration and flexibility they need to improve and speed up their decision making.
With flexible, configurable delivery that meets an operator's specific business and technology-stack requirements, businesses can realize significant value:
Improved data relationships
When data collected from endpoints is delivered to existing data storage (such as an enterprise data lake), it can be joined with other datasets to generate richer insights which, in turn, may open new revenue opportunities.
Faster decisions
When data is delivered as quickly as it's needed and to the tools already in use, business decisions can be made sooner.
Stronger data governance
When data is delivered consistently, there is less chance of errors creeping into the datasets, improving data governance.
Delivery of data through the Datazoom DaaS platform is flexible, configurable, and highly reliable.
How Data Delivery Works Through Datazoom
Datazoom’s data delivery feature is scalable, reliable, and resilient. The platform ensures that the data users rely on for business decisions flows uninterrupted, and flexibility is built in: users can simply drop new or different Connectors onto the visual data pipe builder canvas to deliver data somewhere else.
We collect a superset of the behavioral events you want to receive and route them to different Connector tools based on what data each tool needs. As events are processed in our system, we check the current routing rules and direct each message to the correct destinations.
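The routing step above can be sketched roughly as follows. This is an illustrative model, not Datazoom's actual implementation; the class and method names are hypothetical, and the rule shape (event types per connector) is an assumption.

```python
# Hypothetical routing sketch: each event is matched against the pipeline's
# current routing rules and fanned out to every connector that subscribes
# to its event type.
from collections import defaultdict

class Router:
    def __init__(self):
        # connector name -> set of event types it should receive (assumed rule shape)
        self.rules = defaultdict(set)

    def subscribe(self, connector, event_types):
        self.rules[connector].update(event_types)

    def route(self, event):
        """Return the connectors that should receive this event."""
        return [c for c, types in self.rules.items() if event["type"] in types]

router = Router()
router.subscribe("splunk", {"playback_start", "error"})
router.subscribe("s3", {"playback_start", "heartbeat", "error"})

router.route({"type": "error"})      # delivered to both destinations
router.route({"type": "heartbeat"})  # delivered only to the object store
```

In practice the rules would be reloaded as the customer edits the pipeline, so routing decisions always reflect the current configuration.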
We also allow the metadata & fluxdata contained within each event to be filtered, so the customer can control which data points reach each of their Connector destinations. For example, an Operations team may want different data in their tool than the Marketing team needs in theirs; they may share some, but not all, of the user client data.
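Per-connector filtering like this amounts to an allow-list of fields per destination. A minimal sketch, assuming hypothetical field and connector names (none of these keys come from Datazoom's data model):

```python
# Illustrative per-connector field filtering: each destination sees only
# the metadata keys it is permitted to receive.
ALLOWED_FIELDS = {
    "ops_tool":       {"event_id", "error_code", "cdn", "bitrate"},
    "marketing_tool": {"event_id", "campaign_id", "device_type"},
}

def filter_event(event, connector):
    """Strip the event down to the fields this connector is allowed to see."""
    allowed = ALLOWED_FIELDS[connector]
    return {k: v for k, v in event.items() if k in allowed}

event = {"event_id": "e1", "error_code": 404, "campaign_id": "spring",
         "cdn": "edge-1", "device_type": "tv", "bitrate": 4500}

filter_event(event, "ops_tool")        # Operations sees error/CDN fields
filter_event(event, "marketing_tool")  # Marketing sees campaign fields
```

Note that both tools can still share common fields (here, `event_id`) while each omits data the other team does not need.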
Once the events have been filtered, our Connector converts the Datazoom message into the format the destination expects. This can include anything from restructuring the message before it leaves Datazoom to renaming keys or values to conform to a customer's or Connector's preferred data model.
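The transformation step can be pictured as a key-rename pass followed by a reshape into the destination's envelope. The mapping and field names below are hypothetical, purely to show the idea:

```python
# Sketch of the format-conversion step: rename keys per a destination's
# data model, then restructure the message into its expected envelope.
KEY_MAP = {"event_type": "eventName", "ts": "timestamp", "session": "sessionId"}

def to_destination_format(message):
    # Rename keys to the destination's preferred names (unmapped keys pass through).
    renamed = {KEY_MAP.get(k, k): v for k, v in message.items()}
    # Restructure: promote the event name, nest everything else as attributes.
    return {"event": renamed.pop("eventName"), "attributes": renamed}

msg = {"event_type": "playback_start", "ts": 1700000000, "session": "abc"}
to_destination_format(msg)
# {"event": "playback_start",
#  "attributes": {"timestamp": 1700000000, "sessionId": "abc"}}
```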
Our Connectors are optimized to deliver data payloads in the most efficient way possible for each destination. We support batching and multi-threaded delivery to minimize latency. For Object Store connectors like S3, GCS, and Azure Blob Storage, which perform better with larger objects, we have a Bulking process that balances optimal object size against customer latency goals: we create batches of up to 1,000 messages, but wait no more than 30 seconds when traffic is low.
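The Bulking trade-off above — flush when the batch is full, or when the oldest message has waited too long — can be sketched like this. The thresholds mirror the figures in the text (1,000 messages, 30 seconds); the class and its API are illustrative, not Datazoom's:

```python
# Hypothetical bulking sketch: accumulate messages into one object of up to
# 1,000 events, but flush after 30 seconds even if the batch is not full.
import time

class Bulker:
    MAX_BATCH = 1000   # flush when the object reaches this many messages
    MAX_WAIT_S = 30    # ...or when the oldest buffered message is this old

    def __init__(self, flush_fn):
        self.flush_fn = flush_fn   # e.g. an object-store upload
        self.batch = []
        self.first_ts = None

    def add(self, message, now=None):
        now = now if now is not None else time.monotonic()
        if not self.batch:
            self.first_ts = now
        self.batch.append(message)
        if len(self.batch) >= self.MAX_BATCH or now - self.first_ts >= self.MAX_WAIT_S:
            self.flush()

    def flush(self):
        if self.batch:
            self.flush_fn(self.batch)
            self.batch, self.first_ts = [], None
```

The design choice here is the pairing of a size cap with a time cap: object stores amortize per-request overhead over large objects, while the 30-second ceiling bounds how stale data can get when traffic is low.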
If we run into issues delivering data to a Connector destination, we have multiple processes in place to redeliver the failed payloads. We immediately retry up to 3 times within our Connectors to ride out intermittent network interruptions with minimal added delivery lag. If those attempts also fail, a longer-term retry mechanism comes back and attempts delivery up to 6 more times over a couple of hours, which lets us handle outages the customer's Connector destination may be experiencing.
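The two-tier retry scheme can be sketched as below. The retry counts (3 immediate, 6 delayed) come from the text; the specific delay schedule, function names, and the use of `sleep` are assumptions for illustration — a real system would schedule delayed attempts rather than block:

```python
# Sketch of two-tier redelivery: immediate in-line retries for network
# blips, then a slower retry loop for destination outages.
import time

def deliver_with_retries(send, payload, immediate_retries=3, delayed_retries=6,
                         delays=(300, 600, 1200, 1800, 1800, 1800)):
    # Tier 1: the first attempt plus up to 3 immediate retries,
    # adding minimal lag for intermittent interruptions.
    for _ in range(1 + immediate_retries):
        if send(payload):
            return True
    # Tier 2: up to 6 more attempts spread over a couple of hours,
    # riding out an outage at the destination. (Delays are hypothetical.)
    for delay in delays[:delayed_retries]:
        time.sleep(delay)   # in production: a scheduled job, not a blocking sleep
        if send(payload):
            return True
    return False            # all attempts exhausted
```

Separating the tiers keeps the hot delivery path fast (brief retries only) while still recovering payloads from outages measured in hours.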