As the number of streaming queries grew, we wanted a centralized place where we could quickly view a snapshot of all our pipelines.
When it comes to monitoring our queries, we are primarily interested in answering three questions:
1. What is the input rate?
2. What is the processing rate?
3. What is the age of the freshest data being processed?
Structured Streaming queries on Databricks already come with a UI attached to them when they are started, which helps monitor the input rate and processing rate (#1 and #2) at the job level:
These graphs help look at the performance of a specific job, but when you have a lot of queries, it is not practical to individually check each one. Additionally, this UI does not answer question number three: What is the age of the freshest data being processed?
Before monitoring the query, we first need to install the Datadog Agent on the cluster. You can follow the ‘Driver only’ instructions in the Datadog Databricks integration guide. Before running the script provided by Datadog, modify it to set ‘enable_query_name_tag’ to true. This goes under ‘instances’ like this:
- spark_url: http://$DB_DRIVER_IP:$DB_DRIVER_PORT
  enable_query_name_tag: true
This tags your metrics with the QueryName you provide, letting you view queries individually even when a cluster runs more than one. You also need to enable dogstatsd by adding the following two lines to the script:
echo "use_dogstatsd: true" >> /etc/datadog-agent/datadog.yaml
echo "dogstatsd_port: 8125" >> /etc/datadog-agent/datadog.yaml
To run the Datadog Agent on your cluster, configure the install script you generated as a cluster init script, enable streaming metrics, and pass in your Datadog API key:
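As a sketch, the cluster configuration might look like the following (the script path is illustrative; `spark.sql.streaming.metricsEnabled` is the Spark setting that publishes streaming metrics, and the Datadog init script reads the API key from the `DD_API_KEY` environment variable):

```shell
# Init script (cluster Advanced options > Init scripts), e.g. the script
# generated earlier, uploaded to an illustrative path such as:
#   /databricks/scripts/datadog-install-driver-only.sh

# Spark config (Advanced options > Spark) to publish streaming metrics:
#   spark.sql.streaming.metricsEnabled true

# Environment variable (Advanced options > Environment variables):
#   DD_API_KEY=<your Datadog API key>
```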
In order to select a specific query when making a dashboard, we need to provide a QueryName for the query to use. This can be done when writing the query:
Input rate and processing rate are automatically tracked for you. However, if you want to know your data freshness, you need to track it using foreachBatch. First, you need to pip install datadog. Then you can track the freshness like this:
from datetime import datetime
from datadog import statsd

def record_freshness(df, epoch_id):
    # Take one row from the micro-batch and compare its enqueued time to now
    timestamp = df.limit(1).collect()[0]['enqueued_time']
    freshness = (datetime.now() - timestamp).total_seconds()
    statsd.gauge('streaming.freshness', freshness)  # metric name is up to you
If your query does not use ‘foreachBatch’, you can create a second query that reads the updates from your first query and records metrics:
Now that we have all the metrics we care about being tracked, we can build a dashboard on Datadog. We can track all of these metrics using time-series graphs. Here is a quick guide on setting up a graph.
This is what the dashboard could look like when all the charts are set up:
You can quickly swap to a different $query_name to view graphs for different queries. These dashboards will allow you to ensure that your queries are keeping up with incoming data and track how changes are affecting performance.