As a product manager, it's important to continually assess the impact of the features you're shipping to users. By measuring the impact of every feature, you gain valuable insight into how your product is being used, what's working well, and which areas need improvement.
Here are just a few of the benefits of measuring the impact of every feature you ship:
Better decision-making: By gathering data on how your features are performing, you can make more informed decisions about which features to prioritize and how to improve existing ones. This helps you avoid costly mistakes and ensures your product consistently meets the needs of your users.
Improved user experience: By gathering feedback from users and measuring the impact of specific features, you can identify areas where the user experience falls short, helping you create a more intuitive and enjoyable product.
Increased user engagement: By measuring the impact of your features, you can see which features are most popular and which are underutilized. This information helps you create more engaging experiences and drive higher levels of user engagement.
Stronger differentiation: By gathering data on how your features are being used, you can identify unique value propositions and differentiators for your product, helping you stand out in a crowded market and attract more users.
Statsig offers multiple ways to measure the impact of experiments by default.
The scorecard panel displays the Primary and Secondary experiment metrics for each variant. These metrics are framed in the context of the experiment's hypothesis, which is highlighted at the top of the scorecard.
By default, all scorecard metrics have CUPED applied in order to shrink confidence intervals and reduce bias.
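For intuition on what CUPED does to a metric, here is a minimal sketch in Python (a hypothetical helper, not Statsig's implementation), assuming each unit has a pre-experiment covariate such as the same metric measured before exposure: the adjusted metric keeps the same mean but has lower variance, which is what shrinks the confidence intervals.

```python
import numpy as np

def cuped_adjust(y, x):
    """Apply a CUPED-style adjustment to metric values y using a
    pre-experiment covariate x. The adjusted metric
    y - theta * (x - mean(x)) has the same mean as y but lower
    variance whenever x is correlated with y."""
    theta = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - np.mean(x))

# Illustrative data: pre-experiment values strongly predict in-experiment values.
rng = np.random.default_rng(0)
pre = rng.normal(10, 2, size=5000)
post = pre + rng.normal(0.5, 1, size=5000)

adjusted = cuped_adjust(post, pre)
print(np.var(post), np.var(adjusted))  # adjusted variance is noticeably smaller
```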
The All Metrics tab shows the metric lifts across all metrics in the metrics catalog.
To further adjust the results, significance levels can be tweaked in the following ways:
Apply Bonferroni Correction: This reduces the probability of false positives by dividing the significance level (alpha) by the number of test variants in the experiment (see the sketch after this list).
Confidence Interval: Choose a lower confidence level (e.g., 80%) when there's a higher tolerance for false positives and fast iteration with directional results is preferred over longer, larger experiments with increased certainty.
CUPED: Toggle CUPED on/off via the inline settings above the metric lifts. Note: this setting can only be toggled for Scorecard metrics, as CUPED is not applied to non-Scorecard metrics.
Sequential Testing: This helps mitigate the increased false positive rate associated with the "peeking problem". Toggle Sequential Testing on/off via the inline settings above the metric lifts. Note: this setting is available only for experiments with a set target duration.
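To make the first two tradeoffs in the list concrete, here is a minimal Python sketch (illustrative function names, not Statsig's implementation) showing how a Bonferroni-corrected alpha and the chosen confidence level translate into wider or narrower intervals on a lift estimate.

```python
from scipy import stats

def adjusted_alpha(alpha, num_test_variants, bonferroni=True):
    """Bonferroni correction: divide the significance level by the number of
    test variants so the family-wise false positive rate stays near alpha."""
    return alpha / num_test_variants if bonferroni else alpha

def lift_confidence_interval(lift, std_err, alpha):
    """Two-sided confidence interval for an estimated lift, given its standard error."""
    z = stats.norm.ppf(1 - alpha / 2)
    return lift - z * std_err, lift + z * std_err

# Example: a 3-variant experiment evaluated at 95% confidence.
alpha = adjusted_alpha(0.05, num_test_variants=3)     # ~0.0167 after correction
print(lift_confidence_interval(0.02, 0.008, alpha))   # wider than the uncorrected interval
print(lift_confidence_interval(0.02, 0.008, 0.20))    # 80% confidence: narrower, more false positives
```

The Bonferroni-corrected interval is wider (fewer false positives across variants), while dropping to 80% confidence narrows the interval at the cost of a higher false positive rate.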
Measuring the impact of every single feature you ship to users is a crucial part of the product development process. By gathering data and feedback, you can make more informed decisions, improve the user experience, increase user engagement, and better differentiate your product.
If you're a product manager, make sure to incorporate impact measurement into your workflow and start reaping the benefits.