In a startup, everybody builds stuff (code, websites, sales lists, etc.), and part of the building process is accepting that not everything you make is good. But switching from an outcome-oriented mindset to a learning-oriented one can speed you up, and make it easier to identify the good ideas.
Speaking from personal experience here: it sucks to admit that your baby is ugly. But what's worse is spending a ton of time on a bad idea, and then realizing months later that it's not useful. Not that I've ever done that… ;)
That's when I learned the value of minimum viable products (MVPs).
Our lead data scientist and experimentation wizard Tim talks a lot about building experimentation cultures: test everything, not just the product you build. One of Tim's suggestions is to do the least amount of work needed to have an MVP and get some learnings. Even if a product or idea "fails," what matters more is how it informs future direction. Does crossing this idea off the list mean we can cross off some other ideas too? Or, even if the idea fails, are there salvageable elements that we can iterate on?
For me, part of the magic of Statsig has been working with super efficient people, and it's been impressive to see how quickly people can ditch their egos and focus on learnings. In high-performing teams, nobody blames anybody.
Our CEO Vijaye mentioned this in a LinkedIn post a couple of months ago about code reviews at Facebook: when there are big issues in code, the team gets together to prevent similar issues in the future. The secret ingredient to these reviews? Nobody asks who created the problem. As Vijaye says, "throwing blame is not productive and will only disincentivize taking bold initiatives."
Having an idea fail doesn't mean that you failed too, but sometimes it can feel that way. I'm also realizing that a lot of pain can be avoided by building an MVP instead of going straight for my dream end state.
Learning to fail fast means accepting that in order to find a prince, you have to kiss a lot of frogs, so you'd better round up some frogs and get really fast at kissing.