
Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing

More sensitive variants can be great alternatives to raw revenue-per-user, such as a revenue indicator-per-user metric (was there any revenue for the user: yes/no).
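To make the indicator-metric idea concrete, here is a minimal sketch on simulated, zero-inflated revenue data (the user counts, purchase rates, and spend distribution are invented for illustration, not figures from the book), comparing a Welch t-test on raw revenue-per-user against one on the yes/no revenue indicator-per-user:

```python
# Hypothetical sketch (not from the book): simulated, zero-inflated revenue data
# illustrating why a yes/no revenue indicator-per-user can be more sensitive
# than raw revenue-per-user. All parameters below are made-up assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 50_000  # users per arm (assumed)

def simulate_revenue(n, purchase_rate, median_spend):
    """Most users spend nothing; buyers spend a heavy-tailed (lognormal) amount."""
    buys = rng.random(n) < purchase_rate
    spend = rng.lognormal(mean=np.log(median_spend), sigma=1.0, size=n)
    return np.where(buys, spend, 0.0)

# Assume treatment nudges slightly more users into generating any revenue at all.
control   = simulate_revenue(n, purchase_rate=0.050, median_spend=20.0)
treatment = simulate_revenue(n, purchase_rate=0.054, median_spend=20.0)

# 1) Raw revenue-per-user: heavy-tailed and high variance, so the signal is noisy.
_, p_revenue = stats.ttest_ind(treatment, control, equal_var=False)

# 2) Revenue indicator-per-user (any revenue: yes/no): bounded, lower variance,
#    and typically more sensitive when the effect is on whether users convert.
_, p_indicator = stats.ttest_ind((treatment > 0).astype(float),
                                 (control > 0).astype(float),
                                 equal_var=False)

print(f"revenue-per-user           p-value: {p_revenue:.4f}")
print(f"revenue indicator-per-user p-value: {p_indicator:.4f}")
```

The indicator metric trades information for variance: it only detects whether users generate any revenue, not how much, but by removing the heavy tail it typically reaches significance with far fewer users when the treatment mainly moves conversion.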
Using experiments as a guardrail is a difficult cultural change.
Dan McKinley at Etsy (McKinley 2013) wrote that “nearly everything fails”; of features, he wrote, “it’s been humbling to realize how rare it is for them to succeed on the first attempt.”
A simple hierarchy of evidence for assessing the quality of trial design places randomized controlled experiments near the top.
Many high-risk/high-reward ideas do not succeed on the first iteration, and learning from failures is critical for the refinement needed to nurture these ideas to success.
Slack’s Director of Product and Lifecycle tweeted that, with all of Slack’s experience, only about 30% of monetization experiments show positive results: “if you are on an experiment-driven team, get used to, at best, 70% of your work being thrown away.”
Defining guardrail metrics for experiments is important for identifying what the organization is not willing to change, since a strategy also “requires you to make tradeoffs in competing – to choose what not to do.”
Having too many metrics may cause cognitive overload and unnecessary complexity.
For education, establishing just-in-time processes during experiment design and experiment analysis can substantially up-level an organization.