A/B testing for the statistically inclined
One of my clients is about to embark on another round of A/B testing, so I thought I’d post some of my favorite A/B testing resources, summarizing each as a single rule:
- Why you should always set your sample size (n) in advance: In How Not to Run an A/B Test, @evmill makes the case for always fixing your sample size in advance rather than stopping the test as soon as the results look significant. I generally agree with him – this is good research design.
- How to set your sample size: A/B Testing Tech Note: determining sample size – This post from @noahlorang at 37signals explains how to use a power calculation to determine the sample size required to detect an effect of a given magnitude in some downstream outcome. Remember - the “significance” measures generated by a testing tool are only as accurate as your “conversion” measure. In other words, if Optimizely is looking at “Sales,” but you want to look for changes in “Revenue,” then the significance measure the tool spits out will be irrelevant to the question you actually care about.
- Balancing Speed vs. Certainty in A/B testing - a good post from @jfarmer summarizing the inevitable tradeoff between running a test long enough to trust the result and shipping the winner quickly.
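
To make the sample-size point concrete, here is a minimal sketch of the standard power calculation for a two-proportion test (the usual A/B conversion setup). The function name and parameters are mine, not from the 37signals post; it's an approximation using normal quantiles, not a substitute for your testing tool's calculator.

```python
from math import ceil
from statistics import NormalDist  # Python 3.8+

def sample_size_per_variant(p_baseline, p_expected, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion z-test.

    p_baseline: current conversion rate
    p_expected: the conversion rate you want to be able to detect
    alpha:      significance level (two-sided)
    power:      probability of detecting the effect if it's real
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    n = (z_alpha + z_beta) ** 2 * variance / (p_baseline - p_expected) ** 2
    return ceil(n)

# Detecting a lift from a 5% to a 6% conversion rate takes thousands of
# visitors per variant -- small effects are expensive to measure.
n = sample_size_per_variant(0.05, 0.06)
print(n)
```

Note how quickly the required n grows as the effect you want to detect shrinks: halving the minimum detectable lift roughly quadruples the sample size, which is exactly the speed-vs.-certainty tradeoff @jfarmer describes.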