Distribution balance
Another way we can quickly check for randomization bias in our A/B tests is by looking at how balanced the distributions of metrics and attributes that shouldn't change between the different variants are. Any major differences in the percentage of certain devices, browsers, or operating systems, for example, could be symptoms of larger problems in our internal setup, assuming our samples are large enough. Examine the AdSmart
and checkout
datasets that are loaded for you and check for internal validity using the attribute distributions. Which dataset seems to have a more valid internal setup?
The AdSmart
Kaggle dataset source is linked here.
This exercise is part of the course
A/B Testing in Python
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Check the distribution of platform_os by experiment groups
AdSmart.groupby('experiment')['platform_os'].value_counts(normalize=True)
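The grouped-proportion check described above can be sketched on a small synthetic DataFrame standing in for the AdSmart data (the group and attribute names below mirror the real dataset's `experiment` and `platform_os` columns, but the rows are made up for illustration):

```python
import pandas as pd

# Hypothetical mini-dataset: each row is a user with an assigned
# experiment group and an OS attribute that randomization should not skew.
df = pd.DataFrame({
    "experiment": ["control", "control", "control", "control",
                   "exposed", "exposed", "exposed", "exposed"],
    "platform_os": ["Android", "Android", "iOS", "Android",
                    "Android", "iOS", "Android", "Android"],
})

# Share of each OS within each group. Roughly equal shares across groups
# suggest the randomization did not bias this attribute.
shares = df.groupby("experiment")["platform_os"].value_counts(normalize=True)
print(shares)
```

Here both groups show the same Android/iOS split (0.75/0.25), which is what a valid internal setup should look like; a large gap between the groups' shares would be a warning sign.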