Distributions balance
Another way we can quickly check for randomization bias in our A/B tests is to look at how balanced or imbalanced the distributions of metrics and attributes that shouldn't change between the variants are. Any major differences in the percentage of certain devices, browsers, or operating systems, for example, assuming our samples are large enough, could be symptoms of larger problems in our internal setup. Examine the AdSmart and checkout datasets that are loaded for you and check for internal validity using the attribute distributions. Which dataset seems to have a more valid internal setup?
The AdSmart Kaggle dataset source is linked here.
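For instance, a quick way to compare these distributions is a normalized cross-tabulation of the variant assignment against one attribute at a time. The sketch below assumes the 'experiment' and 'browser' column names from the AdSmart Kaggle source; adjust them to match your data.

import pandas as pd

# Compare browser shares across variants with a normalized cross-tabulation
# ('experiment' and 'browser' column names are assumed from the Kaggle source)
browser_shares = pd.crosstab(AdSmart['experiment'], AdSmart['browser'], normalize='index')
print(browser_shares)  # each row sums to 1: browser share within each variant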
This exercise is part of the course
A/B Testing in Python
Hands-on interactive exercise
Try this exercise by completing this sample code.
# Check the distribution of platform_os by experiment groups
# (column names assumed from the AdSmart Kaggle dataset)
AdSmart.groupby('experiment')['platform_os'].value_counts(normalize=True)
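Beyond eyeballing the percentages, you can quantify how surprising an observed imbalance would be under proper randomization with a chi-square test of independence. This is a minimal sketch, again assuming the 'experiment' and 'platform_os' column names from the Kaggle source.

import pandas as pd
from scipy.stats import chi2_contingency

# Contingency table of variant assignment vs. platform_os (column names assumed)
table = pd.crosstab(AdSmart['experiment'], AdSmart['platform_os'])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p-value = {p_value:.4f}")
# A very small p-value suggests the attribute is not balanced across variants,
# which can signal a problem with the randomization setup.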