Distributions balance
Another way to quickly check for randomization bias in our A/B tests is to look at how balanced or imbalanced the distributions of metrics and attributes that shouldn't change between the different variants are. Any major differences in the percentage of certain devices, browsers, or operating systems, for example, could be a symptom of larger problems in our internal setup, assuming our samples are large enough. Examine the AdSmart and checkout datasets that are loaded for you and check for internal validity using the attribute distributions. Which dataset seems to have a more valid internal setup?
The AdSmart Kaggle dataset source is linked here.
This exercise is part of the course
A/B Testing in Python
Interactive exercise
Try this exercise by completing this sample code.
# Check the distribution of platform_os by experiment groups
# (assumes AdSmart has 'experiment' and 'platform_os' columns, as in the Kaggle source)
AdSmart.groupby('experiment')['platform_os'].value_counts(normalize=True)
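Beyond eyeballing the normalized proportions, the balance check can be made formal with a chi-square test of independence on the group-by-attribute contingency table. The sketch below is one possible approach, assuming the loaded AdSmart DataFrame exposes 'experiment' and 'platform_os' columns and that pandas and scipy are available; a small p-value would hint at an imbalance worth investigating. The same check can be repeated on the checkout dataset's attributes.

import pandas as pd
from scipy.stats import chi2_contingency

# Contingency table of experiment group vs. platform_os counts
contingency = pd.crosstab(AdSmart['experiment'], AdSmart['platform_os'])

# Chi-square test of independence: a small p-value flags a potential imbalance
chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p-value = {p_value:.4f}")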