
SRM

When we design an experiment to allocate enrollment units (e.g. users) at a given percentage per variant, we expect slight deviations from that split due to logging issues, delays, minor instrumentation bugs, and so on. When the deviation is larger than expected, however, it usually indicates a deeper issue that could bias our test results and invalidate the experiment. The goal of this exercise is to examine the statistical techniques that let you catch cases where the allocation mismatch is too large to be blamed on chance alone.
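One common way to formalize this check is a chi-square goodness-of-fit test that compares the observed counts per variant against the counts implied by the planned allocation. The sketch below uses scipy.stats.chisquare (not loaded in this exercise) with made-up counts and an assumed 50/50 planned split, purely as an illustration of the technique.

# Minimal SRM check sketch: assumed 50/50 planned split, hypothetical counts
from scipy.stats import chisquare

observed = [50123, 49371]                 # hypothetical unique users in control (A) and treatment (B)
total = sum(observed)
expected = [total * 0.5, total * 0.5]     # counts implied by the planned allocation

stat, p_value = chisquare(f_obs=observed, f_exp=expected)

# A very small p-value (e.g. below 0.01) suggests the mismatch is unlikely
# to be due to chance alone, so the experiment setup should be investigated.
print(f"Chi-square statistic: {stat:.2f}, p-value: {p_value:.4f}")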

As an analytics engineer, your role may require you to design, and even automate, frameworks for catching sample ratio mismatches (SRMs) in A/B tests. The checkout DataFrame is loaded for you, along with the pandas and numpy libraries. Consider the control group to be checkout design 'A' and the treatment group to be 'B'.

This exercise is part of the course

A/B Testing in Python


Hands-on interactive exercise

Try this exercise by completing the sample code below.

# Assign the unique user counts to each variant
control_users = ____
treatment_users = ____
total_users = ____ + ____
print("Control unique users:",control_users)
print("Control unique users:",treatment_users)