
Novelty effects detection

Novelty effects happen more often than most data scientists and engineers expect. Running an A/B test on a cool new feature and calling the decision after seeing a big uptick in usage metrics over the first few days is one of the most common mistakes junior analysts make.

The novelty dataset, which is loaded for you, contains the difference in average time on page per user (ToP) between two variants. Examine the results over time and check for signs of novelty effects. Would you include all the results from the start to the end of the test?
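A simple way to screen for a novelty effect is to compare the measured lift early in the test against the lift once behavior has had time to settle. The sketch below uses hypothetical, simulated data (the decay curve, the `1.5x` threshold, and all variable names are illustrative assumptions, not part of the exercise dataset) to show the pattern: a lift that is high in the first days and then decays toward a stable level.

```python
import numpy as np

# Hypothetical example: simulate a daily ToP lift with a novelty
# component that decays over a 3-week test.
rng = np.random.default_rng(42)
days = np.arange(1, 22)                        # test days 1..21
true_lift = 2 + 8 * np.exp(-days / 3)          # decaying novelty on top of a 2-minute real lift
top_lift = true_lift + rng.normal(0, 0.3, days.size)

# Compare the first week against the last week: a large gap suggests
# the early results are inflated by novelty and should be excluded.
early_mean = top_lift[:7].mean()
late_mean = top_lift[-7:].mean()
print(f"first-week lift: {early_mean:.2f} min, last-week lift: {late_mean:.2f} min")

# Illustrative rule of thumb, not a standard test: flag the run if the
# early lift is more than 1.5x the settled lift.
novelty_suspected = early_mean > 1.5 * late_mean
print("novelty effect suspected:", novelty_suspected)
```

With data like this, a decision based on the first few days would badly overstate the long-run lift; the settled last-week estimate is the more trustworthy one.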

This exercise is part of the course

A/B Testing in Python

Hands-on interactive exercise

Try this exercise by completing the sample code below.

# Plot ToP_lift over the test dates
# (one plausible completion, assuming the exercise preloads matplotlib.pyplot
# as plt and a novelty DataFrame with 'date' and 'ToP_lift' columns)
novelty.plot('date', 'ToP_lift')
plt.title('Lift in Time-on-Page Over Test Duration')
plt.ylabel('Minutes')
plt.ylim([0, 20])
plt.show()