Novelty effects detection
Novelty effects occur more often than most data scientists and engineers expect. A common mistake junior analysts make is running an A/B test on a cool new feature and calling the decision after seeing a big uptick in usage metrics over the first few days.
The novelty dataset that is loaded for you contains the difference in average time on page per user (ToP) between two variants. Examine the results over time and check whether there are any signs of novelty effects. Would you include all the results from the start to the end of the test?
This exercise is part of the course
A/B Testing in Python
Interactive exercise
Try this exercise by completing this sample code.
# Import matplotlib for plotting (assumed to be preloaded in the exercise environment)
import matplotlib.pyplot as plt

# Plot ToP_lift over the test dates
# (assumes the novelty DataFrame has 'date' and 'ToP_lift' columns)
novelty.plot('date', 'ToP_lift')
plt.title('Lift in Time-on-Page Over Test Duration')
plt.ylabel('Minutes')
plt.ylim([0, 20])
plt.show()
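If the plot shows an inflated lift over the first few days that then decays to a stable level, including the novelty window will bias the overall estimate upward. Below is a minimal sketch of how you might compare the full-period lift with the lift after dropping the early days; the novelty DataFrame, its 'date' and 'ToP_lift' columns, and the 7-day cutoff are assumptions for illustration, not part of the exercise solution.

import pandas as pd

# Assumed structure: one row per test date with the daily ToP lift
# novelty = pd.DataFrame({'date': [...], 'ToP_lift': [...]})

# Ensure dates are proper datetimes and sorted chronologically
novelty['date'] = pd.to_datetime(novelty['date'])
novelty = novelty.sort_values('date')

# Hypothetical cutoff: treat the first 7 days as the novelty window
cutoff = novelty['date'].min() + pd.Timedelta(days=7)

# Compare the average lift with and without the assumed novelty window
full_lift = novelty['ToP_lift'].mean()
stable_lift = novelty.loc[novelty['date'] > cutoff, 'ToP_lift'].mean()

print(f"Average lift, full test period: {full_lift:.2f} minutes")
print(f"Average lift, excluding first week: {stable_lift:.2f} minutes")

A large gap between the two averages is a sign that the early results are driven by novelty rather than a genuine, lasting effect.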