
Novelty effects detection

Novelty effects occur more often than most data scientists and engineers expect. A common mistake among junior analysts is to run an A/B test on a shiny new feature and call the decision after seeing a big uptick in usage metrics over the first few days.

The novelty dataset loaded for you contains the difference in average time on page per user (ToP) between two variants. Examine the results over time and check for any signs of novelty effects. Would you include all the results from the start to the end of the test?
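One way to spot a novelty effect, beyond eyeballing the plot, is to compare the average lift early in the test against the average once results have settled. The sketch below uses a hypothetical novelty DataFrame with test_date and ToP_lift columns and illustrative values; the actual column names and data in the course dataset may differ.

import pandas as pd

# Hypothetical stand-in for the preloaded dataset: one row per test day,
# with the daily lift in average time on page (minutes). Illustrative only.
novelty = pd.DataFrame({
    'test_date': pd.date_range('2024-01-01', periods=14),
    'ToP_lift': [9.5, 8.8, 7.9, 7.1, 6.0, 5.2, 4.6,
                 4.1, 3.9, 3.8, 3.7, 3.8, 3.7, 3.6],
})

# Compare the mean lift in the first week against the last week
early = novelty['ToP_lift'].iloc[:7].mean()
late = novelty['ToP_lift'].iloc[-7:].mean()
print(f"First-week mean lift: {early:.2f} min")
print(f"Last-week mean lift:  {late:.2f} min")

# A large early lift that decays to a stable plateau is the signature of
# a novelty effect; the plateau is the better estimate of the long-run impact
if early > 1.5 * late:
    print("Possible novelty effect: early lift far exceeds the later plateau.")

If the lift decays like this, you would typically exclude the early burn-in period rather than averaging over the whole test.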

This exercise is part of the course A/B Testing in Python.

Hands-on interactive exercise

Have a go at this exercise by completing this sample code.

# Plot ToP_lift over the test dates
# (assuming the preloaded novelty DataFrame has 'test_date' and 'ToP_lift'
# columns; the exact column names in the course dataset are not confirmed)
import matplotlib.pyplot as plt

novelty.plot('test_date', 'ToP_lift')
plt.title('Lift in Time-on-Page Over Test Duration')
plt.ylabel('Minutes')
plt.ylim([0, 20])
plt.show()