Corrected p-values
Imagine you are a Data Scientist working for a subscription company. The web design team is trying to find the perfect CTA (call-to-action) button to urge page visitors to sign up for the service. They have presented you with four different designs in addition to the current version.
After running an experiment comparing each variant to the control, you generated a list of p-values, loaded in the pvals variable. Comparing them directly to the significance threshold would result in an inflated Type I error rate. To avoid this, you can use the smt.multipletests() function from Python's statsmodels library to correct the p-values and test for statistical significance with a FWER of 5%.
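To see why the uncorrected comparison inflates the Type I error rate, note that with four independent tests at α = 0.05, the chance of at least one false positive is 1 − (1 − α)⁴, far above the nominal 5%. A quick sketch of this calculation (the formula assumes the tests are independent):

```python
# Family-wise error rate (FWER) for m independent tests, each at level alpha:
# the probability of at least one false positive is 1 - (1 - alpha)**m.
alpha = 0.05
m = 4  # number of variants compared to the control
fwer = 1 - (1 - alpha) ** m
print(round(fwer, 4))  # roughly 0.1855, i.e. about 18.5% instead of 5%
```

This is the inflation that the Bonferroni correction guards against, by dividing α by the number of tests.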
This exercise is part of the course A/B Testing in Python.
Hands-on interactive exercise: have a go at this exercise by completing the sample code below.
import statsmodels.stats.multitest as smt
pvals = [0.0126, 0.0005, 0.00007, 0.009]
# Perform a Bonferroni correction and print the output
corrected = smt.____(pvals, alpha = ____, method = '____')
print('Significant Test:', corrected[0])
print('Corrected P-values:', corrected[1])
print('Bonferroni Corrected alpha: {:.4f}'.format(corrected[3]))
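One possible completion of the blanks is sketched below. Note that smt.multipletests() returns a tuple whose elements are the rejection decisions, the corrected p-values, the Šidák-corrected alpha, and the Bonferroni-corrected alpha, which is why the sample code indexes corrected[0], corrected[1], and corrected[3]:

```python
import statsmodels.stats.multitest as smt

pvals = [0.0126, 0.0005, 0.00007, 0.009]

# Perform a Bonferroni correction: each p-value is scaled up by the number of
# tests (equivalently, alpha is divided by it: 0.05 / 4 = 0.0125)
corrected = smt.multipletests(pvals, alpha=0.05, method='bonferroni')

print('Significant Test:', corrected[0])
print('Corrected P-values:', corrected[1])
print('Bonferroni Corrected alpha: {:.4f}'.format(corrected[3]))
```

With these inputs, the first test (p = 0.0126) is no longer significant after correction, since 0.0126 × 4 = 0.0504 exceeds 0.05, while the other three remain significant.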