
Re-centering a bootstrap distribution

1. Re-centering a bootstrap distribution for hypothesis testing

So far we have learned how to create bootstrap confidence intervals for estimation purposes. But what do we do if our goal is testing, not estimation? How can we use simulation methods to test whether a single parameter of a numerical distribution is different from, greater than, or less than some value? The answer is simple, though not necessarily intuitive. However, if you keep in mind one important aspect of hypothesis testing, namely that we assume the null hypothesis is true, the approach should make sense.

2. Re-centering a bootstrap distribution for hypothesis testing

Bootstrap distributions are by design centered at the observed sample statistic. However, since in a hypothesis test we assume that the null hypothesis is true, we shift the bootstrap distribution so that it is centered at the null value. The p-value is then defined as the proportion of simulations that yield a sample statistic at least as favorable to the alternative hypothesis as the observed sample statistic.
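
To make this concrete, here is a minimal sketch in Python with NumPy for a hypothesis test about a mean. The sample values, the null value, and the number of replicates are made up for illustration, and the p-value here is two-sided, measured as distance from the null value.

```python
# A minimal sketch: testing H0: mu = null_value for a mean using a
# re-centered bootstrap distribution. Data and null value are illustrative.
import numpy as np

rng = np.random.default_rng(42)

sample = np.array([12.1, 9.8, 11.4, 13.0, 10.2, 12.7, 11.9, 10.8, 12.3, 11.1])
null_value = 10.0                 # hypothesized population mean under H0
obs_stat = sample.mean()          # observed sample statistic

# 1. Bootstrap distribution: resample with replacement; by construction it
#    is centered near the observed sample statistic.
n_reps = 10_000
boot_stats = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_reps)
])

# 2. Re-center the bootstrap distribution at the null value.
shifted_stats = boot_stats - obs_stat + null_value

# 3. p-value: proportion of re-centered statistics at least as extreme as
#    the observed statistic (two-sided, distance from the null value).
p_value = np.mean(
    np.abs(shifted_stats - null_value) >= np.abs(obs_stat - null_value)
)
print(f"observed mean = {obs_stat:.2f}, p-value = {p_value:.4f}")
```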

3. Re-centering the bootstrap distribution - sketch

Here is a graphical representation: We start with our bootstrap distribution, which is always centered at the observed sample statistic. We then shift this distribution so that it is centered at the null value, and calculate the p-value as the proportion of simulations that yield bootstrap statistics at least as extreme as the observed sample statistic.
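
As a rough illustration of that sketch, the following continues the Python example above and plots the re-centered distribution together with the null value, the observed statistic, and its mirror point on the other side of the null value. It assumes matplotlib is available and that `shifted_stats`, `null_value`, and `obs_stat` from the previous block are still in scope.

```python
# Visualize the re-centered (null) bootstrap distribution; the regions beyond
# the two red lines are the simulations counted toward the two-sided p-value.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.hist(shifted_stats, bins=50, color="lightgray", edgecolor="white")
ax.axvline(null_value, color="black", linestyle="--", label="null value")
ax.axvline(obs_stat, color="red", label="observed statistic")
ax.axvline(2 * null_value - obs_stat, color="red", linestyle=":",
           label="mirror of observed statistic")
ax.set_xlabel("re-centered bootstrap statistic")
ax.set_ylabel("count")
ax.legend()
plt.show()
```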

4. Let's practice!

Now it's time to practice.