Markov chains
1. Markov chains
RJAGS is great! Once you get a feel for defining Bayesian models using RJAGS syntax, simulating a posterior is quite easy. In this video, we'll discuss some basics of the mechanics behind RJAGS: Markov chains.
2. Posterior simulation
Let's first revisit the Normal-Normal model for the impact of sleep deprivation on reaction time. Combining insights from the priors and observed sleep study data, you used RJAGS to approximate posterior models of parameters $m$ and $s$.
3. Markov chains
Let's focus on the mean parameter `m`. Your approximation of the `m` posterior is constructed from a sample of 10,000 `m` values, the first 20 of which are shown here. It's important to note that these values are NOT a random sample from the posterior. Rather, the `m` values form a Markov chain. As is the case for our Normal-Normal model, posteriors are often too complicated to define in closed form and too complicated to allow for direct random sampling. Thus RJAGS utilizes Markov chains to *approximate* posterior models.
4. Markov chain dependence
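This iteration-to-iteration dependence is built into the sampler itself: each new value is proposed relative to the current one. RJAGS's internal samplers are more sophisticated, but a minimal random-walk Metropolis sketch captures the idea. The sketch below is in Python rather than R (the mechanics are language-agnostic), and the Normal(30, 10) target and all tuning values are assumptions chosen purely for illustration:

```python
import math
import random

def log_normal_pdf(x, mu, sigma):
    # Log-density of Normal(mu, sigma); working on the log scale avoids underflow.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def metropolis(log_target, start, n_iter, step_sd=5.0, seed=42):
    # Random-walk Metropolis: each proposal is centered at the CURRENT value,
    # so every draw depends on the previous one -- a Markov chain, not a random sample.
    rng = random.Random(seed)
    chain = [start]
    current = start
    for _ in range(n_iter - 1):
        proposal = current + rng.gauss(0, step_sd)
        # Accept with probability min(1, target(proposal) / target(current)).
        if math.log(rng.random()) < log_target(proposal) - log_target(current):
            current = proposal
        chain.append(current)
    return chain

# Pretend the m posterior were Normal(30, 10) -- illustrative values only.
chain = metropolis(lambda x: log_normal_pdf(x, 30, 10), start=17, n_iter=10000)
print(sum(chain) / len(chain))  # hovers near the target mean of 30
```

Despite the dependence, the long-run collection of chain values approximates the target distribution, which is exactly how RJAGS approximates a posterior.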
Here the `m` chain starts at a value of roughly 17. We call this "iteration 1".
5. Markov chain dependence
Unlike in a random sample, each iteration of a Markov chain depends upon the previous iteration. In iteration 2, the chain moves from 17 to 35.
6. Markov chain dependence
Dependent upon iteration 2, the chain moves to a value of 36 in iteration 3.
7. Markov chain dependence
The chain continues to traverse the sample space, or range of plausible `m` values. In the first 20 iterations, the chain largely explores values between 20 and 40. We can also see hints of dependence among the values: in a few places, the chain drifts upward for a few iterations, then downward for a few.
8. Markov chain dependence
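That dependence can be quantified with lag-1 autocorrelation: the correlation between each chain value and the next. The toy chain below is a Python sketch with mean-reverting dynamics and numbers invented for illustration (it is not RJAGS's actual sampler); the point is simply that its autocorrelation is strongly positive, while an independent sample's is near zero:

```python
import random

def lag1_autocorr(xs):
    # Correlation between consecutive values; near 0 for an independent sample.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var

rng = random.Random(1)

# Toy mean-reverting chain around 30 (a stand-in for the m chain; values assumed).
chain = [17.0]
for _ in range(9999):
    chain.append(30 + 0.9 * (chain[-1] - 30) + rng.gauss(0, 3))

# An independent sample of the same size, for comparison.
iid = [rng.gauss(30, 7) for _ in range(10000)]

print(lag1_autocorr(chain))  # strongly positive: consecutive values cling together
print(lag1_autocorr(iid))    # near zero: no memory between draws
```

The floating up and down visible in the trace is exactly what high lag-1 autocorrelation looks like.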
After 100 iterations, the Markov chain has started to explore new territory, traversing a wider range of values from 0 to 60.
9. Markov chain trace plot
Finally, we can examine the trace plot of all 10,000 iterations, or steps, of the Markov chain. Trace plots illustrate the longitudinal behavior of the chain, marking its value at each subsequent iteration. As the chain traverses the sample space of `m`, we also want to examine the distribution of the values it visits along the way.
10. Markov chain distribution
Let's start with the first 20 steps of the Markov chain. The trace plot on the left illustrates the sequence of these steps. The histogram on the right summarizes the overall distribution of the first 20 values. Again, we see that these are largely restricted to the range between 20 and 40.
11. Markov chain distribution
The distribution of "values visited" both expands to a wider range and starts to smooth out after 100 iterations.
12. Markov chain distribution
After all 10,000 iterations, the Markov chain values are roughly Normally distributed around 30.
13. Markov chain distribution: an approximation of the posterior!
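This settling of the distribution can be checked numerically: summaries of the early iterations are still shaped by the starting value, while summaries of the full chain stabilize near the target. The Python sketch below uses a toy mean-reverting chain with invented numbers (started at 17, centered at 30) purely to illustrate the pattern:

```python
import random

def mean_sd(xs):
    # Sample mean and (population) standard deviation of a list of values.
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return m, sd

rng = random.Random(7)

# Toy mean-reverting chain started at 17, drifting around 30 (illustrative values).
chain = [17.0]
for _ in range(9999):
    chain.append(30 + 0.9 * (chain[-1] - 30) + rng.gauss(0, 3))

print(mean_sd(chain[:20]))  # early summary: still pulled toward the starting value
print(mean_sd(chain))       # full-chain summary: settles near the target mean of 30
```

This is why chain length matters: short chains reflect where the chain started, long chains reflect the distribution it converges to.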
The corresponding *density plot* of the $m$ chain values provides an *approximation* of the $m$ posterior model. Putting this all together, the $m$ Markov chain **traverses** the sample space of $m$ and in the end **mimics** a random sample that **converges** to the posterior.
14. Let's practice!
In the next set of exercises, you'll explore the magic of Markov chains.