MCMC settings

It’s useful to know the workflow of a JAGS session – or of any other software that uses Markov chain Monte Carlo methods to generate draws from posterior distributions. You can execute the steps one by one with functions in the rjags package, but usually it’s more convenient to use a wrapper such as the jagsUI::jags function, which takes care of the whole sequence. Several of the steps involve settings that the user can control. Here we refer to the arguments of jags, but other software has similar settings.
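
As a concrete illustration, here is a minimal sketch of a call to jagsUI::jags showing the settings discussed below; the file name, data object and parameter names are placeholders, not from a real analysis:

library(jagsUI)
out <- jags(data = jdata,                     # named list of data objects
            inits = NULL,                     # let JAGS generate starting values
            parameters.to.save = c("mu", "tau"),
            model.file = "model.txt",
            n.chains = 3,
            n.adapt = NULL,                   # adapt until adequate, up to 10,000
            n.iter = 11000,                   # total iterations, including burn-in
            n.burnin = 1000,
            n.thin = 1)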

Compilation

The first stage is to compile the model code in the file specified by model.file, producing a graph object in which each value (termed a node) is linked by edges to the nodes it depends on. Syntax errors in the model code will show up at this stage.
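
For example, a tiny model file could be written from R like this (a sketch: the normal model here is just a placeholder):

cat("model {
  for (i in 1:N) {
    y[i] ~ dnorm(mu, tau)
  }
  mu ~ dnorm(0, 0.001)
  tau ~ dgamma(0.001, 0.001)
}", file = "model.txt")

Delete the closing brace and compilation fails, with JAGS reporting a syntax error and the offending line.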

Initialization

Each of the stochastic nodes – those on the left side of “~” – is given a starting value. Values provided as data or as inits will be plugged in. If no values are provided, JAGS draws a random value from the prior distribution.

In general, it’s best to leave JAGS to generate initial values and only provide inits if that doesn’t work.

Some diagnostic tests look at differences between chains, but they only work properly if the chains have different starting values. That is taken care of when JAGS generates the values, or if inits is a function that generates random values for each chain.
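
A sketch of such a function for the placeholder model above: each call returns a fresh set of random values, so each chain gets its own starting point.

inits <- function() {
  list(mu = rnorm(1, 0, 1),     # random start for mu
       tau = runif(1, 0.1, 2))  # random start for tau (must be positive)
}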

Adaptation or tuning

Most algorithms used to generate posterior draws have parameters that affect the efficiency of the process, in particular the degree of autocorrelation among the draws. Optimal values for these depend on the specific model, and during this phase the sampler’s parameters are tuned until performance is adequate. Draws generated during this phase are discarded.

The number of adaptation iterations can be controlled with the n.adapt argument to jags. The default (n.adapt = NULL) is to run batches of 100 iterations until adaptation is adequate, up to a maximum of 10,000. You may need to set this yourself if jags reports that 10,000 iterations were not enough.
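
For example, you could rerun with a larger, explicit value (a sketch, reusing the placeholder objects from above):

out <- jags(data = jdata, inits = inits,
            parameters.to.save = c("mu", "tau"),
            model.file = "model.txt", n.chains = 3,
            n.adapt = 20000,     # force a longer adaptation phase
            n.iter = 11000, n.burnin = 1000)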

Burn-in or warm-up

The starting values can be very far from the main body of the posterior distribution, especially if you have very weak priors, and the first few values in the chain will be highly improbable. It’s common practice to discard the initial portion of the chain that is affected by the starting values: this is called burn-in, though warm-up may be a better term. The number of iterations to discard is controlled by the n.burnin argument to jags.

Simple models move away from the starting values quickly, and the iterations discarded during adaptation are often enough: no further burn-in is needed. Hierarchical models with a large number of estimated parameters may require more burn-in, which will show up in the diagnostic trace plots. In that case, it would be nice to remove the initial iterations and keep the remainder. You can do this with the window function for mcmcOutput objects, but there is currently no easy way to do it with the jags output. The alternative is to repeat the run with a larger value for n.burnin or n.adapt.
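
A sketch of the window approach, assuming the jagsUI output object out from the earlier sketches; see the mcmcOutput help pages for the exact arguments to window:

library(mcmcOutput)
mco <- mcmcOutput(out)            # convert the jagsUI output object
mco2 <- window(mco, start = 501)  # discard the first 500 retained draws per chain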

Generating the draws to retain

Once the preliminaries are done, we can collect the “mountains of numbers” that will define our posterior distributions. Output objects can become very large, and we may not be able to fit all the available output into R’s memory. There are three things we can control:

The parameters to be monitored are given in the parameters.to.save argument, which takes a character vector of the parameter names. In some software it’s possible to specify subsets of parameters, e.g., “p[4:7]”, but not all functions will do this.

Different software packages have different ways of controlling how many iterations are generated during this phase. For jags, the n.iter argument includes burn-in, so this phase runs for n.iter - n.burnin iterations per chain, before thinning (see below for thinning).

We can describe the posterior distribution as precisely as we like, provided we have a sufficiently large mountain of numbers (see the section on Precision). Sometimes autocorrelation among the draws in the chain is high, and we need a huge number of iterations to get sufficient precision. That takes up a lot of memory and slows down processing of the output. The solution is thinning.

Thinning does not affect the number of iterations generated internally by JAGS, but it reduces the number in the output object. If we set n.thin = 10, jags will return every 10th value in the chain. The number of draws returned per chain is (n.iter - n.burnin) / n.thin. A total of 30,000 draws over all chains is enough for most purposes.
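
A quick worked example of that arithmetic:

n.chains <- 3
n.iter   <- 110000
n.burnin <- 10000
n.thin   <- 10
(n.iter - n.burnin) / n.thin              # 10,000 draws retained per chain
(n.iter - n.burnin) / n.thin * n.chains   # 30,000 draws in total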

Note that thinning does not “solve” the problem of autocorrelation. The autocorrelation appears to be lower, but this is an illusion: the thinned chain has less information than the original, unthinned chain. Thinning is simply a way to avoid excessively large objects in R.

Updating

Usually we will have the output we need as the result of the last step. But many packages, including jagsUI, provide functions to start generating draws again where the previous run finished. This happens if you provide the jags output object as the first argument to the update function.

The compilation, initialization and burn-in phases can be skipped, and output generation begins after a short adaptation phase.

Unfortunately, update returns a new output object and there is no easy way to combine the new output with the previous output.

Updating is useful if a huge burn-in is required and the first run did not allow enough burn-in iterations: running update effectively treats the whole of the first run as burn-in. It also allows you to change the parameters monitored and the thinning rate, and continue the run without having to sit through the burn-in phase again, as in the sketch below.
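
A sketch, continuing from the out object above; the new settings are placeholders:

out2 <- update(out,
               parameters.to.save = "mu",  # change the parameters monitored
               n.iter = 50000,             # additional iterations per chain
               n.thin = 5)                 # new thinning rate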
