Statistics
Tae  

Faster Bayesian Inference with INLA

Published on 2021-05-14

Introduction

It is no secret that Bayesian statistics – Markov chain Monte Carlo (MCMC) methods in particular – has been gaining popularity in the past couple of decades. I remember making memes in undergrad about how “Bayesian inference is so hot right now!” Yes, I made statistics memes.

Software packages like NIMBLE, STAN, and JAGS make implementing MCMC easier and more accessible. However, there is a reason why MCMC only became viable in the last couple of decades: computational cost. This issue persists even today, when we have more compute power than ever before. The time required for an MCMC chain to stabilize grows quickly with the sample size and the number of hyperparameters.

Integrated nested Laplace approximation (INLA), proposed by Rue, Martino, and Chopin in 2009, is a modern tool for Bayesian inference that provides similar insights to MCMC at a much lower computational cost. Check out this presentation by John Paige at the University of Washington in 2017. The last few slides compare the computation times of INLA and MCMC, and the difference is staggering: INLA takes a few seconds whereas MCMC takes hours or even longer.

I came across INLA during my last semester at George Mason University. While researching the topic for a class project, I noticed that many resources online compared INLA and MCMC for Gaussian responses. So, I ambitiously set out to compare the performances of the two approaches with respect to inference and prediction in non-Gaussian settings.

Statistical Methods

Markov chain Monte Carlo

MCMC is a family of sampling algorithms used to estimate the joint posterior distribution in Bayesian statistics. The two most popular are the Metropolis-Hastings algorithm and Gibbs Sampling, a special case of Metropolis-Hastings. There are resources online that explain the details of each algorithm much better than I can, so I’m going to simplify a bit here.

Let $p\left(\theta\mid\mathbf{y}\right)$ be the posterior distribution of interest, where $\theta$ is the hyperparameter and $\mathbf{y}$ represents the observed data. Then, from Bayes’ Rule, we know that $p\left(\mathbf{y}\mid\theta\right) \cdot p\left(\theta\right)$ is proportional to the distribution of interest, also called the target distribution. The Metropolis-Hastings algorithm works in the following way:

  1. Choose an arbitrary starting value $\theta^{(0)}$ and an arbitrary proposal density $q$.
  2. Draw a candidate value $\theta^*$ for the next iteration given the previous value $\theta^{(i-1)}$ from the proposal density.
  3. Calculate the acceptance ratio $\alpha$ (see below).
  4. Generate a random number $u$ from $U \sim \text{Unif}(0,1)$ and compare it to $\alpha$.
  5. Reject $\theta^*$ if $u > \alpha$ and set $\theta^{(i)} = \theta^{(i-1)}$; otherwise, $\theta^{(i)}=\theta^*$.
\alpha = \frac{p\left(\mathbf{y}\mid\theta^*\right) \cdot p\left(\theta^*\right)\cdot q\left(\theta^{(i-1)}\mid\theta^*\right)}{p\left(\mathbf{y}\mid\theta^{(i-1)}\right) \cdot p\left(\theta^{(i-1)}\right) \cdot q\left(\theta^*\mid\theta^{(i-1)}\right)}

A Gaussian distribution is commonly chosen as the proposal density, but there are other options as well.
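
To make the steps concrete, here is a minimal random-walk Metropolis-Hastings sketch in R. The target is an assumed stand-in for the unnormalized posterior $p\left(\mathbf{y}\mid\theta\right)\cdot p\left(\theta\right)$ (here just a standard Normal density for illustration), and the proposal standard deviation is an arbitrary choice, not something from the original study.

    # Minimal random-walk Metropolis-Hastings sketch (illustrative only).
    # log_target stands in for log p(y | theta) + log p(theta).
    log_target <- function(theta) dnorm(theta, mean = 0, sd = 1, log = TRUE)

    mh_sample <- function(n_iter, theta0 = 0, proposal_sd = 1) {
      chain <- numeric(n_iter)
      chain[1] <- theta0
      for (i in 2:n_iter) {
        # Step 2: draw a candidate from the (symmetric) Gaussian proposal.
        theta_star <- rnorm(1, mean = chain[i - 1], sd = proposal_sd)
        # Step 3: acceptance ratio on the log scale; the q terms cancel
        # because the Gaussian proposal is symmetric.
        log_alpha <- log_target(theta_star) - log_target(chain[i - 1])
        # Steps 4-5: accept or reject.
        chain[i] <- if (log(runif(1)) < log_alpha) theta_star else chain[i - 1]
      }
      chain
    }

    chain <- mh_sample(10000)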

Gibbs Sampling works similarly but always accepts the proposal $\theta^*$. It exploits the fact that, for a multivariate parameter, sampling from each conditional distribution is often easier than working with the joint distribution directly. Let $\boldsymbol{\theta} = \left(\theta_1,\dots,\theta_p\right)$ be a multivariate parameter, where $p$ is the number of parameters. The proposal density for $\theta_j \in \{\theta_k\}_{k=1}^{p}$ is the full conditional distribution $p\left(\theta_j\mid\theta_{-j},\mathbf{y}\right)$, where $\theta_{-j}$ denotes the vector of $\theta$’s in $\boldsymbol{\theta}$ excluding $\theta_j$. This means that the acceptance ratio $\alpha$ for $\theta_j$ is:

\begin{align*}
\alpha &= \frac{\left[p\left(\mathbf{y}\mid\theta_j^*,\theta_{-j}^{(i-1)}\right) \cdot p\left(\theta_j^*,\theta_{-j}^{(i-1)}\right)\right] \cdot p\left(\theta_j^{(i-1)}\mid\theta_{-j}^{(i-1)},\mathbf{y}\right)}{\left[p\left(\mathbf{y}\mid\theta_j^{(i-1)},\theta_{-j}^{(i-1)}\right) \cdot p\left(\theta_j^{(i-1)},\theta_{-j}^{(i-1)}\right)\right] \cdot p\left(\theta_j^* \mid \theta_{-j}^{(i-1)},\mathbf{y}\right)}\\
&= \frac{\left[p\left(\theta^*_j \mid \theta^{(i-1)}_{-j},\mathbf{y}\right) \cdot p\left(\theta^{(i-1)}_{-j},\mathbf{y}\right)\right] \cdot p\left(\theta^{(i-1)}_{j}\mid\theta^{(i-1)}_{-j},\mathbf{y}\right)}{\left[ p\left(\theta^{(i-1)}_{j}\mid\theta^{(i-1)}_{-j},\mathbf{y}\right) \cdot p\left(\theta^{(i-1)}_{-j},\mathbf{y}\right) \right] \cdot p\left(\theta^*_j \mid \theta^{(i-1)}_{-j},\mathbf{y}\right)} \\
&= 1
\end{align*}

Because every proposal is accepted, Gibbs Sampling is generally more computationally efficient than the plain Metropolis-Hastings algorithm. So, when sampling from a multivariate parameter space is necessary and the full conditionals are tractable (as with the Gaussian distribution), Gibbs Sampling is preferred.
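
As a toy example of sampling from full conditionals, here is a short R sketch of a Gibbs sampler for a standard bivariate Normal with correlation $\rho$; the value of $\rho$ is an arbitrary illustration, and the closed-form conditionals are a property of the Normal distribution, not of the study below.

    # Gibbs sampler sketch for a standard bivariate Normal with correlation rho.
    # Every draw comes from a full conditional, so every proposal is accepted.
    gibbs_bvn <- function(n_iter, rho = 0.8) {
      x <- matrix(0, nrow = n_iter, ncol = 2)
      for (i in 2:n_iter) {
        # x1 | x2 ~ N(rho * x2, 1 - rho^2)
        x[i, 1] <- rnorm(1, mean = rho * x[i - 1, 2], sd = sqrt(1 - rho^2))
        # x2 | x1 ~ N(rho * x1, 1 - rho^2)
        x[i, 2] <- rnorm(1, mean = rho * x[i, 1], sd = sqrt(1 - rho^2))
      }
      x
    }

    draws <- gibbs_bvn(5000)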

The core idea behind MCMC is that, after a large number of iterations, the chain will have found the region where the posterior density is concentrated. The relative frequency of the accepted values can then be plotted to give a rough idea of the posterior distribution. Since it is extremely unlikely that the algorithm started in the right place, the convention is to set a “burn-in” period and discard those early iterations while the algorithm stabilizes. Additionally, because each draw is inevitably correlated with the previous one, the practitioner can also specify a “thinning” interval so that only every n-th value is recorded.
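
In code, burn-in and thinning are just a matter of subsetting the stored chain. Applied to the chain from the Metropolis-Hastings sketch above, with arbitrary illustrative choices:

    # Discard the first 1,000 draws as burn-in, then keep every 10th draw.
    burn_in <- 1000
    thin    <- 10
    kept    <- chain[seq(burn_in + 1, length(chain), by = thin)]
    hist(kept, breaks = 50, main = "Approximate posterior")  # rough posterior shape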

Integrated nested Laplace approximation

INLA speeds up the Bayesian inference process by employing the following three tactics:

  1. Focus on models that can be expressed as latent Gaussian Markov random fields.
  2. Split up the joint posterior into a nested product of marginal posteriors.
  3. Approximate the marginal posteriors using Laplace approximation.

Rue, Martino, and Chopin claim that inference on the marginal posteriors is often sufficient and substantially decreases the computation time. The main benefit of the Laplace approximation is that it converts an integration problem into an optimization problem, which is generally much easier to solve. Here’s the gist of how the Laplace approximation works.
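
In symbols (following the standard presentation of INLA rather than anything specific to this project), with $\mathbf{x}$ the latent Gaussian field and $\boldsymbol{\theta}$ the hyperparameters, the marginals that INLA targets are nested integrals:

\begin{align*}
p\left(x_i \mid \mathbf{y}\right) &= \int p\left(x_i \mid \boldsymbol{\theta}, \mathbf{y}\right) \cdot p\left(\boldsymbol{\theta} \mid \mathbf{y}\right)\ d\boldsymbol{\theta}\\
p\left(\theta_j \mid \mathbf{y}\right) &= \int p\left(\boldsymbol{\theta} \mid \mathbf{y}\right)\ d\boldsymbol{\theta}_{-j}
\end{align*}

Each factor inside these integrals is approximated with a Laplace approximation (hence “nested”), and the outer integral over $\boldsymbol{\theta}$ is computed numerically over a small set of points in the hyperparameter space.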

Let $f(x)$ be a well-behaved, positive, unimodal function that is twice-differentiable and achieves its maximum at $x_0$. Let $g(x) = \ln(f(x))$. The goal is to compute

\int_a^b f(x)\ dx = \int_a^b \exp(g(x))\ dx

where $a, b$ are potentially infinite. Then, taking a second-order Taylor expansion of $g$ around $x_0$,

\int_a^b \exp(g(x))\ dx \approx \int_a^b\exp\left(g(x_0) + g'(x_0)(x-x_0)+\frac{1}{2}g''(x_0)(x-x_0)^2\right)\ dx

Notice that $g'(x_0) = 0$: since the logarithm is monotonically increasing, $g$ attains its maximum at the same point $x_0$ where $f$ does. So,

\begin{align*}
\int_{a}^{b} f(x)\ dx &\approx \int_{a}^{b} \exp\left(g(x_0)+ \frac{1}{2}g''(x_0)(x-x_0)^2\right)\ dx\\
&=\exp\left(g(x_0)\right)\cdot\int_{a}^{b} \exp\left(-\frac{1}{2}\frac{(x-x_0)^2}{-g''(x_0)^{-1}}\right)\ dx\\
&=\exp\left(g(x_0)\right) \cdot \sqrt{\frac{2\pi}{-g''(x_0)}}\cdot\left[\Phi\left(b \mid x_0, -g''(x_0)^{-1}\right)-\Phi\left(a \mid x_0, -g''(x_0)^{-1}\right)\right]
\end{align*}

where $\Phi\left(z \mid \mu,\sigma^2 \right)$ is the cumulative distribution function for a Normal distribution with mean $\mu$ and variance $\sigma^2$. Notice that the Normal distribution is centered at the mode $x_0$. The specifics of the INLA algorithm are well-explained here.
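
As a quick numerical sanity check (not part of the original write-up), the approximation can be compared against exact and numerical integration for an integrand with a known answer, $f(x) = x^{a-1}e^{-bx}$ on $(0,\infty)$, whose integral is $\Gamma(a)/b^a$; the values of $a$ and $b$ below are arbitrary.

    # Laplace approximation of the integral of f(x) = x^(a - 1) * exp(-b * x)
    # over (0, Inf); the exact value is gamma(a) / b^a.
    a <- 5; b <- 2
    g  <- function(x) (a - 1) * log(x) - b * x   # g(x) = log f(x)
    x0 <- (a - 1) / b                            # mode of f
    g2 <- -(a - 1) / x0^2                        # g''(x0) < 0

    laplace <- exp(g(x0)) * sqrt(2 * pi / (-g2)) # Phi terms are ~1 on (0, Inf) here
    exact   <- gamma(a) / b^a
    numer   <- integrate(function(x) exp(g(x)), 0, Inf)$value

    c(laplace = laplace, exact = exact, numeric = numer)
    # laplace ~ 0.73, exact = numeric = 0.75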

One major benefit of INLA is that it is fully usable from R. The heavy computation is handled by compiled code under the hood, but the $\verb+INLA+$ package allows users to write everything in R without learning a new syntax. While MCMC sampling libraries have fairly simple syntax, it’s still something the practitioner has to learn. Moreover, because the INLA algorithm focuses on marginal distributions, parallelization is as easy as setting the desired number of processor cores. Check out https://www.r-inla.org/ for further details on the R package.
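
For a flavor of the syntax, here is a minimal example of fitting a Poisson regression with the INLA package on simulated data; the data, formula, and coefficient values are placeholders rather than anything from the study below.

    # Minimal INLA example: Poisson regression on simulated data.
    # Assumes the INLA package is installed (see https://www.r-inla.org/).
    library(INLA)

    set.seed(1)
    n  <- 500
    x  <- rnorm(n)
    y  <- rpois(n, lambda = exp(0.5 + 1.2 * x))   # assumed true intercept and slope
    df <- data.frame(y = y, x = x)

    fit <- inla(y ~ x, family = "poisson", data = df)
    summary(fit)        # run time and marginal posterior summaries
    fit$summary.fixed   # posterior mean, sd, and quantiles for the fixed effects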

Results

I won’t go into the details of the study design, but I first generate random data for a number of different scenarios that include Gaussian, Bernoulli, and Poisson responses. Then, both MCMC and INLA models are fit, and their respective run times, mean squared error (MSE) for the model parameters, and mean squared prediction error (MSPE) are recorded.
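
A data-generating step for the three response types might look like the following sketch; the coefficients, covariates, and sample size are placeholders, not the actual study design.

    # Illustrative data generation for the three response families (not the
    # actual study design): one linear predictor, three likelihoods.
    set.seed(42)
    n   <- 1000
    x1  <- rnorm(n)
    x2  <- rnorm(n)
    eta <- 0.3 + 0.8 * x1 - 0.5 * x2                      # assumed coefficients

    y_gauss <- rnorm(n, mean = eta, sd = 1)               # Gaussian response
    y_bern  <- rbinom(n, size = 1, prob = plogis(eta))    # Bernoulli, logit link
    y_pois  <- rpois(n, lambda = exp(eta))                # Poisson, log link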

  n      INLA   MCMC
  100    0.50   12.97
  200    0.52   14.90
  1000   0.67   18.84
  2000   0.85   23.24
Bernoulli Response Run Times (seconds)

  n      INLA   MCMC
  100    0.59   11.77
  200    0.59   13.06
  1000   0.80   15.45
  2000   1.11   18.29
Gaussian Response Run Times (seconds)

  n      INLA   MCMC
  100    0.53   12.27
  200    0.52   14.00
  1000   0.66   18.55
  2000   0.84   23.96
Poisson Response Run Times (seconds)

INLA is much faster than MCMC in all scenarios. However, faster computation alone is not enough. Let’s look at inference performance:

  n      INLA   MCMC
  100    6.21   1.66
  200    2.38   1.12
  1000   0.41   0.30
  2000   0.19   0.15
Bernoulli Response MSE

  n      INLA   MCMC
  100    4.52   2.54
  200    2.15   2.45
  1000   0.42   1.85
  2000   0.21   1.32
Gaussian Response MSE

  n      INLA   MCMC
  100    0.93   0.62
  200    0.46   0.34
  1000   0.09   0.07
  2000   0.04   0.04
Poisson Response MSE

MCMC generally performs better than INLA for smaller datasets, and the difference in performance shrinks as the sample size increases. Surprisingly, INLA performs better than MCMC in the Gaussian setting once the sample size reaches 200. Finally, let’s examine prediction performance:

  n      INLA   MCMC
  100    0.43   0.43
  200    0.42   0.41
  1000   0.40   0.40
  2000   0.40   0.39
Bernoulli Response MSPE

  n      INLA   MCMC
  100    5.11   6.71
  200    4.95   6.44
  1000   4.78   6.13
  2000   4.78   6.09
Gaussian Response MSPE

  n      INLA   MCMC
  100    1.28   1.19
  200    1.15   1.14
  1000   1.10   1.10
  2000   1.08   1.08
Poisson Response MSPE

For the non-Gaussian responses, INLA and MCMC have almost identical MSPE regardless of sample size. What’s interesting is that INLA performs better than MCMC for the Gaussian response by a noticeable amount.

Conclusion

MCMC seems to perform slightly better than INLA for non-Gaussian responses with respect to both inference and prediction. However, this comes at a much higher computational cost. In my testing, I purposefully built simple scenarios because I wanted to run the same scenario multiple times to get stable performance metrics. Had I designed something more complicated where MCMC takes hours, like the scenarios in John Paige’s presentation, I would not have been able to finish this project. As for Gaussian responses, it seems like INLA performs better than MCMC except when the sample size is small and the goal is inference rather than prediction. Personally, I would take that minor inference performance penalty of INLA over MCMC for the Gaussian response.

There are some caveats to these results, however. Bayesian models shine when the underlying processes are complex. In cases like this simulation study, where all covariates are independent of each other and there is no hierarchical structure, there is little reason to consider a Bayesian framework. Frequentist methods such as generalized linear models (GLMs) run much faster than INLA and provide almost identical performance in these simple cases. While MCMC performs slightly better in terms of MSE, that may be because the MCMC algorithm was given the exact distribution of the model parameters. Another limitation of this study is the fixed number of covariates. Also, I only explored two types of non-Gaussian responses.

These caveats are mainly due to limited time and computation power as this was a class project and I had to use my personal computer. With more time and computation power, it may be interesting to vary the number of covariates, implement correlation structures between covariates, consider hierarchical structures, or explore other non-Gaussian responses.

Recommended Resources