A Bayesian approach to statistical modeling comes with many advantages. For example, it's the only logically coherent way to model uncertainty in parameter estimates! Being Bayesian has never been easier than it is now, thanks to high-quality, easy-to-use automatic tools like Stan.
One difficulty is that a Bayesian analysis requires specifying a complete generating process for your data: a fully specified likelihood, describing the distribution of your data given your parameters, as well as prior distributions on your parameters. Since all models are wrong, and priors may be subjective, one typically hopes that the conclusions of a Bayesian analysis don't depend too much on the particular choices that were made to describe the model. A Bayesian model and dataset whose conclusions don't change much over a reasonable range of priors and likelihoods are called "robust" [1].
Unfortunately, getting even a single Bayesian estimate can be time-consuming, and getting a large number of estimates for a wide range of priors is usually prohibitively time-consuming. For this reason, many users simply don't check for robustness and hope for the best.
Fortunately, it's possible to easily calculate approximate measures of robustness using the "local sensitivity" of the posterior [2], and Stan's automatic differentiation engine means these sensitivity measures can now be computed automatically from a single Stan run with very little additional effort [3, 4].
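Concretely, write a prior hyperparameter as $t$ and a posterior expectation as $\mathrm{E}[\theta \mid x, t]$. The local sensitivity is the derivative of the posterior expectation with respect to $t$, and a classical identity (see [2, 4]) expresses it as a posterior covariance:

$$\frac{d}{dt} \mathrm{E}[\theta \mid x, t] = \mathrm{Cov}\left(\theta,\ \frac{\partial \log p(\theta \mid t)}{\partial t}\right),$$

where the covariance is taken under the posterior at the original value of $t$. Since this is just a posterior covariance, it can be estimated from the draws of a single MCMC run, with the log prior derivative supplied by Stan's autodiff. Dividing by the posterior standard deviation gives the normalized sensitivities reported in the graph below.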
We've made an R package called rstansensitivity to do just that. You can install it from the GitHub repo rgiordan/StanSensitivity.
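If you use devtools, installation might look something like the sketch below (the subdir argument is an assumption about the repo layout; the repo README has the authoritative instructions):

library(devtools)
# Assumption: the R package lives in an rstansensitivity/ subdirectory
# of the rgiordan/StanSensitivity repo.
install_github("rgiordan/StanSensitivity", subdir = "rstansensitivity")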
To use it, make a Stan model with a special hyperparameters block containing everything that your model might be sensitive to. Otherwise, write your model as usual. For example, suppose you have a single data point, x, which is normal with unknown mean theta. If you have a normal prior on theta, you might specify the prior mean and standard deviation as hyperparameters like so:
data {
  real x;
  real x_sd;
}
parameters {
  real theta;
}
hyperparameters {
  real theta_prior_mean;
  real theta_prior_sd;  // Note: hyperparameters must be unconstrained.
}
model {
  theta ~ normal(theta_prior_mean, theta_prior_sd);
  x ~ normal(theta, x_sd);
}
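Compile and sample as usual, and then a handful of extra commands compute the sensitivities from the MCMC draws. Here is a minimal sketch, assuming the model above is saved as normal_model.stan; the function names and data values below are illustrative and may differ from the current package version, so see the repo README and example scripts for a complete, up-to-date analysis:

library(rstan)
library(rstansensitivity)

# Generate augmented Stan programs from the model above. (The hyperparameters
# block is not valid standard Stan; the package rewrites it into ordinary Stan
# programs, with the hyperparameters passed in as data.)
model_name <- GenerateSensitivityFromModel("normal_model.stan")

# Placeholder data, plus base hyperparameter values at which to
# evaluate the sensitivities.
stan_data <- list(x = 1.0, x_sd = 1.0,
                  theta_prior_mean = 0.0, theta_prior_sd = 1.0)

# Run the original Bayesian analysis with rstan, as usual.
model <- stan_model(GetSamplingModelFilename(model_name))
sampling_result <- sampling(model, data = stan_data, chains = 3, iter = 3000)

# Compute the local sensitivities from the draws and plot them.
sens_model <- GetStanSensitivityModel(model_name, stan_data)
sens_result <- GetStanSensitivityFromModelFit(sampling_result, sens_model)
PlotSensitivities(GetTidyResult(sens_result))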
Do your original Bayesian analysis, and then, with a few R commands like those in the sketch above, in a few seconds you will automatically get a graph that looks something like this:
This graph shows that increasing theta_prior_mean by 1 would increase the posterior expectation of theta by about 2.25 posterior standard deviations. That's very sensitive, so the posterior is not robust!
In this case it's no surprise: with only a single data point, the prior largely determines the posterior. But in more complicated models, a lack of robustness can be much harder to detect, and that's where this package is particularly useful. For an in-depth example of how useful this kind of analysis can be, see the worked example in the GitHub repo.
References
[1]: Robust Bayesian Analysis. Ríos Insua, Ruggeri (eds.) (2000)
[2]: Local sensitivity of posterior expectations. Gustafson (1996)
[3]: Local posterior robustness with parametric priors: Maximum and average sensitivity. Basu, Jammalamadaka, Liu (1996)
[4]: Covariances, Robustness, and Variational Bayes. Giordano, Broderick, Jordan (2017)