In this exercise we will look at different adaptive Metropolis algorithms. The aim is to get an intuition for what their “adaptations” are doing, by examining their behaviour with our simple `monod` model. You are encouraged to play with the adaptation parameters of each algorithm and to check how they influence the resulting sample chain.
You can use your own code from exercise 2 or the functions defined below. And of course, you can also do the exercises with the growth model.
## load the monod model
source("../../models/models.r")
## read data
data.monod <- read.table("../../data/model_monod_stoch.csv", header=T)
## Logprior for model "monod": lognormal distribution
prior.monod.mean <- 0.5 * c(r_max=5, K=3, sigma=0.5)
prior.monod.sd <- 0.5 * prior.monod.mean
logprior.monod <- function(par, mean, sd){
sdlog <- sqrt(log(1+sd*sd/(mean*mean)))
meanlog <- log(mean) - sdlog*sdlog/2
return(sum(dlnorm(par, meanlog=meanlog, sdlog=sdlog, log=TRUE)))
}
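## Moment matching for the lognormal: for a prior with mean m and sd s,
##   sdlog^2 = log(1 + s^2/m^2)
##   meanlog = log(m) - sdlog^2/2
## so that the resulting lognormal has exactly mean m and sd s.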
## Log-likelihood for model "monod"
loglikelihood.monod <- function(y, C, par){
## deterministic part:
y.det <- model.monod(par, C) # defined in `models.r`
## Calculate loglikelihood assuming independence:
return( sum(dnorm(y, mean=y.det, sd=par['sigma'], log=TRUE )) )
}
## Log-posterior for model "monod"
logposterior.monod <- function(par) {
lp <- logprior.monod(par, prior.monod.mean, prior.monod.sd)
if(is.finite(lp)){
return( lp + loglikelihood.monod(data.monod$r, data.monod$C, par) )
} else {
return(-Inf)
}
}
using DataFrames
import CSV
using Distributions
using ComponentArrays
## load monod model
include("../../models/models.jl");
## read data
monod_data = CSV.read("../../data/model_monod_stoch.csv", DataFrame)
# set parameters
prior_monod_mean = ComponentVector(r_max = 2.5, K=1.4, sigma=0.25);
prior_monod_sd = 0.5 .* prior_monod_mean;
## Use a lognormal distribution for all model parameters
function logprior_monod(par, m, sd)
μ = @. log(m/sqrt(1+sd^2/m^2))
σ = @. sqrt(log(1+sd^2/m^2))
return sum(logpdf.(LogNormal.(μ, σ), par)) # sum because we are in the log-space
end
## Log-likelihood for model "monod"
function loglikelihood_monod(par::ComponentVector, data::DataFrame)
y_det = model_monod(data.C, par)
return sum(logpdf.(Normal.(y_det, par.sigma), data.r))
end
## Log-posterior for model "monod"
function logposterior_monod(par::ComponentVector)
lp = logprior_monod(par, prior_monod_mean, prior_monod_sd)
if !isinf(lp)
lp += loglikelihood_monod(par, monod_data)
end
lp
end
We try the adaptive Metropolis with delayed rejection, as described by Haario, Saksman and Tamminen (2001).
What could you use as initial values?
What is a meaningful initial covariance matrix for the jump distribution?
Plot the chains and see how quickly they converge.
Look at 2d marginal plots. What happens in these marginal plots if you don’t cut off a burn-in or only use the beginning of the chain?
It is implemented in the function `modMCMC` of the package `FME`. Note that this function expects the negative log density, which is sometimes called “energy”. See the documentation of the `modMCMC` function for details.
neg.log.post <- function(par) -logposterior.monod(par)
Now call the `modMCMC` function and investigate the effects of the `updatecov` parameter (try values of 10, 100 and 1000), which determines how often the covariance of the jump distribution is updated, and the `ntrydr` parameter (try values of 1, 3 and 10), which determines the number of permissible jump attempts before the step is rejected. In particular, examine the first part of the chain and see how the adaptation works.
AMDR <- modMCMC(
f = neg.log.post,
p = par.init,
jump = jump.cov,
niter = 10000,
updatecov = 10,
ntrydr = 3
)
The Julia package `AdaptiveMCMC.jl` provides multiple MCMC algorithms, including the adaptive Metropolis.
using ComponentArrays
using AdaptiveMCMC
using MCMCChains
using Plots
using StatsPlots
θinit = ComponentVector(r_max=..., K=..., sigma=...)
res = adaptive_rwm(θinit, logposterior_monod,
10_000;
algorithm = :am,
b=1, # length of burn-in. Set to 1 for no burn-in.
);
# convert to MCMCChains for summary and plotting
chn = Chains(res.X', labels(θinit))
plot(chn)
corner(chn)
library(FME)
## Loading required package: rootSolve
library(IDPmisc)
neg.log.post <- function(par) -logposterior.monod(par)
The mean of the prior seems to be a reasonable point to start the sampler. Alternatively, we could try to find the point of maximum posterior density with an optimizer. For the covariance matrix of the jump distribution we use a diagonal matrix based on the prior standard deviations, i.e. we assume independence.
par.init <- prior.monod.mean
jump.cov <- diag(prior.monod.sd/2)
AMDR <- FME::modMCMC(f = neg.log.post,
p = par.init,
jump = jump.cov,
niter = 10000,
updatecov = 10,
ntrydr = 3
)
## number of accepted runs: 8062 out of 10000 (80.62%)
## plot chains
plot(AMDR)
## 2d marginals
pairs(AMDR$pars)
IDPmisc::ipairs(AMDR$pars)
using ComponentArrays
using AdaptiveMCMC
using MCMCChains
using Plots
using StatsPlots
θinit = ComponentVector(r_max = 2.5, K=1.4, sigma=0.25) # use prior mean
## ComponentVector{Float64}(r_max = 2.5, K = 1.4, sigma = 0.25)
res = adaptive_rwm(θinit, logposterior_monod,
10_000;
algorithm = :am,
b=1, # length of burn-in. Set to 1 for no burn-in.
);
# convert to MCMCChains for summary and plotting
chn = Chains(res.X', labels(θinit))
## Chains MCMC chain (10000×3×1 reshape(adjoint(::Matrix{Float64}), 10000, 3, 1) with eltype Float64):
##
## Iterations = 1:1:10000
## Number of chains = 1
## Samples per chain = 10000
## parameters = r_max, K, sigma
##
## Use `describe(chains)` for summary statistics and quantiles.
plot(chn)
corner(chn)
The robust adaptive Metropolis algorithm proposed by Vihola (2012) is often a good choice. It adapts the scale and rotation of the covariance matrix until it reaches a predefined acceptance rate.
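To make the adaptation concrete: with the proposal \(Y_n = X_{n-1} + S_{n-1} U_n\), \(U_n \sim N(0, I)\), Vihola (2012) updates the factor \(S_n\) of the jump covariance according to (notation slightly simplified)
\[ S_n S_n^T = S_{n-1}\left(I + \eta_n \, (\alpha_n - \alpha^*) \, \frac{U_n U_n^T}{\|U_n\|^2}\right) S_{n-1}^T, \]
where \(\alpha_n\) is the acceptance probability of the current step, \(\alpha^*\) the target acceptance rate, and \(\eta_n\) a decaying step size. When a step is accepted with higher probability than targeted, the covariance is inflated along the proposed direction; otherwise it is shrunk.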
Did the algorithm reach the desired acceptance rate?
How is the covariance matrix after the adaptation different from the initial covariance that you provided?
The package `adaptMCMC` provides the function `MCMC`. If the argument `adapt` is set to `TRUE`, it implements the adaptation proposed by Vihola. Again, examine the effect of the adaptation settings on the chains, in particular the first part of the chain, to see how the adaptation works. Here the adaptation is controlled by the parameter `acc.rate`. Try values between 0.1 and 1.
What is the influence on the burn-in? What happens if you use a very bad initial value?
RAM <- MCMC(
p = logposterior.monod,
n = 10000,
init = par.start,
scale = jump.cov,
adapt = TRUE,
acc.rate = 0.5
)
str(RAM)
The Julia package `AdaptiveMCMC.jl` provides multiple MCMC algorithms, including the robust adaptive Metropolis.
using ComponentArrays
using AdaptiveMCMC
using MCMCChains
using Plots
using StatsPlots
θinit = ComponentVector(r_max=..., K=..., sigma=...)
res = adaptive_rwm(θinit, logposterior_monod,
10_000;
algorithm = :ram,
b=1, # length of burn-in. Set to 1 for no burn-in.
);
# convert to MCMCChains for summary and plotting
chn = Chains(res.X', labels(θinit))
plot(chn)
corner(chn)
library(adaptMCMC)
library(IDPmisc)
The mean of the prior seems to be a reasonable point to start the sampler. Alternatively, we could try to find the point of maximum posterior density with an optimizer. For the covariance matrix of the jump distribution we use a diagonal matrix based on the prior standard deviations, i.e. we assume independence.
par.init <- prior.monod.mean
jump.cov <- diag(prior.monod.sd/2)
RAM <- adaptMCMC::MCMC(p = logposterior.monod,
n = 10000,
init = par.init,
scale = jump.cov,
adapt = TRUE,
acc.rate = 0.5,
showProgressBar = FALSE)
## generate 10000 samples
The acceptance rate is matched closely:
RAM$acceptance.rate
## [1] 0.499
The adapted covariance matrix shows a strong correlation between \(r_{max}\) and \(K\):
RAM$cov.jump
## [,1] [,2] [,3]
## [1,] 0.043160486 0.050845997 -0.001540199
## [2,] 0.050845997 0.076208083 -0.002343804
## [3,] -0.001540199 -0.002343804 0.001795472
cov2cor(RAM$cov.jump) # rescale as correlation matrix
## [,1] [,2] [,3]
## [1,] 1.0000000 0.8865701 -0.1749622
## [2,] 0.8865701 1.0000000 -0.2003694
## [3,] -0.1749622 -0.2003694 1.0000000
## plot chains
samp.coda <- convert.to.coda(RAM)
plot(samp.coda)
## 2d marginals
IDPmisc::ipairs(RAM$samples) # prettier versions of pairs()
using ComponentArrays
using AdaptiveMCMC
using MCMCChains
using Plots
using StatsPlots
θinit = ComponentVector(r_max = 2.5, K=1.4, sigma=0.25) # use prior mean
## ComponentVector{Float64}(r_max = 2.5, K = 1.4, sigma = 0.25)
res = adaptive_rwm(θinit, logposterior_monod,
10_000;
algorithm = :ram,
b=1, # length of burn-in. Set to 1 for no burn-in.
);
# convert to MCMCChains for summary and plotting
chn = Chains(res.X', labels(θinit))
## Chains MCMC chain (10000×3×1 reshape(adjoint(::Matrix{Float64}), 10000, 3, 1) with eltype Float64):
##
## Iterations = 1:1:10000
## Number of chains = 1
## Samples per chain = 10000
## parameters = r_max, K, sigma
##
## Use `describe(chains)` for summary statistics and quantiles.
plot(chn)
corner(chn)
Population-based samplers do not have an explicit jump distribution. Instead, they run multiple chains (often called particles or walkers in this context) in parallel, and the proposals are generated based on the positions of the other walkers.
A popular algorithm of this class is the affine-invariant ensemble sampler proposed by Goodman and Weare (2010). The algorithm is often called “emcee”, after the Python package of the same name.
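To see what “proposals based on the other walkers” means, here is a minimal sketch of one serial stretch-move sweep in the spirit of Goodman and Weare (2010); `logp` and `walkers` are placeholders, and the packages used below take care of parallelisation and all further bookkeeping:
# one stretch-move sweep: each walker proposes a point on the line between
# itself and another, randomly chosen, walker
function stretch_move_sweep!(walkers, logp; a = 2.0)
    n = length(walkers)
    d = length(walkers[1])
    for k in 1:n
        j = rand([i for i in 1:n if i != k])   # pick another walker
        z = ((a - 1) * rand() + 1)^2 / a       # z ~ g(z) ∝ 1/sqrt(z) on [1/a, a]
        proposal = walkers[j] .+ z .* (walkers[k] .- walkers[j])
        # accept with probability min(1, z^(d-1) * p(proposal)/p(current))
        if log(rand()) < (d - 1) * log(z) + logp(proposal) - logp(walkers[k])
            walkers[k] = proposal
        end
    end
    return walkers
end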
Population-based samplers are implemented in the R package `mcmcensemble` as `MCMCEnsemble`. It has two methods: the stretch move (`method = "stretch"`) and differential evolution (`method = "differential.evolution"`).
EMCEE <- MCMCEnsemble(
f = logposterior.monod,
lower.inits = par.start.lower,
upper.inits = par.start.upper,
max.iter = 10000,
n.walkers = n.walkers,
method = "stretch",
coda = FALSE
)
How do you choose `par.start.lower` and `par.start.upper`? Better wide or narrow?
What is the influence of the number of walkers?
Which method works better in this case?
We use the package `KissMCMC`, which provides a function `emcee`:
using ComponentArrays
using KissMCMC: emcee
using Plots
using StatsPlots
# number of walkers (parallel chains)
n_walkers = 10
## We need a vector of initial values, one for each walker.
## Make sure that they do not start from the same point.
θinits = [θinit .* rand(3) for _ in 1:n_walkers]
# Run sampler
samples, acceptance_rate, lp = emcee(logposterior_monod,
θinits;
niter = 10_000, # total number of density evaluations
nburnin = 0);
# This looks a bit ugly. It just converts the result into
# a `MCMCChains.Chains` object for plotting.
X = permutedims(
cat((hcat(samples[i]...) for i in 1:n_walkers)..., dims=3),
[2, 1, 3]);
chn = Chains(X, labels(θinit))
# plotting
plot(chn)
corner(chn)
How do you define the initial values? Very similar or very different?
What is the influence of the number of walkers?
Population-based samplers are implemented in the Python package `emcee`.
sampler = emcee.EnsembleSampler(nwalkers, ndim,
                                logposterior_monod, moves=moves)
What is the influence of the number of walkers?
Which method works better in this case?
library(mcmcensemble)
For each walker (chain) an initial starting point must be defined. In general, it is better to choose it in a region with high density. We could use an optimizer to find the mode, but here we just use a rather wide coverage.
n.walkers <- 20
par.inits <- data.frame(r_max = runif(n.walkers, 1, 10),
K = runif(n.walkers, 0, 5),
sigma = runif(n.walkers, 0.05, 2))
EMCEE <- MCMCEnsemble(
f = logposterior.monod,
inits = par.inits,
max.iter = 10000,
n.walkers = n.walkers,
method = "stretch",
coda = TRUE
)
## Using stretch move with 20 walkers.
plot(EMCEE$samples)
Note that the more walkers (chains) we use, the shorter each chain becomes, because the total number of iterations is split across the walkers. This means we have to “pay” the burn-in for every single chain. Therefore, going too extreme with the number of walkers is not beneficial.
n.walkers <- 1000
par.inits <- data.frame(r_max = runif(n.walkers, 1, 10),
K = runif(n.walkers, 0, 5),
sigma = runif(n.walkers, 0.05, 2))
EMCEE <- MCMCEnsemble(
f = logposterior.monod,
inits = par.inits,
max.iter = 10000,
n.walkers = n.walkers,
method = "stretch",
coda = TRUE
)
## Using stretch move with 1000 walkers.
plot(EMCEE$samples)
using ComponentArrays
using KissMCMC: emcee
using Plots
using StatsPlots
# number of walkers (parallel chains)
n_walkers = 10;
## We need a vector of initial values, one for each walker.
θinit = ComponentVector(r_max = 2.5, K=1.4, sigma=0.25); # prior mean
## ComponentVector{Float64}(r_max = 2.5, K = 1.4, sigma = 0.25)
## We add some randomness to make sure that they do not start
## from the same point.
θinits = [θinit .* (1 .+ rand(Normal(0, 0.1), 3)) for _ in 1:n_walkers];
# Run sampler
samples, acceptance_rate, lp = emcee(logposterior_monod,
θinits;
niter = 10_000, # total number of density evaluations
nburnin = 0);
# Converting into `MCMCChains.Chains` object for plotting.
X = permutedims(
cat((hcat(samples[i]...) for i in 1:n_walkers)..., dims=3),
[2, 1, 3]);
chn = Chains(X, labels(θinit))
## Chains MCMC chain (1000×3×10 Array{Float64, 3}):
##
## Iterations = 1:1:1000
## Number of chains = 10
## Samples per chain = 1000
## parameters = r_max, K, sigma
##
## Use `describe(chains)` for summary statistics and quantiles.
Note that our chains are only of length 1000, so a substantial share of the computation is spent on the burn-in phase.
# plotting
plot(chn)
corner(chn)
# removing burn-in:
corner(chn[250:end,:,:])
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from models import model_monod
import emcee
from emcee.moves import StretchMove
from scipy.stats import multinomial, gaussian_kde, lognorm, norm
First we read and plot the data.
# Specify the path to the CSV file
file_path = r"../../data/model_monod_stoch.csv"
# Load the CSV file into a pandas DataFrame
data_monod = pd.read_csv(file_path, sep=" ")
# Plot 'C' against 'r'
plt.figure(figsize=(10, 6))
plt.scatter(data_monod['C'], data_monod['r'])
# Set x-axis label
plt.xlabel("C")
# Set y-axis label
plt.ylabel("r")
# Set title
plt.title('"model_monod_stoch.csv"')
# Display the plot
plt.show()
Define the prior, likelihood and posterior:
# Define the prior mean and standard deviation
prior_monod_mean = 0.5 * np.array([5, 3, 0.5]) # r_max, K, sigma
prior_monod_sd = 0.5 * prior_monod_mean
# Log prior for Monod model
def logprior_monod(par, mean, sd):
# Calculate sdlog and meanlog for log-normal distribution
var = (sd**2) / (mean**2)
sdlog = np.sqrt(np.log(1 + var))
meanlog = np.log(mean) - 0.5 * sdlog**2
# Calculate the sum of log probabilities from the log-normal distribution
log_prior = np.sum(lognorm.logpdf(par, s=sdlog, scale=np.exp(meanlog)))
return log_prior
def loglikelihood_monod(y, C, par):
"""
Log-likelihood function for the Monod model.
Parameters:
- y: observed growth rates
- C: substrate concentrations
- par: array containing the parameters (r_max, K, sigma)
Returns:
- log_likelihood: the sum of log-likelihood values
"""
# Calculate growth rate using Monod model
y_det = model_monod(par, C)
# Calculate log-likelihood assuming a normal distribution
log_likelihood = np.sum(norm.logpdf(y, loc=y_det, scale=par[2]))
return log_likelihood
def logposterior_monod(par):
"""
Log-posterior function for the Monod model.
Parameters:
- par: array containing the parameters (r_max, K, sigma)
Returns:
- log_posterior: the log posterior probability
"""
# Calculate the log prior
lp = logprior_monod(par, prior_monod_mean, prior_monod_sd)
# If log prior is finite, calculate log likelihood and sum
if np.isfinite(lp):
log_posterior = lp + loglikelihood_monod(data_monod['r'], data_monod['C'], par)
return log_posterior
else:
# Return negative infinity if log prior is not finite
return -np.inf
Define the initial parameters. Additionally, for each walker (chain) an initial starting point must be defined. In general, it is better to choose it in a region with high density. We could use an optimizer to find the mode, but here we just use a rather wide coverage.
ndim = 3
# Define the initial parameter values (mean of the prior)
par_init = np.array([2.5, 1.5, 0.25]) # r_max, K, sigma
# Define the number of samples
n_samples = 5000
# Create an instance of the StretchMove
move = StretchMove()
# Define the number of walkers for the MCMC chain
nwalkers = 20
# Initialize the walkers around the initial parameter values with a small random perturbation
init_pos = par_init + 1e-4 * np.random.randn(nwalkers, ndim)
# Initialize the `emcee` sampler
sampler = emcee.EnsembleSampler(nwalkers, ndim, logposterior_monod, moves=move)
# Run the MCMC chain
sampler_run = sampler.run_mcmc(init_pos, n_samples, progress=False)
# Get the chain samples
chain_samples = sampler.get_chain()
# Display a summary of the chain
print("Mean acceptance fraction: {:.3f}".format(np.mean(sampler.acceptance_fraction)))
## Mean acceptance fraction: 0.636
print("Chain shape: ", chain_samples.shape)
## Chain shape: (5000, 20, 3)
# Parameters for plotting
parameters = ['r_max', 'K', 'sigma']
num_samples, num_walkers, num_parameters = chain_samples.shape
# Create the (3, 2) subplot
fig, axes = plt.subplots(3, 2, figsize=(12, 10))
for i, param in enumerate(parameters):
# Trace plot (first column)
for walker in range(num_walkers):
axes[i, 0].plot(chain_samples[:, walker, i], alpha=0.5)
axes[i, 0].set_title(f'Trace plot for {param}')
axes[i, 0].set_xlabel('Samples')
axes[i, 0].set_ylabel(param)
# Histogram plot (second column)
axes[i, 1].hist(chain_samples[:, :, i].flatten(), bins=30, alpha=0.75)
axes[i, 1].set_title(f'Histogram for {param}')
axes[i, 1].set_xlabel(param)
axes[i, 1].set_ylabel('Frequency')
plt.tight_layout()
plt.show()
In many cases we want to make predictions for new model inputs. Let’s say we have observed the data \((y, x)\) and used it to infer the posterior \(p(\theta | y, x)\). We would now like to make predictions given a new input \(x^*\) using the learned posterior distribution:
\[ p(Y^* | x^*, y, x) = \int p(Y^* | x^*, \theta) \, p(\theta | y, x) \, \text{d}\theta \]
For the monod model, what is \(p(Y^* | x^*, \theta)\)?
Produce predictions with the monod model for \(C = \{10, 12, 14, 16\}\), using a posterior sample from the previous exercises. We only need samples from the predictive distribution. How can you do this without solving the integral analytically?
Plot the result with 90% prediction interval.
How much do you trust that the interval is correct? What assumptions did you make?
The expression \(p(Y^* | x^*, \theta)\) is our probabilistic model, i.e. the distribution of \(Y^*\) given inputs and parameters (the likelihood function).
Typically we do not have the posterior distribution \(p(\theta | y, x)\) in analytical form, but only a sample from it. Hence we cannot compute the integral over all \(\theta\) (besides, that would be difficult anyway). Instead, we obtain samples of \(Y^*\) in two steps: 1) draw a parameter vector \(\theta\) from the posterior sample, and 2) run a forward simulation of the probabilistic model with this \(\theta\), i.e. sample \(Y^* \sim p(Y^* | x^*, \theta)\).
Both steps are computationally cheap. For 1) we already have samples from the parameter inference, and 2) is simply a forward simulation of the model as we did in exercise 1. Hence, producing posterior predictions is much cheaper than inference.
(Technically, we sample from the joint distribution \(p(Y^*, \theta | x^*, y, x)\). The marginalization over \(\theta\) is done by simply ignoring the sampled \(\theta\)s and looking only at \(Y^*\).)
It is important to keep in mind that the resulting prediction intervals are only correct as long as the underlying model is correct! For example, it seems rather risky to extrapolate with the monod model to large concentrations outside of the calibration data range.
We take the function for forward simulations from exercise 1:
simulate.monod.stoch <- function(par, C){
## run model
r.det <- model.monod(par, C)
## generate noise
z <- rnorm(length(C), 0, par["sigma"])
return(r.det + z)
}
m <- 1000 # number of samples
Cstar <- c(10, 12, 14, 16) # new inputs
## posterior samples, removing burn-in
post.samples <- RAM$samples[1000:10000,]
Ystar <- matrix(NA, ncol=length(Cstar), nrow=m)
colnames(Ystar) <- paste0("C_", Cstar)
for(k in 1:m){
## 1) take a sample from posterior
i <- sample(nrow(post.samples), 1)
theta <- post.samples[i,]
## 2) forward simulation from model
Ystar[k,] <- simulate.monod.stoch(theta, Cstar)
}
We can also plot the predictions with uncertainty bands:
Ystar.quants <- apply(Ystar, MARGIN=2, FUN=quantile, probs=c(0.05, 0.5, 0.95))
## plot result
plot(Cstar, Ystar.quants[2,], ylab="r", ylim=c(0, 5))
polygon(c(Cstar,rev(Cstar)), c(Ystar.quants[1,],rev(Ystar.quants[3,])), col = "grey85")
lines(Cstar, Ystar.quants[2,], col=2, lwd=2, type="b")
We take the function for forward simulations from exercise 1:
# function to simulate stochastic realisations
function simulate_monod_stoch(C, par)
Ydet = model_monod(C, par)
z = rand(Normal(0, par.sigma), length(Ydet)) # adding noise
Ydet .+ z
end
## simulate_monod_stoch (generic function with 1 method)
m = 1000
## 1000
Cstar = [10,12,14,16]
## 4-element Vector{Int64}:
## 10
## 12
## 14
## 16
Ystar = Matrix{Float64}(undef, m, length(Cstar));
θ = copy(θinit);
for k in 1:m
i = rand(1000:10000)
θ .= res.X[:,i]
Ystar[k,:] = simulate_monod_stoch(Cstar, θ)
end
# compute quantile
low_quantile = [quantile(Ystar[:,i], 0.05) for i in 1:length(Cstar)];
med_quantile = [quantile(Ystar[:,i], 0.5) for i in 1:length(Cstar)];
upper_quantile = [quantile(Ystar[:,i], 0.95) for i in 1:length(Cstar)];
plot(Cstar, upper_quantile,
fillrange = low_quantile,
labels = false,
xlabel = "C",
ylabel = "r",
ylim=(0,5));
plot!(Cstar, med_quantile, marker=:circle,
labels = false)
We take the function for forward simulations from exercise 1:
def simulate_monod_stoch(par, C):
"""
Simulate the Monod model with stochastic noise.
Arguments:
----------
- par: Array containing the following parameters:
- r_max: maximum growth rate
- K: half-saturation concentration
- sigma: standard deviation of noise
- C: numpy array containing substrate concentrations
Value:
------
A numpy array representing the growth rate with stochastic noise added.
"""
# Run deterministic model
r_det = model_monod(par, C)
# Generate noise using a normal distribution with mean 0 and standard deviation `sigma`
sigma = par[-1]
z = np.random.normal(0, sigma, size=len(C))
# Add noise to the deterministic model results
return r_det + z
We can also select the samples from the previous exercise to work with:
# Discard a burn-in and flatten the walkers from the previous exercise into a single set of posterior samples
reshaped_chain = chain_samples[1000:, :, :].reshape(-1, 3)
# Convert reshaped chain samples to pandas DataFrame
chain_df = pd.DataFrame(reshaped_chain, columns=['r_max', 'K', 'sigma'])
m = 1000 # number of samples
Cstar = np.array([10, 12, 14, 16]) # new inputs
# Extract posterior samples, removing burn-in
post_samples = chain_df.iloc[1000:].values # Adjust the slicing as needed based on your data
# Initialize the Ystar matrix
Ystar = np.empty((m, len(Cstar)))
columns = [f"C_{c}" for c in Cstar]
# Perform the sampling and forward simulation
for k in range(m):
# 1) Take a sample from posterior
i = np.random.randint(post_samples.shape[0])
theta = post_samples[i, :]
# 2) Forward simulation from model
Ystar[k, :] = simulate_monod_stoch(theta, Cstar)
# Convert Ystar to a DataFrame for better readability (optional)
Ystar_df = pd.DataFrame(Ystar, columns=columns)
print(Ystar_df)
## C_10 C_12 C_14 C_16
## 0 3.734271 4.016479 4.370585 3.318980
## 1 4.396937 4.079691 3.785820 4.225207
## 2 4.058409 4.123515 4.393860 4.171088
## 3 3.846066 4.409189 4.506528 3.262928
## 4 4.110921 3.428150 3.565007 3.648834
## .. ... ... ... ...
## 995 3.321773 4.665270 3.876293 3.289823
## 996 4.596517 3.791552 4.077776 4.045406
## 997 3.435163 4.211694 4.070066 4.234082
## 998 3.745578 3.974642 4.170029 3.761452
## 999 3.751697 4.347671 4.145892 4.308153
##
## [1000 rows x 4 columns]
We can also plot the predictions with uncertainty bands:
# Compute quantiles
Ystar_quants = np.quantile(Ystar, q=[0.05, 0.5, 0.95], axis=0)
# Plot the result
plt.figure(figsize=(10, 6))
plt.plot(Cstar, Ystar_quants[1,], 'o-', color='red', label='Median', linewidth=2)
plt.fill_between(Cstar, Ystar_quants[0,], Ystar_quants[2,], color='grey', alpha=0.5, label='5th-95th Percentile')
plt.xlabel('Cstar')
plt.ylabel('r')
plt.ylim(0, 5)
## (0.0, 5.0)
plt.title('Quantile Plot')
plt.legend()
plt.show()
If we are able to compute the gradient of the log density, \(\nabla \log p\), we can use much more efficient sampling methods. For small numbers of parameters this may not be relevant; for larger problems (more than 20 dimensions) the differences can be huge.
Julia is particularly well suited for these applications, because many libraries for Automatic Differentiation (AD) are available, i.e. methods that compute the gradient of (almost) any Julia function by analyzing its code.
Because there is no equally powerful AD available in R, we can do this exercise only with Julia.
Most gradient-based samplers assume that all parameters are in \(\mathbb{R}\). If a parameter is only defined on an interval, such as a standard deviation that cannot be negative, we need to transform the model parameters before sampling. For this we need three ingredients:
a function that maps every vector in \(\mathbb{R}^n\) to our “normal” model parameter space,
the inverse of this function,
the determinant of the Jacobian of this function.
The package TransformVariables helps us with these transformations. We need a function that takes a vector in \(\mathbb{R}^n\) to evaluate the posterior.
using TransformVariables
# defines the 'legal' parameter space: none of the parameters can be negative
# due to the lognormal prior
trans = as((r_max = asℝ₊, K = asℝ₊, sigma = asℝ₊))
We can now sample in \(\mathbb{R}^n\) and later transform the samples to the model parameter space with:
TransformVariables.transform(trans, [-1,-1,-1]) # -> (r_max=0.367, K=0.367, sigma=0.367)
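The next paragraph refers to `logposterior_monod_Rn`, i.e. the log posterior evaluated on the transformed \(\mathbb{R}^n\) scale, which has to include the log-Jacobian of the transformation. A minimal sketch of such a function, assuming the `transform_logdensity` helper of `TransformVariables` (the `TransformedLogDensity` object used further below does this step for us):
# log posterior on ℝⁿ: back-transform, evaluate, and add the log-Jacobian
logposterior_monod_Rn(x) = TransformVariables.transform_logdensity(
    trans, θ -> logposterior_monod(ComponentVector(θ)), x)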
The last ingredient we need is a function that computes the gradient of `logposterior_monod_Rn`. To do so we would have to differentiate through our model, the likelihood, the prior, the parameter transformation, and the determinant of the Jacobian. Needless to say, even for our very simple model this would be very tedious to do manually!
Instead we use Automatic Differentiation (AD). Julia has multiple packages for AD that make different trade-offs. We use `ForwardDiff`, which is well suited for smaller dimensions.
AD requires all your model code to be implemented in pure Julia! Otherwise there are few restrictions. For example, you can compute the gradient of the growth model even though it uses an advanced adaptive ODE solver internally.
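As a toy illustration (the function `f` below is made up and not part of the exercise), `ForwardDiff` returns an exact gradient simply by running the Julia code:
using ForwardDiff
f(x) = sum(abs2, x) + exp(x[1]) * x[2]   # a small, arbitrary test function
ForwardDiff.gradient(f, [0.5, 1.0])      # exact gradient at [0.5, 1.0]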
using TransformedLogDensities: TransformedLogDensity
using LogDensityProblemsAD: ADgradient
# Define an object that is compatible with the LogDensityProblems.jl interface.
# It requires a parameter transformation and a function to compute the
# log probability density
# This interface enables compatibility with different samplers.
lp = TransformedLogDensity(trans, θ -> logposterior_monod(ComponentVector(θ)))
lp = ADgradient(:ForwardDiff, lp) # define the AD framework to use
Hamiltonian Monte Carlo (HMC) is one of the most powerful methods to sample in high dimensions. The “No-U-Turn Sampler” (NUTS) is a popular version. For example, it is used in STAN.
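Internally, HMC simulates Hamiltonian dynamics with a leapfrog integrator. The sketch below shows a single leapfrog step (`grad_logp`, the step size `ε`, and the unit mass matrix are illustrative assumptions); each proposal chains many such steps, which is why every proposal costs several gradient evaluations.
# one leapfrog step for position q and momentum p (unit mass matrix)
function leapfrog(q, p, grad_logp; ε = 0.1)
    p = p .+ (ε / 2) .* grad_logp(q)   # half step for the momentum
    q = q .+ ε .* p                    # full step for the position
    p = p .+ (ε / 2) .* grad_logp(q)   # second half step for the momentum
    return q, p
end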
The package `DynamicHMC.jl` provides a robust implementation. HMC samplers need many density and gradient evaluations to produce a single proposal; however, the acceptance rate of an HMC sampler should be close to one. We can use the function `mcmc_with_warmup` like this:
import Random
using DynamicHMC: mcmc_with_warmup, ProgressMeterReport, Diagnostics.summarize_tree_statistics
# run for 100 samples
par_init = [-1.0, -1.0, -1.0]; # in ℝⁿ
results = mcmc_with_warmup(Random.GLOBAL_RNG, lp, 100;
initialization = (q = par_init, ),
reporter = ProgressMeterReport());
The samples we get are in \(\mathbb{R}^n\). Before we have a look, let’s transform them to the “normal” parameter space:
# back-transform samples
_samples = [TransformVariables.transform(trans, s)
for s in eachcol(results.posterior_matrix)];
samples = vcat((hcat(i...) for i in _samples)...);
chn = MCMCChains.Chains(samples, [:r_max, :K, :sigma])
plot(chn)
Livingstone and Zanella (2021) proposed a comparatively simple MCMC algorithm that uses the gradient of the log density to shape its jump distribution. It is a promising alternative to HMC in cases where the number of parameters is not very high, or if the gradient is expected to be noisy (which is often the case if a model uses adaptive ODE solvers).
`BarkerMCMC.jl` implements it, including an adaptation that aims at a given acceptance rate.
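To illustrate how the gradient enters, here is a sketch of how a single Barker proposal can be generated (`grad_logp` and the scale `σ` are placeholders for illustration; the Metropolis-Hastings correction and the adaptation of the scale are omitted, as `BarkerMCMC.jl` takes care of both):
# each Gaussian increment is flipped towards higher density with a
# probability that depends on the local gradient
function barker_proposal(x, grad_logp; σ = 0.1)
    g = grad_logp(x)
    z = σ .* randn(length(x))
    b = [rand() < 1 / (1 + exp(-zi * gi)) ? 1.0 : -1.0 for (zi, gi) in zip(z, g)]
    return x .+ b .* z
end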
using BarkerMCMC: barker_mcmc
par_init = [-1.0, -1.0, -1.0] # note, this is in ℝⁿ
# see `?barker_mcmc` for all options
res = barker_mcmc(lp,
par_init;
n_iter = 1000,
target_acceptance_rate=0.4);
res.samples # these are the samples in ℝⁿ!
res.log_p
The samples we get are in \(\mathbb{R}^n\). Before we have a look, let’s transform them to the “normal” parameter space:
using MCMCChains
using StatsPlots
# Transform the samples to the "normal" space and convert to `Chains`
_samples = [TransformVariables.transform(trans, s)
for s in eachrow(res.samples)];
samples = vcat((hcat(i...) for i in _samples)...);
chn = Chains(samples, [:r_max, :K, :sigma])
plot(chn)
corner(chn)
First we make sure that we can sample in an unconstrained space by using an appropriate transformation:
using TransformVariables
using TransformedLogDensities: TransformedLogDensity
using LogDensityProblemsAD: ADgradient
# defines the 'legal' parameter space: none of the parameters can be negative
# due to the lognormal prior
trans = as((r_max = asℝ₊, K = asℝ₊, sigma = asℝ₊))
## [1:3] NamedTuple of transformations
## [1:1] :r_max → asℝ₊
## [2:2] :K → asℝ₊
## [3:3] :sigma → asℝ₊
# define an object that is compatible with the LogDensityProblems.jl interface.
# This enables compatibility with different samplers.
lp = TransformedLogDensity(trans, θ -> logposterior_monod(ComponentVector(θ)))
## TransformedLogDensity of dimension 3
We compute the gradient of the log posterior with automatic differentiation (AD). In Julia we can choose between different AD back-ends; `ForwardDiff.jl` is a good option if we do not have many parameters.
lp = ADgradient(:ForwardDiff, lp)
## ForwardDiff AD wrapper for TransformedLogDensity of dimension 3, w/ chunk size 3
We use a “No-U-Turn” (NUTS) HMC sampler, similar to the one implemented in STAN. Note that for HMC we typically need far fewer samples.
import Random
using DynamicHMC: mcmc_with_warmup, ProgressMeterReport, Diagnostics.summarize_tree_statistics
# run for 100 samples
par_init = [-1.0, -1.0, -1.0]; # in ℝⁿ
## 3-element Vector{Float64}:
## -1.0
## -1.0
## -1.0
results = mcmc_with_warmup(Random.GLOBAL_RNG, lp, 100;
initialization = (q = par_init, ),
reporter = ProgressMeterReport());
# some convergence statistics
summarize_tree_statistics(results.tree_statistics)
## Hamiltonian Monte Carlo sample of length 100
## acceptance rate mean: 0.88, 5/25/50/75/95%: 0.55 0.84 0.95 0.99 1.0
## termination: divergence => 0%, max_depth => 0%, turning => 100%
## depth: 0 => 0%, 1 => 3%, 2 => 35%, 3 => 42%, 4 => 20%
results.posterior_matrix # these are the samples in ℝⁿ!
## 3×100 Matrix{Float64}:
## 1.48232 1.40673 1.48087 1.42593 1.45032 … 1.42101 1.42101 1.43784 1.4377
## 0.449344 0.278603 0.470888 0.400542 0.352349 0.284144 0.284144 0.228945 0.39664
## -0.347344 -0.425702 -1.11275 -1.11035 -1.10023 -0.720011 -0.720011 -0.647299 -0.787794
We need to back-transform the samples to the “normal” parameter space. For plotting, we convert them to `MCMCChains.Chains`:
# back-transform samples
_samples = [TransformVariables.transform(trans, s)
for s in eachcol(results.posterior_matrix)];
samples = vcat((hcat(i...) for i in _samples)...);
chn = MCMCChains.Chains(samples, [:r_max, :K, :sigma])
## Chains MCMC chain (100×3×1 Array{Float64, 3}):
##
## Iterations = 1:1:100
## Number of chains = 1
## Samples per chain = 100
## parameters = r_max, K, sigma
##
## Use `describe(chains)` for summary statistics and quantiles.
plot(chn)
corner(chn[25:end,:,:])
The same steps are taken for the `BarkerMCMC`:
using BarkerMCMC: barker_mcmc
par_init = [-1.0, -1.0, -1.0]; # note, this is in ℝⁿ
## 3-element Vector{Float64}:
## -1.0
## -1.0
## -1.0
# see `?barker_mcmc` for all options
res = barker_mcmc(lp,
par_init;
n_iter = 1000,
target_acceptance_rate=0.4);
# Transform the samples to the "normal" space and convert to `Chains`
_samples = [TransformVariables.transform(trans, s)
for s in eachrow(res.samples)];
samples = vcat((hcat(i...) for i in _samples)...);
chn = Chains(samples, [:r_max, :K, :sigma])
## Chains MCMC chain (1000×3×1 Array{Float64, 3}):
##
## Iterations = 1:1:1000
## Number of chains = 1
## Samples per chain = 1000
## parameters = r_max, K, sigma
##
## Use `describe(chains)` for summary statistics and quantiles.
plot(chn)
corner(chn[250:end,:,:])