Objectives
In this exercise, you will write your own code to implement the most
elementary MCMC sampler, the Metropolis algorithm for one dimension.
MCMC samplers can be used to sample from any distribution, and they are
not linked to Bayesian concepts per se. In Bayesian inference, however,
they are especially useful to sample from the posterior distribution.
You will apply your self-coded sampler to sample from the posterior distribution of the one-parameter model “survival”, given the observed data in the file model_survival.csv.
We want to use the model “Survival” from yesterday’s exercise and the data in model_survival.csv to test our sampler. The experiment was started with N=30 individuals (this information is not contained in the data set).
It’s always a good idea to look at the data first.
data.survival <- read.table("../../data/model_survival.csv", header=TRUE)
barplot(data.survival$deaths, names=data.survival$time,
        xlab="Time interval", ylab="No. of deaths",
        main="Survival data")
using DataFrames
import CSV
using Plots
survival_data = CSV.read("../../data/model_survival.csv",
DataFrame);
bar(survival_data.deaths,
labels = false,
xlabel = "Times interval",
ylabel = "No. of deaths",
title = "Survival data")
import pandas as pd
import matplotlib.pyplot as plt
# Loading data
# Specify the path to the CSV file
file_path = r"../../data/model_survival.csv"
# Load the CSV file into a pandas DataFrame
data_survival = pd.read_csv(file_path, sep=" ")
plt.bar(data_survival['time'], data_survival['deaths'])
# Set the x and y labels and title
plt.xlabel("Time interval")
plt.ylabel("No. of deaths")
plt.title("Survival data")
# Display the plot
plt.show()
In the next step we will need the posterior density (or more accurately: a function that is proportional to the posterior density) for the model “survival” because \[ p(\lambda|y)\propto p(\lambda,y) = p(y|\lambda)p(\lambda) \; . \] This means that we need the likelihood function \(p(y|\lambda)\) and the prior \(p(\lambda)\).
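Because we will work with logarithms throughout (sums of log-densities are numerically more stable than products of small probabilities), the same relation reads in log-space \[ \log p(\lambda|y) = \log p(y|\lambda) + \log p(\lambda) + \text{const} \;. \]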
Remember to set N=30.
The likelihood is given below (copied from Exercise 1):
loglikelihood.survival <- function(y, N, t, par){
  ## Calculate survival probabilities at measurement points
  ## and add zero "at time-point infinity":
  S <- exp(-par*c(0,t))
  S <- c(S,0)
  ## Calculate probabilities to die within observation windows:
  p <- -diff(S)
  ## Add number of survivors at the end of the experiment:
  y <- c(y, N-sum(y))
  ## Calculate log-likelihood of multinomial distribution at point y:
  LL <- dmultinom(y, prob=p, log=TRUE)
  return(LL)
}
The likelihood can be implemented like this, copied from Exercise 1.
using Distributions

function loglikelihood_survival(N::Int, t::Vector, y::Vector, λ)
    ## Return -Inf for negative parameter values instead of crashing:
    λ < 0 && return -Inf
    ## Calculate survival probabilities at measurement points
    ## (S = 1 at time zero) and add zero "at time-point infinity":
    S = exp.(-λ .* t)
    S = [1; S; 0]
    ## Calculate probabilities to die within observation windows:
    p = -diff(S)
    ## Add number of survivors at the end of the experiment:
    y = [y; N-sum(y)]
    logpdf(Multinomial(N, p), y)
end
Note that we return -Inf if a negative parameter is given, so the function does not crash. However, we still need a prior distribution for the model parameter \(\lambda\) that encodes that it must be positive.
The likelihood can be implemented like this, copied from Exercise 1.
import numpy as np
from scipy.stats import multinomial

def loglikelihood_survival(y, N, t, par):
    # Calculate survival probabilities at measurement points
    # and add zero "at time-point infinity":
    S = np.exp(-par * np.concatenate(([0], t)))
    S = np.concatenate((S, [0]))
    # Calculate probabilities to die within observation windows:
    p = -np.diff(S)
    # Add number of survivors at the end of the experiment:
    y = np.concatenate((y, [N - np.sum(y)]))
    # Calculate log-likelihood of multinomial distribution at point y:
    LL = multinomial.logpmf(y, n=N, p=p)
    return LL
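As a quick sanity check, you can evaluate the likelihood at a plausible parameter value (a sketch, assuming the data frame loaded above; the test value 0.3 is an arbitrary choice):
# Sketch: evaluate the log-likelihood at an arbitrary test value of the parameter
N = 30  # number of individuals at the start of the experiment (given in the text)
t = data_survival['time'].values
y = data_survival['deaths'].values
print(loglikelihood_survival(y, N, t, 0.3))  # should print a finite number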
The parameter \(\lambda\) must be positive. Furthermore, we know that values around 0.3 are quite plausible, and that values below 0.1 and above 0.7 are pretty unlikely. Which distribution would you choose for the prior?
Implement a function that returns the logarithm of a probability density of the prior distribution for \(\lambda\).
You can use a lognormal prior distribution by understanding and filling in the block below.
# we use a lognormal distribution, which naturally accounts
# for the fact that lambda is positive
logprior.survival <- function(par){
  # a mean and standard deviation that approximate the prior information given above:
  mu <- ???    # since the distribution is skewed, choose it higher than 0.3 (the mode)
  sigma <- ??? # a value similar to mu will do
  # calculate the mean and standard deviation in log-space
  meanlog <- log(mu) - 1/2*log(1+(sigma/mu)^2)
  sdlog <- sqrt(log(1+(sigma/mu)^2))
  # use these to parameterize the prior
  dlnorm(???)
}
You can use a lognormal prior distribution by understanding and filling in the block below.
function logprior_survival(λ)
    # a mean and standard deviation that approximate the prior information given above:
    m = ?  # the distribution is skewed, choose a mean higher than 0.3 (the mode)
    sd = ? # a value similar to the mean will do
    # calculate the mean and standard deviation in log-space
    μ = log(m/sqrt(1+sd^2/m^2))
    σ = sqrt(log(1+sd^2/m^2))
    # use these to parameterize the prior
    return logpdf(LogNormal(???), ???)
end
You can use a lognormal prior distribution by understanding and filling in the block below.
from scipy.stats import lognorm

def logprior_survival(par):
    # Define mean and standard deviation that approximate the prior information given
    mu = 0.6     # since the distribution is skewed, choose it higher than 0.3 (the mode)
    sigma = 0.7  # a value similar to mu will do
    # Calculate the mean and standard deviation in log-space
    meanlog = np.log(mu) - 0.5 * np.log(1 + (sigma / mu)**2)
    sdlog = np.sqrt(np.log(1 + (sigma / mu)**2))
    # Calculate the log prior using the log-normal distribution
    log_prior = lognorm.logpdf(par, s=sdlog, scale=np.exp(meanlog))
    return log_prior
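To get a feel for the prior, you can evaluate its density at a few values (a sketch, assuming the function above):
# Sketch: evaluate the prior density at a few parameter values to check its shape
for lam in [0.05, 0.3, 0.7]:
    print(lam, np.exp(logprior_survival(lam)))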
Finally, define a function that evaluates the logarithm of the posterior density. This function should take the data from the file model_survival.csv and a parameter value as inputs.
Check what happens if you pass an “illegal”, i.e., negative parameter value. The function must not crash in this case.
log.posterior <- function(par, data) {
...
}
function logposterior_survival(λ, data)
return ...
end
def logposterior_survival(par, data):
    # Calculate log-likelihood
    log_likelihood = ??
    # Calculate log-prior
    log_prior = ??
    # Calculate log-posterior
    log_posterior = log_likelihood + log_prior
    return log_posterior
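One possible way to fill in this template (a sketch, assuming the likelihood and prior functions from above and N = 30):
# Sketch of a possible solution, assuming loglikelihood_survival and
# logprior_survival as defined above
def logposterior_survival(par, data):
    # an "illegal" (non-positive) parameter has zero posterior density,
    # i.e. a log-posterior of -inf, instead of crashing the function
    if par <= 0:
        return -np.inf
    log_likelihood = loglikelihood_survival(data['deaths'].values, 30,
                                            data['time'].values, par)
    log_prior = logprior_survival(par)
    return log_likelihood + log_prior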
Implement your own elementary Markov chain Metropolis sampler and run it with the posterior defined above for a few thousand iterations. You can find a description of the necessary steps in Carlo’s slides.
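As a reminder: for a symmetric proposal distribution, the Metropolis acceptance probability is \[ \alpha = \min\left(1, \frac{p(\lambda^\ast|y)}{p(\lambda^{(i-1)}|y)}\right) = \min\left(1, \exp\big(\log p(\lambda^\ast|y) - \log p(\lambda^{(i-1)}|y)\big)\right) \;, \] where \(\lambda^\ast\) denotes the proposed value. This is why it is convenient to store the log-posterior values alongside the chain.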
# set sample size (make it feasible)
sampsize <- 5000
# Create an empty matrix to hold the chain:
chain <- matrix(nrow=sampsize, ncol=2)
colnames(chain) <- c("lambda","logposterior")
# Set start value for your parameter:
par.start <- ???
# Compute the log-posterior value of your start parameter:
chain[1,] <- ???
# Run the MCMC:
for (i in 2:sampsize){
  # Propose new parameter value based on the previous one
  # (choose a proposal distribution):
  par.prop <- ???
  # Calculate log-posterior of the proposed parameter value:
  logpost.prop <- ???
  # Calculate acceptance probability by using the Metropolis criterion min(1, ... ):
  acc.prob <- ???
  # Store in chain[i,] the new parameter value if accepted, or re-use the old one if rejected:
  if (runif(1) < acc.prob){
    ???
  } else {
    ???
  }
}
# set your sample size
sampsize = 5000
# create empty vectors to hold the chain and the log-posterior values
chain = Vector{Float64}(undef, sampsize);
log_posts = Vector{Float64}(undef, sampsize);
# Set start value for your parameter
par_start = ???
# Compute the log-posterior value of your start parameter
chain[1] = ???
log_posts[1] = ???
# Run the MCMC:
for i in 2:sampsize
    # Propose new parameter value based on the previous one (choose a proposal distribution):
    par_prop = ???
    # Calculate log-posterior of the proposed parameter value:
    logpost_prop = ???
    # Calculate acceptance probability by using the Metropolis criterion min(1, ... ):
    acc_prob = ???
    # Store in chain[i] the new parameter value if accepted,
    # or re-store the old one if rejected:
    if rand() < acc_prob  # rand() is the same as rand(Uniform(0,1))
        ???
    else  # if not accepted, stay at the current parameter value
        ???
    end
end
# Set sample size (make it feasible)
sampsize = 5000
# Create an empty array to hold the chain
chain = np.zeros((sampsize, 2))
# Define column names
column_names = ['lambda', 'logposterior']
# Set start value for your parameter
par_start = {'lambda': 0.5}
# Compute the log-posterior value of the start parameter using the
# logprior and loglikelihood functions already defined
chain[0, 0] = par_start['lambda']
chain[0, 1] = logposterior_survival(par_start['lambda'], data_survival)
# Run the MCMC
for i in range(1, sampsize):
    # Propose a new parameter value based on the previous one (think of a proposal distribution)
    par_prop = ??
    # Calculate log-posterior of the proposed parameter value (use likelihood and prior)
    logpost_prop = ??
    # Calculate acceptance probability
    acc_prob = ??
    # Store in chain[i, :] the new parameter value if accepted, or re-use the old one if rejected
    if np.random.rand() < acc_prob:
        chain[i, :] = [par_prop, logpost_prop]
    else:  # if not accepted, stay at the current parameter value
        chain[i, :] = chain[i - 1, :]
# If you want to convert the array to a DataFrame for further analysis
df_chain = pd.DataFrame(chain, columns=column_names)
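One possible complete version of this loop (a sketch; the start value 0.5 and the proposal standard deviation 0.05 are assumptions you should tune yourself):
# Sketch: a filled-in Metropolis loop, assuming logposterior_survival from above
sampsize = 5000
chain = np.zeros((sampsize, 2))
chain[0, :] = [0.5, logposterior_survival(0.5, data_survival)]
for i in range(1, sampsize):
    par_prop = chain[i - 1, 0] + np.random.normal(0, 0.05)     # random-walk proposal
    logpost_prop = logposterior_survival(par_prop, data_survival)
    acc_prob = min(1, np.exp(logpost_prop - chain[i - 1, 1]))  # Metropolis criterion
    if np.random.rand() < acc_prob:
        chain[i, :] = [par_prop, logpost_prop]
    else:
        chain[i, :] = chain[i - 1, :]
df_chain = pd.DataFrame(chain, columns=['lambda', 'logposterior'])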
Experiment with your sampler:
- Plot the resulting chain. Does it look reasonable?
- Test the effect of different jump distributions.
- Find the value of \(\lambda\) within your sample for which the posterior is maximal.
- Create a histogram or density plot of the posterior sample. Look at the chain plots to decide how many burn-in samples you need to remove.
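A sketch of these steps in Python, assuming the data frame df_chain from above (the burn-in of 1000 iterations is an assumption; read it off your trace plot):
# Sketch: trace plot of the chain
plt.plot(df_chain['lambda'])
plt.xlabel("Iteration")
plt.ylabel("lambda")
plt.title("Trace plot")
plt.show()
# Sketch: parameter value with the largest posterior within the sample
lambda_max = df_chain['lambda'][df_chain['logposterior'].idxmax()]
print("lambda with maximal posterior:", lambda_max)
# Sketch: histogram of the posterior sample after removing the burn-in
burnin = 1000  # assumption: choose it from the trace plot
plt.hist(df_chain['lambda'][burnin:], bins=30, density=True)
plt.xlabel("lambda")
plt.title("Posterior sample")
plt.show()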
The main difference between the “Monod” and the “Survival” model is that the former has more than one parameter that we would like to infer.
You can either generalize your own Metropolis sampler to higher dimensions or use a sampler from a package.
We use the data model_monod_stoch.csv:
data.monod <- read.table("../../data/model_monod_stoch.csv", header=T)
plot(data.monod$C, data.monod$r, xlab="C", ylab="r", main='"model_monod_stoch.csv"')
Again, we need to prepare a function that evaluates the log posterior. You can use the templates below.
# Logprior for model "monod": lognormal distribution
prior.monod.mean <- 0.5*c(r_max=5, K=3, sigma=0.5)
prior.monod.sd <- 0.5*prior.monod.mean
logprior.monod <- function(par,mean,sd){
  sdlog <- ???
  meanlog <- ???
  return(sum(???)) # we work in log-space, hence we sum over the independent parameters
}
# Log-likelihood for model "monod"
loglikelihood.monod <- function( y, C, par){
# deterministic part:
y.det <- ???
# Calculate loglikelihood:
return( sum(???) )
}
# Log-posterior for model "monod" given data 'data.monod' with layout 'L.monod.obs'
logposterior.monod <- function(par) {
logpri <- logprior.monod(???)
if(logpri==-Inf){
return(-Inf)
} else {
return(???) # sum log-prior and log-likelihood
}
}
include("../../models/models.jl") # load monod model
using ComponentArrays
using Distributions
# set parameters
par = ComponentVector(r_max = 5, K=3, sigma=0.5)
prior_monod_mean = 0.5 .* par
prior_monod_sd = 0.25 .* par
# Use a lognormal distribution for all model parameters
function logprior_monod(par, m, sd)
    μ = ...
    σ = ...
    return sum(...) # sum because we are in log-space
end
# Log-likelihood for model "monod"
function loglikelihood_monod(par::ComponentVector, data::DataFrame)
    y_det = ...
    return sum(...)
end
# Log-posterior for model "monod"
function logposterior_monod(par::ComponentVector)
    logpri = logprior_monod(...)
    if logpri == -Inf
        return ...
    else
        return ... # sum log-prior and log-likelihood
    end
end
# Define the prior mean and standard deviation
prior_monod_mean = 0.5 * np.array([5, 3, 0.5]) # r_max, K, sigma
prior_monod_sd = 0.5 * prior_monod_mean
# Log prior for Monod model
def logprior_monod(par, mean, sd):
    """
    Log-prior function for the Monod model.

    Parameters:
    - par: array-like, the parameters to evaluate the prior (e.g., ['r_max', 'K', 'sigma'])
    - mean: array-like, the mean values for the parameters
    - sd: array-like, the standard deviation values for the parameters

    Returns:
    - log_prior: the sum of log-prior values calculated from the log-normal distribution
    """
    # Calculate sdlog and meanlog for the log-normal distribution
    var = ??
    sdlog = ??
    meanlog = ??
    # Calculate the sum of log probabilities from the log-normal distribution
    log_prior = np.sum(??)
    return log_prior
def loglikelihood_monod(y, C, par):
    """
    Log-likelihood function for the Monod model.

    Parameters:
    - y: observed growth rates
    - C: substrate concentrations
    - par: array-like, the parameters ('r_max', 'K', 'sigma')

    Returns:
    - log_likelihood: the sum of log-likelihood values
    """
    # Calculate the growth rate using the Monod model
    y_det = ??
    # Calculate log-likelihood assuming a normal distribution
    log_likelihood = np.sum(??)
    return log_likelihood
def logposterior_monod(par, data, prior_mean, prior_sd):
    """
    Log-posterior function for the Monod model.

    Parameters:
    - par: array-like, the parameters ('r_max', 'K', 'sigma')
    - data: pandas DataFrame, the data containing observed growth rates and substrate concentrations
    - prior_mean: array-like, the mean values for the parameters
    - prior_sd: array-like, the standard deviation values for the parameters

    Returns:
    - log_posterior: the log posterior probability
    """
    lp = ??
    if np.isfinite(lp):
        log_posterior = lp + ??
        return log_posterior
    else:
        return -np.inf
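One possible way to fill in these templates (a sketch; it assumes the Monod model \(y = r_{max} C/(K+C)\) with independent, normally distributed observation noise, as in the earlier exercises):
# Sketch of possible fill-ins, assuming the Monod model y = r_max * C / (K + C)
# with independent normal observation noise of standard deviation sigma
from scipy.stats import norm, lognorm

def logprior_monod(par, mean, sd):
    mean = np.asarray(mean, dtype=float)
    sd = np.asarray(sd, dtype=float)
    var = sd**2
    sdlog = np.sqrt(np.log(1 + var / mean**2))
    meanlog = np.log(mean) - 0.5 * np.log(1 + var / mean**2)
    # sum of independent lognormal log-densities, one per parameter
    return np.sum(lognorm.logpdf(par, s=sdlog, scale=np.exp(meanlog)))

def loglikelihood_monod(y, C, par):
    r_max, K, sigma = par
    y_det = r_max * C / (K + C)  # deterministic Monod curve
    return np.sum(norm.logpdf(y, loc=y_det, scale=sigma))

def logposterior_monod(par, data, prior_mean, prior_sd):
    lp = logprior_monod(par, prior_mean, prior_sd)
    if np.isfinite(lp):
        return lp + loglikelihood_monod(data['r'].values, data['C'].values, par)
    return -np.inf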
First run an initial chain, which most likely will not mix well yet.
For the jump distribution, assume a diagonal covariance matrix with reasonable values. A diagonal covariance matrix (with zero-valued off-diagonal entries) corresponds to independent proposal distributions.
The code below uses the function adaptMCMC::MCMC. If the adaptation is set to FALSE, this corresponds to the basic Metropolis sampler. Of course, feel free to use your own implementation instead!
library(adaptMCMC)
## As start values for the Markov chain we can use the mean of the prior
par.start <- c(r_max=2.5, K=1.5, sigma=0.25)
## sample
monod.chain <- adaptMCMC::MCMC(p = ...,
n = ...,
init = ...,
adapt = FALSE # for this exercise we do not want to use automatic adaptation
)
monod.chain.coda <- adaptMCMC::convert.to.coda(monod.chain) # this is useful for plotting
Plot the chain and look at its rejection rate to decide whether the standard deviation of the jump distribution should be increased (if the rejection frequency is too low) or decreased (if it is too high). What do you think?
monod.chain$acceptance.rate
plot(monod.chain.coda)
pairs(monod.chain$samples)
The package KissMCMC provides a very basic Metropolis sampler. Of course, feel free to use your own implementation instead!
using KissMCMC: metropolis
using Distributions
using LinearAlgebra: diagm
# define a function that generates a proposal given θ:
Σjump = diagm(ones(3))
sample_proposal(θ) = θ .+ rand(MvNormal(zeros(length(θ)), Σjump))
# run sampler
samples, acceptance_ratio, lp = metropolis(logposterior_monod,
sample_proposal,
par;
niter = 10_000,
nburnin = 0);
## for convenience we convert our samples into an `MCMCChains.Chains` object
using MCMCChains
using StatsPlots
chn = Chains(samples, labels(par))
plot(chn)
corner(chn)
Array(chn) # convert it to normal Array
For this exercise, we provide the function below, which is simply an adaptation of the sampler you wrote in Exercise 1. The docstring gives an overview of how to run the function for your model. Of course, feel free to use your own implementation instead!
def run_mcmc_monod(logposterior_func, data, prior_mean, prior_sd, par_init,
                   sampsize=5000, scale=np.diag([1, 1, 1])):
    """
    Run MCMC for the model.

    Parameters:
    - logposterior_func: function, the log-posterior function to use
    - data: pandas DataFrame, the data containing observed growth rates and substrate concentrations
    - prior_mean: array-like, the mean values for the parameters
    - prior_sd: array-like, the standard deviation values for the parameters
    - par_init: array-like, the initial parameter values
    - sampsize: int, the number of samples to draw (default: 5000)
    - scale: array-like, the covariance matrix for the proposal distribution

    Returns:
    - chain: array, the chain of parameter samples and their log-posterior values
    - acceptance_rate: float, the acceptance rate of the MCMC run
    """
    chain = np.zeros((sampsize, len(par_init) + 1))
    chain[0, :-1] = par_init
    chain[0, -1] = logposterior_func(par_init, data, prior_mean, prior_sd)
    accepted = 0
    for i in range(1, sampsize):
        # Random-walk proposal drawn from a multivariate normal distribution
        par_prop = chain[i-1, :-1] + np.random.multivariate_normal(np.zeros(len(par_init)), scale)
        logpost_prop = logposterior_func(par_prop, data, prior_mean, prior_sd)
        # Metropolis acceptance criterion
        acc_prob = min(1, np.exp(logpost_prop - chain[i-1, -1]))
        if np.random.rand() < acc_prob:
            chain[i, :-1] = par_prop
            chain[i, -1] = logpost_prop
            accepted += 1
        else:
            chain[i, :] = chain[i-1, :]
    acceptance_rate = accepted / sampsize
    return chain, acceptance_rate
Convert the chain to a data frame so you can improve the visualization of the results.
# Convert the results into a dataframe
df_chain = pd.DataFrame(chain, columns=['r_max', 'K', 'sigma', 'logposterior']) # this is useful for plotting
# Visualize the results
df_chain.describe()
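For example, you can draw trace plots for all parameters directly from the data frame (a sketch):
# Sketch: trace plots for each parameter and the log-posterior
fig, axes = plt.subplots(len(df_chain.columns), 1, figsize=(8, 10), sharex=True)
for ax, col in zip(axes, df_chain.columns):
    ax.plot(df_chain[col])
    ax.set_ylabel(col)
axes[-1].set_xlabel("Iteration")
plt.show()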
The plot of the previous chain gives us some indication of what a more efficient jump distribution could look like.
- Try to manually change the variance of the jump distribution. Can we get better chains?
- Use the previous chain and estimate its covariance. Does a correlated jump distribution work better? A sketch of this approach follows the notes below.
The function adaptMCMC::MCMC takes the argument scale to modify the jump distribution:

scale: vector with the variances _or_ covariance matrix of the jump distribution.
Adapt the covariance matrix Σjump of the jump distribution that is used in the function sample_proposal.
The provided function run_mcmc_monod takes the argument scale to modify the jump distribution.
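A sketch of the covariance-based approach, assuming the first chain from run_mcmc_monod is stored in chain and the Monod data in a data frame data_monod (both names are assumptions):
# Sketch: estimate the covariance from the first chain (after an assumed burn-in)
# and re-use it as the covariance of the jump distribution in a second run
burnin = 1000  # assumption: choose it from your trace plots
cov_est = np.cov(chain[burnin:, :-1], rowvar=False)
chain2, acc_rate2 = run_mcmc_monod(logposterior_monod, data_monod,
                                   prior_monod_mean, prior_monod_sd,
                                   par_init=chain[-1, :-1],
                                   sampsize=5000, scale=cov_est)
print("acceptance rate:", acc_rate2)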
The assumptions about the observation noise should be validated. Often we can do this by looking at the model residuals, that is, the differences between observations and model predictions.
Find the parameters in the posterior sample that correspond to the largest posterior value (the so-called maximum a posteriori estimate, MAP).
Run the deterministic “Monod” model with these parameters and analyze the residuals.
The functions qqnorm and acf can be helpful.
The functions StatsBase.autocor and StatsPlots.qqnorm can be helpful.
The function acf from statsmodels.tsa.stattools and the module scipy.stats might be useful.
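A sketch of such a residual analysis in Python, assuming the chain from run_mcmc_monod, the Monod data in data_monod, and the deterministic Monod model \(y = r_{max} C/(K+C)\):
from scipy import stats
from statsmodels.tsa.stattools import acf

# Sketch: parameter set with the largest log-posterior in the sample (MAP estimate)
par_map = chain[np.argmax(chain[:, -1]), :-1]
r_max, K, sigma = par_map

# Residuals: observations minus the deterministic Monod prediction
y_det = r_max * data_monod['C'].values / (K + data_monod['C'].values)
residuals = data_monod['r'].values - y_det

# Normal QQ-plot of the residuals
stats.probplot(residuals, dist="norm", plot=plt)
plt.show()

# Autocorrelation of the residuals at the first few lags
print(acf(residuals, nlags=10))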
The goal is to sample from the posterior distribution of the “Growth” model. This task is very similar to the previous one, except that we provide less guidance. Use the data in model_growth.csv.