Computational Statistics – TP 4 (Solved)

30.00 $

Description


M2 Mathématiques
TP 4: Improve the Metropolis-Hastings algorithm
Exercise 1: Adaptive Metropolis-Hastings within Gibbs sampler

MCMC samplers, such as the Metropolis-Hastings algorithm or the Gibbs sampler, require the user to specify a transition kernel with a given invariant distribution (the target distribution). These transition kernels usually depend on parameters which must be given and tuned by the user. In practice, it is often difficult (if not impossible) to find the best parameters for such algorithms given a target distribution. Moreover, if the parameters are not carefully chosen, the resulting MCMC algorithm may perform poorly, as in part A. Adaptive MCMC algorithms are a class of MCMC algorithms which address the problem of parameter tuning by automatically updating some (if not all) of the parameters.
1.A – Metropolis-Hastings within Gibbs sampler

We aim to sample the target distribution π on ℝ², given by

    (x, y) ↦ π(x, y) ∝ exp( −x²/(2a²) − (x²/4 − y)²/2 ),
where a > 0. We consider a Markov transition kernel P defined by

    P = (1/2)(P₁ + P₂),
where Pᵢ((x, y); dx′ × dy′), for i = 1, 2, is the Markov transition kernel which only updates the i-th component: this update follows a symmetric random-walk proposal mechanism and uses a Gaussian distribution with variance σᵢ².

1. Implement an algorithm which samples from the distribution P₁(z; ·), where z ∈ ℝ²; likewise for the distribution P₂(z; ·). Then, implement an algorithm which samples a chain with kernel P.
2. Run the algorithm with a = 10 and standard deviations of the proposal distributions chosen as (σ₁, σ₂) = (3, 3). Discuss the performance of the algorithm in this situation.
3. How could the performance of the above algorithm be improved? Propose two methods.
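One way questions 1–2 can be approached is sketched below. This is not the official solution: the function and variable names are ours, and the log-density assumes the reconstruction π(x, y) ∝ exp(−x²/(2a²) − (x²/4 − y)²/2) of the garbled formula above. Each Pᵢ is a symmetric random-walk Metropolis update of a single coordinate, and the mixture kernel P = (P₁ + P₂)/2 is simulated by picking a component uniformly at each step.

```python
import numpy as np

def log_target(x, y, a=10.0):
    # Assumed reconstruction of the target density (up to an additive constant):
    # log pi(x, y) = -x^2 / (2 a^2) - (x^2/4 - y)^2 / 2
    return -x**2 / (2 * a**2) - (x**2 / 4 - y)**2 / 2

def step_component(z, i, sigma, rng, a=10.0):
    """One step of kernel P_i: symmetric Gaussian random-walk MH update
    of the i-th coordinate only."""
    prop = z.copy()
    prop[i] += sigma * rng.standard_normal()
    log_alpha = log_target(prop[0], prop[1], a) - log_target(z[0], z[1], a)
    if np.log(rng.uniform()) < log_alpha:   # accept with prob. min(1, ratio)
        return prop, True
    return z, False

def sample_chain(n_iter, sigmas=(3.0, 3.0), z0=(0.0, 0.0), a=10.0, seed=0):
    """Chain with kernel P = (P1 + P2)/2: at each iteration, choose one of
    the two component kernels uniformly at random and apply it."""
    rng = np.random.default_rng(seed)
    z = np.array(z0, dtype=float)
    chain = np.empty((n_iter, 2))
    accepts = np.zeros(2)
    counts = np.zeros(2)
    for t in range(n_iter):
        i = int(rng.integers(2))            # P1 or P2, each with probability 1/2
        z, acc = step_component(z, i, sigmas[i], rng, a)
        accepts[i] += acc
        counts[i] += 1
        chain[t] = z
    return chain, accepts / np.maximum(counts, 1)
```

With a = 10 and (σ₁, σ₂) = (3, 3), monitoring the per-component acceptance rates and trace plots returned by `sample_chain` is a natural way to diagnose the poor mixing the exercise asks about: a single proposal scale cannot suit both the wide x-direction and the narrow conditional ridge in y.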
1.B – Adaptive Metropolis-Hastings within Gibbs sampler

Let π be a density defined on an open set U of ℝᵈ, d ≥ 2. We consider here a Metropolis-Hastings within Gibbs algorithm to sample from the target density π. More precisely, the MH-step is a symmetric random-walk one and the proposal distribution is a Gaussian distribution centered at the current state. As usual, for i ∈ {1, …, d}, let πᵢ denote the i-th full conditional of π, which is given by:
    πᵢ(xᵢ | x₋ᵢ) ∝ π(x),    where x₋ᵢ = (x₁, …, x_{i−1}, x_{i+1}, …, x_d),

and σᵢ² denotes the variance of the corresponding proposal distribution.
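A common adaptive scheme for this setting, which the exercise plausibly leads to, adjusts each per-component proposal scale σᵢ on the fly so that the empirical acceptance rate approaches a target value (≈ 0.44 for one-dimensional updates). The sketch below uses a Robbins–Monro update of log σᵢ with a diminishing step size t^(−0.6); the function names, the step-size schedule, and the target rate are our assumptions, not prescribed by the handout.

```python
import numpy as np

def adaptive_mh_within_gibbs(log_target, d, n_iter, target_acc=0.44, seed=0):
    """Adaptive random-walk MH within Gibbs (systematic scan).
    After each single-coordinate MH step, the log proposal scale of that
    coordinate is nudged by a Robbins-Monro step toward the acceptance
    rate target_acc; the step size t^{-0.6} shrinks so that adaptation
    diminishes over time."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    log_sigma = np.zeros(d)              # log proposal std-devs, adapted on the fly
    chain = np.empty((n_iter, d))
    for t in range(1, n_iter + 1):
        for i in range(d):
            prop = x.copy()
            prop[i] += np.exp(log_sigma[i]) * rng.standard_normal()
            # acceptance probability min(1, pi(prop)/pi(x)), computed in log space
            alpha = np.exp(min(0.0, log_target(prop) - log_target(x)))
            if rng.uniform() < alpha:
                x = prop
            log_sigma[i] += t**-0.6 * (alpha - target_acc)
        chain[t - 1] = x
    return chain, np.exp(log_sigma)
```

For instance, applying it to the two-dimensional target of part A (with a = 10) lets the two scales drift to different values, one adapted to the wide x-direction and one to the narrow conditional in y, which is exactly the tuning burden the adaptive algorithm removes from the user.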

  • lab_4_hm-8luavp.zip