Commit cb91cb7

correct approximate EKF-MCMC

1 parent: 6bc3b4b

3 files changed: +8 −9 lines


src/mgg_ssm.cpp (+2, −1)

```diff
@@ -472,6 +472,7 @@ arma::cube mgg_ssm::simulate_states() {
       um(j) = normal(engine);
     }
     asim.slice(0).col(0) = L_P1 * um;
+    arma::vec y_tmp = y;
     for (unsigned int t = 0; t < (n - 1); t++) {
       arma::uvec na_y = arma::find_nonfinite(y.col(t));
       if (na_y.n_elem < p) {
@@ -500,6 +501,6 @@ arma::cube mgg_ssm::simulate_states() {
   }

   asim.slice(0) += fast_smoother();
-
+  y = y_tmp;
   return asim;
 }
```

src/nlg_amcmc.cpp (+2, −4)

```diff
@@ -431,8 +431,7 @@ void nlg_amcmc::gaussian_sampling(nlg_ssm model, const unsigned int n_threads) {
     gaussian_state_sampler(model, theta_storage, mode_storage,
       alpha_storage);
   }
-  posterior_storage = prior_storage + approx_loglik_storage - scales_storage +
-    log(weight_storage);
+  posterior_storage = prior_storage + approx_loglik_storage;
 }

 void nlg_amcmc::gaussian_state_sampler(nlg_ssm model,
@@ -477,8 +476,7 @@ void nlg_amcmc::gaussian_state_sampler(nlg_ssm model,
     }
     approx_model.compute_HH();
     approx_model.compute_RR();
-
-    alpha.slice(i) = approx_model.simulate_states();
+    alpha.slice(i) = approx_model.simulate_states().slice(0).t();
   }
 }
```

vignettes/growth_model.Rmd (+4, −4)

````diff
@@ -118,7 +118,6 @@ model <- nlg_ssm(y = y, a1=pntrs$a1, P1 = pntrs$P1,

 Let's first run Extended Kalman filter and smoother using our initial guess for $\theta$:
 ```{r ekf}
-library(diagis)
 out_filter <- ekf(model)
 out_smoother <- ekf_smoother(model)
 ts.plot(cbind(y, out_filter$att[, 2], out_smoother$alphahat[, 2]), col = 1:3)
@@ -127,12 +126,13 @@ ts.plot(cbind(out_filter$att[, 1], out_smoother$alphahat[, 1]), col = 1:2)

 ## Markov chain Monte Carlo

-For parameter inference, we can perform full Bayesian inference with \texttt{bssm}. There are multiple choices for the MCMC algorithm in the package, and here we will use $\psi$-PF based pseudo-marginal MCMC with delayed acceptance. Let us compare this approach with EKF-based approximate MCMC:
+For parameter inference, we can perform full Bayesian inference with \texttt{bssm}. There are multiple choices for the MCMC algorithm in the package, and here we will use $\psi$-PF based pseudo-marginal MCMC with delayed acceptance. Let us compare this approach with EKF-based approximate MCMC (note that the number of MCMC iterations is rather small):

 ```{r mcmc}
-out_mcmc_pm <- run_mcmc(model, n_iter = 1e4, nsim_states = 10, method = "pm",
+out_mcmc_pm <- run_mcmc(model, n_iter = 5000, nsim_states = 10, method = "pm",
   simulation_method = "psi")
-out_mcmc_ekf <- run_mcmc(model, n_iter = 1e4, method = "ekf")
+out_mcmc_ekf <- run_mcmc(model, n_iter = 5000, method = "ekf")
+library("diagis")
 alpha_ekf <- weighted_mean(out_mcmc_ekf$alpha, out_mcmc_ekf$counts)
 alpha_pm <- weighted_mean(out_mcmc_pm$alpha, out_mcmc_pm$counts)
 ts.plot(cbind(y, alpha_pm[, 2], alpha_ekf[, 2]), col = 1:3)
````
