
Dear Musketeers,
A colleague is coding an SMC algorithm to estimate DSGE models. I recommended IRIS, and he is indeed happy with it. Naturally, he needs an efficient routine to calculate the likelihood. What would be the most efficient way to do this in IRIS when one wants the predictive density at each data point (rather than their sum, which is what loglik returns)?
Cheers,
Antti



Just to make sure I understand correctly: you simply want to have access to all the individual contributions that make up the overall likelihood in the Kalman filter? If so, submit a feature request, and I'll add this. It's relatively easy.
Maybe also describe to me a little bit the context in which you work (what sequence of functions you call, etc.) so that I can design it in the most efficient way.
Jaromir



Thanks, Jaromir! This is indeed what he wants, i.e. each individual component of the likelihood (the log-likelihood being the sum of the conditional log densities). I will file a feature request.
Supposing I understand Sequential Monte Carlo correctly (see, e.g.,
http://economics.sas.upenn.edu/~schorf/papers/smc_paper.pdf), the idea is to start with a subsample (i.e. let the data speak only a little at first) and enlarge the sample step by step. At each step we actually need the predictive density (likelihood)
only at the last (T-th) observation. I think this makes no difference to the Kalman filter, though: one always has to run the filter over the whole sample to obtain the contribution of the last observation.
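To make the request concrete, here is a minimal sketch of what I mean by per-observation contributions (in Python with NumPy rather than IRIS, for a scalar state-space model; all names are mine, not part of any IRIS API). The Kalman filter's one-step-ahead prediction errors give the log predictive density of each observation, and the usual log-likelihood is simply their sum:

```python
import numpy as np

def kalman_loglik_contribs(y, T, Z, H, Q, a0, P0):
    """Per-observation log predictive densities log p(y_t | y_{1:t-1}) for
        a_t = T a_{t-1} + eta_t,  eta_t ~ N(0, Q)   (state)
        y_t = Z a_t + eps_t,      eps_t ~ N(0, H)   (observation)
    Scalar state and observation, for illustration only."""
    a, P = a0, P0
    contribs = np.empty(len(y))
    for t, yt in enumerate(y):
        # Predict the state one step ahead
        a = T * a
        P = T * P * T + Q
        # Log predictive density of y_t from the prediction error v and its variance F
        v = yt - Z * a
        F = Z * P * Z + H
        contribs[t] = -0.5 * (np.log(2 * np.pi) + np.log(F) + v * v / F)
        # Update the state with the new observation
        K = P * Z / F
        a = a + K * v
        P = P - K * Z * P
    return contribs

# The overall log-likelihood is the sum of the individual contributions:
y = np.random.default_rng(0).standard_normal(50)
c = kalman_loglik_contribs(y, T=0.9, Z=1.0, H=1.0, Q=0.5, a0=0.0, P0=1.0)
total_loglik = c.sum()
```

So the feature would just expose the vector `c` instead of (or alongside) `c.sum()`.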
SMC has the benefit of being easy to parallelize massively. More importantly, it copes with nasty, e.g. multimodal, posterior distributions and is therefore far more robust than standard MCMC.
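For reference, the data-tempering idea can be sketched in a few lines (again Python with NumPy, on a toy iid-Gaussian model rather than a DSGE, and omitting the mutation/move steps that a full Herbst-Schorfheide sampler needs): parameter particles drawn from the prior are reweighted observation by observation using exactly the per-observation predictive densities discussed above, and resampled when the effective sample size degenerates.

```python
import numpy as np

def smc_data_tempering(y, n_particles=2000, seed=1):
    """Toy data-tempering SMC for y_t ~ N(theta, 1) with prior theta ~ N(0, 1).
    A practical sampler would add mutation (move) steps after resampling."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_particles)  # particles drawn from the prior
    logw = np.zeros(n_particles)
    for yt in y:
        # Add log p(y_t | y_{1:t-1}, theta); in this iid toy model the Kalman
        # contribution collapses to a plain Gaussian log density.
        logw += -0.5 * (np.log(2 * np.pi) + (yt - theta) ** 2)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n_particles / 2:  # resample on low ESS
            idx = rng.choice(n_particles, size=n_particles, p=w)
            theta = theta[idx]
            logw = np.zeros(n_particles)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return theta, w

y = 0.7 + np.random.default_rng(0).standard_normal(200)
theta, w = smc_data_tempering(y)
post_mean = np.sum(w * theta)  # weighted posterior mean estimate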
Cheers,
Antti



Hmmm. Interesting. Thanks, Antti. Maybe we should just implement the SMC algorithm in IRIS directly. Seems like it's right up my alley... :)



Indeed. But don't split up with anyone just because you want to work on SMC :))



Hahaha :)))) I should finish the neural network stuff first, as well... but I think this is probably worth putting on the list of "wants" :)



An issue has been created for this discussion thread, and a new feature added.

