
Hi folks, is there a way to tell the filter to skip the updating step over a given range of dates for a specific parameter? If so, how?
Thanks a lot!
Hector



Can you please describe in more detail what you mean?



Yes, of course. I have a small-scale DSGE model where data length is not an issue. Real GDP is one of the variables in the DSGE, and it is also used (within the same model object) to model a latent equity factor that is being
extracted from four total return indices. The issue I am facing is that the indices are much shorter than the economic data. The filter runs without problems when I choose an estimation period much longer than the
equity data, but the estimated loadings on the common factor in the measurement block, and the estimated parameters of this particular equation in the transition block, are not reasonable. The estimation procedure works well when the estimation range is shortened
to match the equity data.
Have you faced a similar situation before? My suggestion in the thread title was simply one alternative I thought of to solve this issue, although I'm not sure it even makes sense...
Please let me know if I was not clear with my explanation.
Thanks!
Hector



Interesting. The estimated loadings should not be (much) affected by the sample prior to the point where the equity data start, simply because equations/parameters relating to missing observations are excluded from the likelihood; that is exactly the sort of mechanism
you are asking for. Can you show me the model (or, better still, a stripped-down version of it) so I can see the core of the issue?
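For reference, the mechanism described above (observations that are missing at a given date are simply dropped from the update and from the likelihood) can be sketched outside IRIS. This is a generic NumPy illustration of one Kalman measurement update, not IRIS's actual implementation:

```python
import numpy as np

def kalman_update(x, P, y, Z, H):
    """One Kalman measurement update. Rows of y that are NaN are dropped,
    so missing observations contribute nothing to the state update or
    to the log-likelihood."""
    obs = ~np.isnan(y)                    # which observations are available
    if not obs.any():
        return x, P, 0.0                  # all missing: pure prediction, no likelihood term
    Zo, yo = Z[obs], y[obs]               # keep only the observed rows
    Ho = H[np.ix_(obs, obs)]
    v = yo - Zo @ x                       # prediction error on observed rows only
    F = Zo @ P @ Zo.T + Ho                # prediction-error covariance
    K = P @ Zo.T @ np.linalg.inv(F)       # Kalman gain
    x_new = x + K @ v
    P_new = P - K @ Zo @ P
    # log-likelihood contribution of this period (observed rows only)
    ll = -0.5 * (len(yo) * np.log(2 * np.pi)
                 + np.linalg.slogdet(F)[1]
                 + v @ np.linalg.solve(F, v))
    return x_new, P_new, ll
```

With a fully missing observation vector, the state and its covariance pass through unchanged, which is why the pre-equity sample should not move the equity loadings by itself.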



I couldn't figure out how to attach files to the thread, so I pasted a link to a Dropbox folder containing a stripped-down version of the model files. It's supposed to be public, but please let me know if you have issues opening the files.
https://www.dropbox.com/sh/rdj9smhpxqse7tf/Ym2wEB_GNS
As soon as you press F5 on the 'estimate' file, you will notice that parameter 6 goes up a lot. This is the loading on a common factor for dividend yields. The estimation routine starts in 1995 in this case.
If you set the variable 'Date_1' to 2002Q1, for example, the estimation procedure does well (i.e. the loading does not overshoot).
Thank you so much for taking the time to look at it!
Hector


Apr 3, 2014 at 1:54 PM
Edited Apr 3, 2014 at 1:58 PM

Hey Hector,
I can replicate the issue as described. With the longer sample, parameter 6 heads for infinity.
There's a deeper issue: even if you take the model estimated on the shorter sample and compute smoothed state estimates, you end up with pretty wild-looking paths compared to the post-2006Q2 realizations. The standard deviation of the model-implied
values is an order of magnitude higher than the standard deviation of the observed series. This would seem to indicate a problem with the model specification. Check this figure:
My conjecture is that this results from constraints implicitly imposed on the optimization problem by not estimating some parameters and shock standard deviations. For example, if I add
either the remaining shock standard deviations or parameters like LoadEQT1 and LoadDY1 to the estimation set, I get convergence, and parameter values much closer to those obtained with the smaller sample.
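One way to see why fixing shock standard deviations can send a loading toward infinity: in a factor model the loading and the factor's shock standard deviation are only identified jointly, so holding the standard deviation fixed at the wrong value forces the loading to compensate. A toy illustration (a hypothetical static one-factor model, not the thread's DSGE):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
lam_true, sf_true, se_true = 0.8, 1.0, 0.5

# Static one-factor data: y_t = lam * f_t + e_t, with f_t unobserved.
f = sf_true * rng.standard_normal(n)
y = lam_true * f + se_true * rng.standard_normal(n)

# Marginally, y ~ N(0, lam^2 * sf^2 + se^2): only the product lam * sf
# is identified. Concentrated MLE for lam, given FIXED sf and se:
def lam_hat(sf_fixed, se_fixed):
    var_y = y.var()
    return np.sqrt(max(var_y - se_fixed**2, 0.0)) / sf_fixed

print(lam_hat(1.0, 0.5))   # sf fixed at the true value: lam near 0.8
print(lam_hat(0.1, 0.5))   # sf fixed ten times too small: lam near 8
```

Estimating the shock standard deviations jointly (as Mike did) removes the implicit constraint, so the loading no longer has to absorb the scale mismatch.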
Hope this helps.
Mike



Hi Mike, thanks for your response. I appreciate your insights on testing for model specification problems. Adding the standard deviations to the optimization problem yields much better results, as you indicated. Before learning IRIS and state-space
models, I used PCA to model highly correlated data sets. I am fascinated that, once the appropriate conditions and restrictions are placed on the loadings and the second moments of the residuals, state-space models and the Kalman filter offer a great alternative
to PCA for modelling common factors.
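To illustrate that last point: plain PCA needs a balanced panel, while a state-space factor model handles a ragged edge (short series, like the equity data here) natively. A minimal sketch with a hypothetical toy panel, not the thread's model:

```python
import numpy as np

# Toy panel: four series load on one AR(1) common factor; the last two
# series start later (NaN early on), mimicking short equity data.
rng = np.random.default_rng(1)
T, rho = 200, 0.9
f = np.zeros(T)
for t in range(1, T):
    f[t] = rho * f[t - 1] + rng.standard_normal()
lam = np.array([1.0, 0.8, 1.2, 0.6])          # factor loadings
Y = f[:, None] * lam + 0.3 * rng.standard_normal((T, 4))
Y[:120, 2:] = np.nan                           # ragged edge: short series

# Scalar Kalman filter for f_t; at each date, only the non-missing
# rows of the measurement equation are used.
H = 0.09 * np.eye(4)                           # measurement error covariance
x, P = 0.0, 1.0 / (1 - rho**2)                 # unconditional initial state
fhat = np.empty(T)
for t in range(T):
    x, P = rho * x, rho**2 * P + 1.0           # prediction step
    obs = ~np.isnan(Y[t])
    if obs.any():
        Z = lam[obs]
        F = np.outer(Z, Z) * P + H[np.ix_(obs, obs)]
        K = P * Z @ np.linalg.inv(F)           # gain over observed rows only
        x = x + K @ (Y[t, obs] - Z * x)
        P = P * (1 - K @ Z)
    fhat[t] = x

print(np.corrcoef(fhat, f)[0, 1])              # filtered factor tracks the truth
```

PCA on this panel would first require dropping the short series or the early sample; the filter just uses whatever is observed at each date.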
Now everything seems to be working as initially intended. Thank you both for your help! You are both rockstars!
Hector

