
Hi folks, I have been playing around with Jaromir's tutorial on unit root models. The example has helped me write my own model. This initial model is a very simple random walk with drift. I have been able to generate simulations and run several other exercises, and everything is working as expected. I am now trying to estimate the parameters of my model and am a little confused about how IRIS estimates the drift parameter. This is the model:
!transition_variables
p;
!transition_shocks
shk1;
!parameters
mu1;
!measurement_variables
P;
!transition_equations
p/p{1} = mu1 + shk1;
!log_variables
p;P;
!measurement_equations
P = p;
Model files and data can be found here:
https://www.dropbox.com/sh/7ae3n6544zbrpkg/AAC2K6gf20BNJH6lCSrIBa?dl=0
The standard deviation of the shock seems to be estimated without a problem, but the drift parameter stays at its prior/initial value. I have tried different values for the drift and the likelihood function obviously changes, but no matter what initial value I use, the program's estimate always comes back equal to that starting value. Why might this be happening? What am I missing?
Thanks!
Hector



Not sure, Hector. Seems to have something to do with the way log variables are handled. For example, writing things in logs works fine:
!transition_variables
lp ;
!transition_shocks
shk1;
!parameters
mu1;
!measurement_variables
LP ;
!transition_equations
lp = mu1 + lp{1} + shk1 ;
!measurement_equations
LP = lp ;
I will investigate with Jaromir.



Thank you, Michael. Have a good day.
Hector



Hi Hector
I know what the problem is now. Will explain on the weekend (am on a TA mission to India now and swamped with work).
Best,
Jaromir



Enjoy your trip!



No sarcasm, please : ))))


Nov 2, 2014 at 12:00 PM
Edited Nov 2, 2014 at 12:07 PM

Hi Hector
Here's the first (the shorter) part of my answer. Your model is nonlinear, and hence its steady state needs to be recomputed in every iteration when being estimated. By default, the function estimate( ) does not do that: the default value for the option 'sstate=' is false. In your case, switching the option to true would not be enough yet, because your model is also nonstationary, and hence you need to say so by specifying the steady-state solver option 'growth=' true (the way you also do it when solving the model before estimation). To do this, you simply assign the option 'sstate=' a cell array of name-value pairs (similar, for instance, to the option 'filter='). The estimate( ) command then becomes something like this:
% Optim Tbx options.
oopt = {'display=', 'iter', ...
    'maxIter=', 1e+5, ...
    'maxFunEvals=', 1e+6, ...
    'tolFun=', 1e-10, ...
    'tolX=', 1e-10, ...
    'tolCon=', 1e-10, ...
    'funValCheck=', 'on'};
% Kalman filter options.
fopt = {'chkFmse=', true, ...
    'relative=', true, ...
    'initCond=', 'stochastic'};
% Steady-state solver options.
sopt = {'growth=', true};
[~,~,~,~,estMODEL] = estimate(MODEL, Data, range, PRIORS, ...
    'optimset=', oopt, ...
    'filter=', fopt, ...
    'sstate=', sopt, ...
    'noSolution=', 'penalty', ...
    'solve=', true);
I'm also attaching the entire UnitRoot.m file.
Now, having said that though, the way you estimate your model is extremely inefficient. There are various types of improvements you can make. In fact, in your particular case, the two parameters, mu1 and std_shk1, can actually be estimated without any iterations at all, just by running a single pass of the Kalman filter (i.e. not using the function estimate( ) at all).
I will get back to this at some later point, when I have some more time (sorry about that...), and take you through the simplification step by step.
Note also that I have commented out the line that adds std_shk1 to the estimation struct. If you have a model with a single shock and you run the underlying filter with the option 'relative=' true, the std deviation will always be estimated by concentrating it out of the likelihood function. In other words, estimating this parameter does not cost you anything, and it will reduce the number of iterations/function calls compared to the case when you do include the parameter in the estimation struct.
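Although the step-by-step simplification is postponed here, for this particular model the idea can be sketched directly: the ML estimates of a random walk with drift in logs have a well-known closed form. This is an illustration of the idea, not the exact IRIS mechanics; the variable name P for the observed series is taken from the model above.

```matlab
% Sketch: for the random walk with drift in logs,
%   log(p_t) = mu1 + log(p_{t-1}) + shk1_t,
% the ML estimates have a closed form -- no iterative optimizer needed.
% P is the observed price series from the model above.
dlp = diff(log(P));        % observed log price changes
mu1_hat = mean(dlp);       % drift: sample mean of the log differences
std_hat = std(dlp, 1);     % shock std dev: ML (1/N) sample std
```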
Hope this helps.
Best,
Jaromir



Thank you very much for the explanation, Jaromir. Getting to know how the Kalman filter is able to estimate the model in one run would definitely be interesting.
On a different note, I kept playing around with a slightly more advanced version of the model, and have run into an issue not related to the estimation routine, but I still thought it was appropriate to post my question here.
The transition equations are:
!transition_equations
% Key ratios:
po = d - e;
pe = p - e;
dy = d/p;
% PE and PO dynamics:
po = po_hat + shk1;
pe = pe_hat + shk3;
% Returns:
pr = p - p{1};
tr = dy + pr;
e = e{1} + g_hat + shk2;
!log_variables
e; d; p; dy; tr;
[Model files are here if you need them:
https://www.dropbox.com/s/3nvro70dqszim6h/Ratio_growth.zip?dl=0]
The model works as expected and I have no problem reporting the simulations, with the exception of one variable: 'tr'. 'tr' represents the total return of the S&P 500, which in this model equals price return ('pr') plus income return ('dy'). 'pr' and 'dy' are simulated as expected and in the right form (returns in decimal form). See the chart below:
All in all, I would like 'tr' to be exactly equal to 'pr' + 'dy' in the chart above. I cannot understand why this is not the case.
Thank you!


Nov 7, 2014 at 11:13 PM
Edited Nov 7, 2014 at 11:21 PM

This is because of the linearization error in the nonlinear equation for tr.



As ikarib pointed out, the problem is the following:
By default, simulations in IRIS are performed using the model's first-order approximate solution. This means that for an equation to be perfectly accurate, one of the following two cases must hold:
- the equation is fully additive and involves only variables that are NOT declared as log variables, e.g. X = alpha*A + beta*B + gamma*C; where all four variables X, A, B, and C are NOT log variables, and alpha, beta, gamma are parameters; or
- the equation is fully multiplicative and involves only variables declared as log variables, e.g. X = A^alpha * B^beta * C^gamma; where all four variables are log variables.
In your case, you can do two things: either you rewrite the model in a way that respects the above constraints, or you run nonlinear simulations (using the option 'nonlinear=').
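To see where the gap in tr comes from, here is a small numeric sketch (with illustrative steady-state numbers, not taken from Hector's model) of what the first-order solution does to tr = dy + pr when tr and dy are log variables but pr is not:

```matlab
% tr = dy + pr, with tr and dy declared as log variables and pr not.
% The first-order solution linearizes the equation around the steady
% state (tr0, dy0, pr0) in terms of log(tr), log(dy) and the level pr:
%   tr0*log(tr/tr0) = dy0*log(dy/dy0) + (pr - pr0)
tr0 = 0.06;  dy0 = 0.04;  pr0 = 0.02;   % illustrative steady state
dy  = 0.05;  pr  = 0.03;                % a point away from steady state
tr_exact = dy + pr;                     % exact value: 0.08
tr_lin   = tr0*exp((dy0*log(dy/dy0) + (pr - pr0))/tr0);
% tr_lin is about 0.082 -- the difference is the linearization error,
% which vanishes only at the steady state itself
```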
Note also that all these constraints follow from the fact that the model object in IRIS has been designed primarily for models with forward-looking expectations. I'm currently working on extending IRIS to also include more efficient techniques for backward-looking models (such as most of the work I have seen posted by you).
Best,
Jaromir


Nov 14, 2014 at 5:07 PM
Edited Nov 14, 2014 at 5:22 PM

Jaromir,
For nonlinear simulations I propose converting the plainlinear function to a MEX file written in C or Fortran. The main bottleneck is the evaluation of anticipation functions in matrix form. For this purpose we should use the Intel MKL library.
Iskander



Hi Iskander,
Thanks for looking into this. I myself would prefer using Java (simply because the JVM is platform independent). Do you have any sense of how Java compares to C and Fortran in terms of computational speed?
I totally invite you to experiment with these things, as I don't always have time to push the frontiers in all possible directions.
Best,
Jaromir



Also, could you please email me a sample profile report where plainlinear accounts for a considerable amount of time in a nonlinear simulation?



Thank you both for your replies. My intention in declaring 'd' and 'e', for example, to be log variables and writing the equation po = d - e; was to write a linear version of the equation PO = D/E;. I initially understood that in doing so I was successfully writing a linear version of the model. From the comments above it seems that I should not be declaring these variables log variables, since this is an additive equation. Am I understanding this correctly?



Correct!
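For completeness, the correspondence Hector describes can be checked in a couple of lines of plain MATLAB (identifiers and numbers are purely illustrative):

```matlab
% With d = log(D) and e = log(E) kept as ordinary (non-log) variables,
% po = d - e is fully additive, hence exactly linear, and it
% reproduces the log of the ratio PO = D/E:
D = 40;  E = 2;
po = log(D) - log(E);
disp(abs(po - log(D/E)) < 1e-12)   % identical up to rounding
```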

