Error using horzcat - Out of memory issue with judgmental forecast

Jun 23, 2015 at 11:25 AM
When I try to run a judgmental forecast on a large dataset, I get an out-of-memory error. The dataset has 1932 exogenised data points. This is the error message I get:

Error in model/myforecastswap (line 88)
M = [Myc,My0,Myu,Mye;Mxc,Mx0,Mxu,Mxe];

In myforecastswap the arguments take these values:
ny = sum(This.nametype == 1);            % number of measurement variables (23)
nx = size(This.solution{1},1);           % number of transition variables (112)
nb = size(This.solution{1},2);           % number of backward-looking variables (71)
nf = nx - nb;                            % number of forward-looking variables (41)
ne = sum(This.nametype == 3);            % number of shocks (39)

xCurri = imag(This.solutionid{2}) == 0;  % current-dated transition variables (1x112 logical)
nXCurr = sum(xCurri);                    % (104)
fCurri = xCurri(1:nf);                   % forward-looking part (1x41)
bCurri = xCurri(nf+1:end);               % backward-looking part (1x71)

Then this matrix gets really large:
M = [Myc,My0,Myu,Mye;Mxc,Mx0,Mxu,Mxe];
with block sizes [1932x1, 1932x71, 1932x3276, 1932x3276; 8736x1, 8736x71, 8736x3276, 8736x3276]
and my computer simply runs out of memory on this line.
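
A rough memory estimate from those block sizes (doubles, 8 bytes each; the row and column counts below just restate the dimensions above):

nRows = 1932 + 8736;              % stacked measurement and transition rows
nCols = 1 + 71 + 3276 + 3276;     % constant, initial-condition, Mu and Me blocks
fprintf('M alone takes ~%.0f MB\n', nRows*nCols*8/2^20);   % about 540 MB

and the concatenation needs all the blocks plus the result in memory at the same time, so the peak usage is a multiple of that.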

I use plan and the provided exogenise / endogenise functions:
Var = get(m,'yList');     % measurement variable names
e_Var = get(m,'eList');   % shock names

p = plan(m,startfcst:endfcst);
p = exogenise(p,Var,startfcst:endfcst);
p = endogenise(p,e_Var,startfcst:endfcst);
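
The plan then goes into jforecast roughly like this (d is a placeholder name for the input database that holds the exogenised values):

f = jforecast(m, d, startfcst:endfcst, 'plan=', p);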

This works with a shorter dataset, but my aim is to increase the number of observables, so the problem may appear even with a shorter forecast period.
Does anyone have ideas on how to make the matrices smaller? Or on how to change the judgmental forecast procedure?

Sincerely,

Jukka
Jun 23, 2015 at 10:32 PM
How many periods out are you trying to do the endo/exo swap? If you have a sensible process for the endogenous variables you are trying to exogenise, you should not really need to fix them that far into the future.
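
For example, something like this (using your own variable names, with a made-up swapRange covering only the first eight quarters):

swapRange = startfcst : startfcst+7;    % hypothetical shorter swap range
p = plan(m, startfcst:endfcst);
p = exogenise(p, Var, swapRange);
p = endogenise(p, e_Var, swapRange);

which cuts down the number of exogenised points the solver has to handle.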

While it's probably possible to improve the memory performance of jforecast, I doubt that significant reductions can be made: the big problem is solving a large linear system for shocks in all periods such that the endogenous variables match their exogenised values in all periods.
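
A toy stand-in (not the actual IRIS code, just random numbers with the same dimensions as your post) gives an idea of the scale:

nExog = 1932;                  % exogenised data points, from the post above
nEndog = 39*84;                % endogenised shocks: ne shocks over 84 periods
Mtoy = randn(nExog, nEndog);   % stand-in for the dense swap matrix (~50 MB by itself)
yStar = randn(nExog, 1);       % exogenised target values
eSol = Mtoy \ yStar;           % one basic solution of the underdetermined system

Dense systems of that size are what eat the memory, regardless of how the blocks are assembled.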

You can get 16 GB of RAM on Newegg for under $100. But of course I understand not everyone has control over their machines, and not all employers recognize the value of having the extra memory.
Jun 24, 2015 at 11:43 AM
Hi,

thanks for the answer. In the example I use quarterly data for 20 years, but I guess 10 years would be enough for my purposes. Currently I can have data for 4 years, which is unfortunately too short.

I do not have sensible processes for all the endogenous variables, hence I am trying to do a kind of reverse engineering and let IRIS solve for all the shocks. And even if the processes were fine, I would still like to use the model to replicate paths for the endogenous variables that are given from outside the model.

I'll then ask for more memory.

Br.

Jukka
Jun 24, 2015 at 10:35 PM
Well, I don't really know what you're doing or what the context is, but a lot of TROLL users and people accustomed to running stacked-time simulations (finite-period perfect foresight) are used to running simulations much longer than the actual forecast period, with endogenous variables fixed potentially over the entire horizon, in order to eliminate any effect of the associated model processes and/or terminal conditions. The algorithms in IRIS are not as well suited for this kind of work, or are at least not as memory-efficient. But if you try specifying sensible processes for the things you are trying to exogenise, the impact of the shorter endo/exo swap should not really matter over your forecast horizon.
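
To be concrete, by a "sensible process" I just mean a simple equation in the model file, e.g. an AR(1) (rho_z and eps_z are made-up names):

z = rho_z*z{-1} + eps_z;

With something like that in the model, you only need to exogenise z over the periods where you actually have judgment, and the process carries it forward from there.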

Yeah, or buy more RAM. :)
Jun 24, 2015 at 10:39 PM
This is the type of unsupported but more efficient algorithm I am speaking about:

http://www.unige.ch/ce/ce96/ps/hollinge.eps

Could be implemented in IRIS, in principle, and would actually be useful to me... but I am currently stretched too thin on other projects (at my real job) to implement this. It's on my wish list. Maybe next year. :)