Hey Matthias,

I assume you are only doing a dispatch optimisation? In the case of an investment optimisation, this approach would obviously not work.

Generally, you may create the different model instances, compute them (even in parallel, using e.g. Python's multiprocessing package) and then concatenate the results. BUT:

Of course, you need to check what happens to all model variables that appear in constraints linking two (or more) consecutive timesteps. These will be storage filling levels, but also minimum up/down times, gradients, and for specific components like the DSMSink even more (really cool demand side component ;))
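To make the storage point concrete, here is a toy illustration (plain Python, no solver) of why naively splitting the horizon breaks the filling-level balance unless the boundary level is handed over:

```python
def storage_levels(flows, s0=0.0):
    """Storage balance: level at t depends on the level at t-1,
    so consecutive timesteps are coupled. f > 0 charges, f < 0 discharges."""
    levels, s = [], s0
    for f in flows:
        s += f
        levels.append(s)
    return levels

flows = [2.0, 1.0, -1.5, 0.5]
full = storage_levels(flows)

# Solving two chunks independently (both starting at s0=0) diverges
# from the full solution; handing the boundary level over fixes it.
naive = storage_levels(flows[:2]) + storage_levels(flows[2:])
linked = storage_levels(flows[:2]) + storage_levels(
    flows[2:], s0=storage_levels(flows[:2])[-1]
)
```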

That's why the approach is called "rolling". You start with the first e.g. n=168 timesteps, then fix the first n-x variables (say fix the first day, i.e. 168-144 = 24 timesteps), then run for timesteps 24…168+24, and so on.
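A minimal sketch of that rolling loop with the toy numbers from above; `solve_window` is a hypothetical placeholder, and the state handover (e.g. the storage filling level) is left trivial:

```python
def solve_window(start, stop, initial_state):
    """Hypothetical stand-in: build and solve the model for the
    timesteps [start, stop), starting from initial_state (e.g. the
    storage filling level at the end of the previously fixed part)."""
    results = [f"dispatch_t{t}" for t in range(start, stop)]
    end_state = initial_state  # real code: state after the kept steps
    return results, end_state

horizon, step, total = 168, 24, 8760   # weekly window, fix one day per roll
fixed, state = [], 0.0
for start in range(0, total, step):
    stop = min(start + horizon, total)
    window, state = solve_window(start, stop, state)
    fixed.extend(window[:step])        # keep/fix only the first day
```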

Now: you could implement this rolling approach. However, I am not sure whether it will speed up your whole tool, because you will construct a lot of (small) models. This again will depend on the complexity of your model. If it is a rather easy-to-solve, purely linear model, it is maybe not the best way to go. If you have a rather hard-to-solve MILP, a rolling horizon may make sense.

Finally, if you are looking to reduce the runtime of your model, you may reduce the number of timesteps by selecting representative periods from your timeseries data, using e.g. [tsam](https://github.com/FZJ-IEK3-VSA/tsam), and then run a model for a couple of representative weeks instead of a whole year (some accuracy will be lost). This approach has been used by @CKaldemeyer and is documented in his PhD thesis, pp. 62.
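For illustration only, a stdlib-only sketch of the representative-periods idea (tsam does this properly with real clustering; here the 52 weeks are crudely grouped by their total energy and one week stands for each group, weighted by group size):

```python
import math

# Toy hourly demand series for one year
demand = [100 + 50 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]

weeks = [demand[w * 168:(w + 1) * 168] for w in range(52)]  # 52 full weeks
k = 4                                    # number of representative weeks
ranked = sorted(range(52), key=lambda w: sum(weeks[w]))  # rank by weekly energy
groups = [ranked[i * 13:(i + 1) * 13] for i in range(k)]  # 4 groups of 13
reps = [g[len(g) // 2] for g in groups]  # pick the "middle" week per group
weights = [len(g) for g in groups]       # each rep week stands for 13 weeks
# A model run over the 4 rep weeks, with results scaled by `weights`,
# approximates the full-year run at a fraction of the timesteps.
```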

Hope this helps.