"Perfect foresight" assumption

Hi @HugoR, it is certainly a big assumption to be making, and you’re right that modellers often get the wrong idea about ‘stochastic programming’ in linear optimisation: the resulting problem is still deterministic, just solved over a fixed set of pre-defined scenarios.

A few things you could look at:

  1. Creating a robust counterpart for your model and running the model with robust optimisation. It’s essentially a more conservative method than stochastic programming, since it optimises to account for all ‘possible’ realisations of an uncertain parameter. It has the disadvantage of only being able to describe uncertainty by its extreme bounds (e.g. +/- 10%) and not the shape of the uncertainty distribution.
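A minimal sketch of what the robust counterpart does in the box-uncertainty case, using a toy capacity-expansion LP (all names and numbers here are illustrative, not from any specific model): demand is only known to lie within +/-10% of a nominal forecast, so the robust constraint is enforced at the worst-case bound of that box.

```python
# Toy robust counterpart under box uncertainty: demand d_t lies in
# [d_t*(1-gamma), d_t*(1+gamma)], so the constraint x >= d_t must hold
# at the worst case, i.e. x >= d_t*(1+gamma) for every timestep t.
import numpy as np
from scipy.optimize import linprog

nominal_demand = np.array([100.0, 120.0, 90.0])  # MW, illustrative forecast
gamma = 0.10                                     # +/-10% uncertainty bound
cost_per_mw = 50.0                               # annualised capacity cost

worst_case = nominal_demand * (1 + gamma)        # upper bound of the box

# linprog minimises c @ x subject to A_ub @ x <= b_ub;
# rewrite x >= worst_case as -x <= -worst_case.
res = linprog(
    c=[cost_per_mw],
    A_ub=-np.ones((len(worst_case), 1)),
    b_ub=-worst_case,
    bounds=[(0, None)],
)

robust_capacity = res.x[0]          # sized for the worst realisation
nominal_capacity = nominal_demand.max()  # what a deterministic run would build
print(robust_capacity, nominal_capacity)
```

Note the conservatism: the robust solution builds to the worst corner of the uncertainty box (here 132 MW vs 120 MW deterministically), with no notion of how likely that corner is.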

  2. Requiring a reserve margin, perhaps applying only to dispatchable technologies (a.k.a. ‘firm capacity’). E.g. these techs must give the system at least +10% reserve in all timesteps. The requirement will inevitably be dictated by the timestep with the biggest gap between demand and non-dispatchable technology output.
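The reserve-margin logic can be sketched in a few lines, assuming hypothetical hourly data (the numbers are made up for illustration): firm capacity must exceed residual demand, i.e. demand minus non-dispatchable output, by 10% in every timestep, so the requirement is set by the worst residual timestep.

```python
# Reserve-margin sketch: required firm capacity is driven by the timestep
# with the largest residual demand (demand minus non-dispatchable output).
import numpy as np

demand = np.array([100.0, 130.0, 120.0, 95.0])         # MW, illustrative
non_dispatchable = np.array([40.0, 10.0, 60.0, 50.0])  # MW, e.g. wind/solar

reserve_margin = 0.10
residual = demand - non_dispatchable  # what firm capacity must cover

# Required firm capacity: worst-case residual demand, plus the margin
required_firm = residual.max() * (1 + reserve_margin)
binding_timestep = int(residual.argmax())  # the timestep that dictates it

print(required_firm, binding_timestep)
```

In this example the binding timestep is the one with high demand and low wind, not the peak-demand timestep, which is exactly the point of referencing residual rather than gross demand.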

  3. Assessing the system ex-post with out-of-sample scenarios, to see how well it holds up to realisations of uncertainty. Once an optimisation problem has been solved, with or without multiple scenarios, you fix the capacities to the ‘optimal’ ones and run the model as a rolling horizon with either perfect or imperfect foresight - the latter emulating ‘the real world’. You can then measure the extent to which the system meets demand and, if it does, the ‘actual’ operational costs involved. Rolling horizon is (relatively) easy to solve computationally, and you can run lots of scenarios in parallel.
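The ex-post rolling-horizon idea can be sketched as follows, under strong simplifying assumptions: the data are hypothetical, and dispatch within each window uses a trivial rule rather than re-solving an optimisation problem per window as a real implementation (e.g. Calliope's operate mode) would. Capacities are fixed to the planning solution; we then step through ‘realised’ demand with a short foresight window and record unserved energy and operating cost.

```python
# Ex-post rolling-horizon sketch: fix planned capacity, step through
# realised demand window by window, and tally lost load and operating cost.
import numpy as np

demand = np.array([90.0, 110.0, 140.0, 100.0, 80.0, 130.0])  # MW, 'realised'
gen_capacity = 120.0   # MW, fixed from the planning run
fuel_cost = 30.0       # cost per MWh generated

window = 2             # foresight horizon in timesteps
unserved = 0.0
op_cost = 0.0

for start in range(0, len(demand), window):
    # The operator only 'sees' demand inside the current window
    seen = demand[start:start + window]
    for d in seen:
        dispatch = min(d, gen_capacity)  # trivial dispatch rule
        unserved += d - dispatch         # lost load in this timestep
        op_cost += dispatch * fuel_cost

print(unserved, op_cost)
```

With only a single generator the window length changes nothing, but as soon as you add inter-temporal coupling (storage, ramping) the foresight horizon starts to matter, which is what the perfect- vs imperfect-foresight comparison measures.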

I’m somewhat biased on which approach is worth taking: we’ve been working on the 3rd point (see our implementation in Calliope), and I have just submitted a paper using it to assess the impact of the ‘perfect foresight’ assumption. Ultimately, any approach is still a vast approximation of reality, so your research questions and results discussion should be formulated in that context.
