"Perfect foresight" assumption

Hi everybody,

I am working on the design of distributed energy systems (DES), and most studies focus on deterministic optimization problems (by "deterministic" I mean that all information is known over the horizon, leading to the best solution we could hope for, but one that is unreachable in practice for a given scenario). Stochasticity is usually addressed by adding multiple scenarios to the MILP formulation, but the "perfect foresight" assumption (i.e., the absence of a non-anticipativity constraint) is rarely questioned: for each scenario, the problem is deterministic with perfect foresight of the future.

I understand it is a simple way to keep the problem computationally tractable, but to me the resulting design should at least be assessed on a simulator where the non-anticipativity constraint is respected, in order to draw conclusions about the quality/feasibility of the solution. Nothing proves a priori that the performance on the real system (where operation will rely only on past data) will match the required specifications of the case study.

I am a bit surprised that this "perfect foresight" hypothesis is so rarely challenged in the community, and I was wondering what your point of view on the question is.

Best,

Hugo

N.B.: For instance, I think this has strong implications, because in a deterministic world with perfect foresight, a 100% renewable production mix is of course feasible at least cost without any reserve margin, but nothing proves that it will deliver the same performance in the real world.

Hi @HugoR, it is certainly a big assumption to be making, and you’re right that modellers often get the wrong idea about ‘stochastic programming’ in linear optimisation; it’s still deterministic.

A few things you could look at:

  1. Creating a robust counterpart for your model and running the model with robust optimisation. It’s essentially a more conservative method than stochastic programming, since it optimises to account for all ‘possible’ realisations of an uncertain parameter. It has the disadvantage of only being able to describe uncertainty by its extreme bounds (e.g. +/- 10%) and not the shape of the uncertainty distribution.

  2. Requiring a reserve margin, maybe only referring to dispatchable technologies (a.k.a. ‘firm capacity’). E.g. these techs need to give the system at least +10% reserve in all timesteps. This will inevitably be dictated by the timestep with the biggest difference between demand and non-dispatchable technology output.

  3. Assessing the system ex-post with out-of-sample scenarios, to see how well it holds up to realisations of uncertainty. Once an optimisation problem has been solved, with or without multiple scenarios, you fix the capacities to the ‘optimal’ ones and run the model as a rolling horizon with either perfect or imperfect foresight - the latter emulating ‘the real world’. You can then measure the extent to which the system meets demand and, if it does, the ‘actual’ operational costs involved. Rolling horizon is (relatively) easy to solve computationally, and you can run lots of scenarios in parallel.
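To make the 3rd point concrete, here is a minimal sketch of an ex-post rolling-horizon simulation with fixed design capacities and imperfect foresight. Everything here is a hypothetical stand-in: the demand and wind profiles, the capacities, the persistence forecast, and a simple merit-order rule in place of a per-window LP.

```python
import numpy as np

rng = np.random.default_rng(0)

T, W = 168, 12                      # simulated horizon (h) and foresight window (h)
demand = 80 + 20 * np.sin(np.arange(T) * 2 * np.pi / 24) + rng.normal(0, 5, T)
wind = np.clip(rng.normal(40, 20, T), 0, None)   # hypothetical wind output (MW)

P_DISP = 60.0                       # dispatchable capacity fixed by the design stage (MW)
E_MAX, P_STO = 100.0, 30.0          # storage energy (MWh) and power (MW) limits
soc, unserved = 0.5 * E_MAX, 0.0

for t in range(T):
    # Imperfect foresight: a persistence forecast for wind over the window;
    # demand is assumed perfectly forecast, for simplicity.
    n = min(W, T - t)
    wind_fc = np.full(n, wind[t])
    dem_fc = demand[t:t + n]

    # Crude lookahead: keep enough stored energy to cover the worst forecast
    # hourly deficit that the dispatchable plant cannot meet on its own.
    peak_fut = float(np.max(np.clip(dem_fc[1:] - wind_fc[1:], 0, None), initial=0.0))
    hold_back = min(soc, max(0.0, peak_fut - P_DISP))

    # Commit only the current step, using the *realised* wind, not the forecast.
    deficit = demand[t] - wind[t]
    if deficit <= 0:                              # surplus: charge storage
        soc += min(-deficit, P_STO, E_MAX - soc)
    else:                                         # deficit: storage, then dispatchable
        discharge = min(deficit, P_STO, soc - hold_back)
        soc -= discharge
        unserved += max(0.0, deficit - discharge - P_DISP)

print(f"Unserved energy with imperfect foresight: {unserved:.1f} MWh over {T} h")
```

Re-running the same loop with `wind_fc` replaced by the realised future wind (`wind[t:t + n]`) gives a direct measure of what the perfect-foresight assumption is worth for a given design.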

I’m somewhat biased on the approach worth taking: we’ve been working on the 3rd point (see our implementation in Calliope), and I have just submitted a paper using it to assess the impact of the ‘perfect foresight’ assumption. Ultimately, any approach is still a vast approximation of reality, so your research questions and results discussion should be formulated in that context.


Hi,
Nice that you bring up this conversation. Related to the remark on robust optimization, I would just add this really nice and illuminating paper by Moret et al.:

https://scholar.google.com/scholar?hl=sv&as_sdt=0%2C5&q=moret+”robust+optimization&oq=moret+”robust+optimizatio#d=gs_qabs&u=%23p%3D_HSECPEOou4J

Lina

Hi @HugoR,

this is a very important and interesting point, but I do not completely agree with you when you say that the hypothesis of “perfect foresight” has rarely been challenged in the community of energy modellers. There has been quite some work on it, and hopefully more to come (looking forward to reading @brynpickering 's paper, which sounds super interesting).

In particular, I remember at least one quite well-known paper published in Nature Energy which precisely challenged that assumption: “Impact of myopic decision-making and disruptive events in power systems planning”, Heuberger et al., https://www.nature.com/articles/s41560-018-0159-3. As Bryn was saying, the idea is that perfect foresight can be avoided by using rolling-horizon optimisation, though in the paper I’m citing the focus was more on the planning perspective than on the operational detail.

When it comes, instead, to “operational detail” that approximates real-world constraints as closely as possible, apart from the already-cited Calliope operation mode, there is some more literature on MILP models combining rolling-horizon optimisation with discrete, real-life-relevant dispatch details (ramping constraints; primary, secondary and tertiary reserves; start-up and shut-down costs; and more). Usually such models only focus on the simulation of a pre-defined energy system configuration, again pretty much aligned with what @brynpickering was suggesting: the computational cost of such operational detail is offset by avoiding decisions on capacity expansion. So one could use those models as “real-life-relevant operational simulations” of energy system configurations obtained by other, less operationally-detailed models. This is what was done in a recent paper on which I collaborated, in which we used the open-source model Dispa-SET (designed precisely for this kind of operational simulation) to test some EU-wide energy system configurations obtained by the JRC-EU-TIMES model, under various degrees of smart flexibility mechanisms: https://www.sciencedirect.com/science/article/abs/pii/S0306261920306127
A mathematical comparison of the results obtainable with increasing degrees of operational detail has also been done with Dispa-SET by some of the same authors: https://www.sciencedirect.com/science/article/abs/pii/S0306261919310992

I’m sure there’s also a lot more that I’m currently forgetting or that I’ve not yet read, but I hope these contributions can be a starting point.

Best,

Francesco


For planning purposes, you can look at work done at US DOE on forecast error generation to calculate reserve requirements:
https://www.researchgate.net/publication/236489120_Wind_and_Load_Forecast_Error_Model_for_Multiple_Geographically_Distributed_Forecasts

I like your question @HugoR because it touches on the reasons why we model and how we should interpret model results. Having thought deeply about uncertainty in energy system models, and having used various permutations of stochastic programming, I think you are certainly correct in highlighting these shortcomings. Yes, stochastic programming is inherently a deterministic approach. However, it is not just the perfect-foresight aspects and non-anticipativity constraints which limit the approach, but also the limited number of branches that are often explored. Also, the application of weights or probabilities to each of the branches is problematic when you are dealing with parameters that are often deeply uncertain, or at least “uncertain”. Frank Knight defined uncertainty as the state where it is difficult to ascribe a probability to an event, even if you can define the event itself well; under this condition, a scenario analysis where you purposely do not ascribe probabilities is a functional approach.

On the other hand, robust optimisation approaches are rather conservative, as they assume we do not know the likelihood of any future events. And myopic models are still deterministic, just over a shorter horizon.
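The contrast between the probability-weighted (stochastic) and worst-case (robust) selection criteria can be made concrete with a toy example; all designs, costs and weights below are invented purely for illustration.

```python
import numpy as np

# Hypothetical total cost of three candidate designs under four scenarios.
# Rows = designs, columns = scenarios.
cost = np.array([
    [100, 110, 120, 300],   # cheap design: fails badly in the extreme scenario
    [130, 135, 140, 160],   # middle-of-the-road design
    [150, 150, 150, 155],   # conservative design: similar cost everywhere
])
prob = np.array([0.4, 0.3, 0.2, 0.1])   # scenario weights (hard to justify!)

stochastic_pick = int(np.argmin(cost @ prob))      # minimise expected cost
robust_pick = int(np.argmin(cost.max(axis=1)))     # minimise worst-case cost

print(stochastic_pick, robust_pick)   # the two criteria pick different designs
```

The cheap design wins on expected cost, while the conservative design wins on worst-case cost; when the parameters are Knightian-uncertain, the expected-cost ranking is only as credible as the weights fed into it.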

So we are left with a range of imperfect approaches to choose from to deal with the fundamental issues of decision making under uncertainty in the energy sector…

Some recent papers addressing the original question:

Groissböck, Markus (1 March 2019). “Are open source energy system optimization tools mature enough for serious use?”. Renewable and Sustainable Energy Reviews. 102: 234–248. ISSN 1364-0321. doi:10.1016/j.rser.2018.11.020. Closed access.

Tozzi, Peter and Jin Ho Jo (1 December 2017). “A comparative analysis of renewable energy simulation tools: performance simulation model vs. system optimization”. Renewable and Sustainable Energy Reviews. 80: 390–398. ISSN 1364-0321. doi:10.1016/j.rser.2017.05.153. Closed access.

Trutnevyte, Evelina (1 July 2016). “Does cost optimization approximate the real-world energy transition?”. Energy. 106: 182–193. ISSN 0360-5442. doi:10.1016/j.energy.2016.03.038. Closed access.

Yue, Xiufeng, Steve Pye, Joseph DeCarolis, Francis GN Li, Fionn Rogan, and Brian Ó Gallachóir (1 August 2018). “A review of approaches to uncertainty assessment in energy system optimization models”. Energy Strategy Reviews. 21: 204–217. ISSN 2211-467X. doi:10.1016/j.esr.2018.06.003. Closed access.


Thanks everybody for your replies and fruitful insights. I now have plenty to read, and that’s a good thing!

Hugo
