Hi everybody,

I am working on the design of distributed energy systems (DES), and most studies focus on deterministic optimization problems (by « deterministic » I mean that all information is known over the horizon, yielding « the best we could afford » but an unreachable solution for a given scenario). Stochasticity is usually addressed by adding multiple scenarios to the MILP formulation, but the « perfect foresight » assumption (i.e. the absence of a non-anticipativity constraint) is rarely questioned: within each scenario, the problem is still solved deterministically with perfect foresight of the future.

I understand it is a simple way to keep the problem computationally tractable, but in my view the resulting design should at least be assessed on a simulator where the non-anticipativity constraint is enforced, in order to conclude on the quality/feasibility of the solution. Nothing proves a priori that the performance on the real system (where operation can only rely on past data) will match the specifications required by the case study.
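To make the gap concrete, here is a toy sketch (all numbers, the price series, and the threshold policy are hypothetical illustrations, not from any study): a 1 MWh storage unit arbitraging a known price series. Under perfect foresight the optimum is computed over the whole series; the causal policy is a fixed threshold rule that only sees the current price, i.e. it respects non-anticipativity.

```python
# Hypothetical price scenario (EUR/MWh), one value per time step.
prices = [30, 20, 50, 25, 60, 40]

def perfect_foresight_profit(prices):
    # With full knowledge of the series and a 1 MWh store that charges or
    # discharges fully in one step, the optimal arbitrage profit is the
    # sum of all positive consecutive price increases (classic result for
    # unlimited 1-unit buy/sell cycles).
    return sum(max(b - a, 0) for a, b in zip(prices, prices[1:]))

def causal_profit(prices, buy_below=22, sell_above=45):
    # A non-anticipative rule with thresholds fixed in advance: charge when
    # the price is low, discharge when it is high, using only the current
    # observation. The thresholds here are arbitrary illustrative choices.
    soc, profit = 0, 0.0
    for p in prices:
        if soc == 0 and p <= buy_below:
            soc, profit = 1, profit - p   # charge (buy 1 MWh)
        elif soc == 1 and p >= sell_above:
            soc, profit = 0, profit + p   # discharge (sell 1 MWh)
    return profit

print(perfect_foresight_profit(prices))  # 65
print(causal_profit(prices))             # 30
```

The perfect-foresight figure is what a deterministic MILP would report for this scenario, while the causal rule earns less than half of it on the very same data: this is exactly the optimism that a non-anticipative simulation would expose.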

I am a bit surprised that this « perfect foresight » hypothesis is so rarely challenged in the community, and I was wondering what your point of view is on the question.

Best,

Hugo

N.B.: For instance, I think this has strong implications: in a deterministic world with « perfect foresight », a 100% renewable production mix is of course feasible at least cost without any reserve margin, but nothing proves that it will achieve the same performance in the real world.