# Oemof solving time of non-convex investments

Hi all,

We (@GregorB and I) have a problem solving our oemof energy system model in combination with non-convex investment decisions.

Our model optimizes the power and heat supply of 7 buildings. The investment options include a district heating network with a total of 21 pipeline sections, for each of which one of two pipe types (DN20 or DN32) can be chosen.

Now the problem: if we let the model size the pipelines linearly (continuously), solving takes about 10 minutes. If we model the pipe selection as a non-convex (binary) investment decision, we get no result even after 24 hours of computation. We use the Gurobi solver with default settings.

Has anyone had such a problem before? We are grateful for every hint!

I have not really worked much with non-convex investments, nor have I simulated district heating networks.
But I think your problem is a rather general one: the solution time of MILP models grows exponentially with the number of binary variables.
So you might think of either

• formulating the model as a linear one,
• using more computation power (especially memory may have become a bottleneck for you, and swapping to disk is slow), or
• reducing the complexity of your model (at the expense of global optimality) by using time slicing, rolling horizon or another method of complexity reduction.

Unfortunately, this exploding complexity can already occur for models which you think of as small.
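To make this concrete for the network described above, a back-of-the-envelope count (an assumption on my part: each of the 21 pipe sections can be left out, built as DN20, or built as DN32) shows how large the discrete design space already is:

```python
# Rough size of the discrete design space for the network described above:
# 21 pipe sections, each either not built, DN20, or DN32 (assumed options).
sections = 21
options_per_section = 3
combinations = options_per_section ** sections
print(combinations)  # 10460353203, i.e. roughly 10^10 discrete designs
```

The solver does not enumerate all of these, of course, but the branch-and-bound tree it explores grows on the same scale.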

Could you give some additional information: What error message is thrown? I assume it is an OutOfMemoryError, right? Or is it another issue? Tracking down a possible infeasibility would be a bit harder: you would have to go through your parameterization and check for contradictory constraints resulting from it.

Binary decisions are really problematic: you end up with a combinatorial problem. It might help if you add extra constraints so that pipes in upstream branches cannot be smaller than the ones in the downstream branches.
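A minimal sketch of the pruning effect of such an ordering constraint, on a hypothetical toy tree of three pipes (sizes encoded as 0 = DN20, 1 = DN32; the names are illustrative, not oemof API):

```python
from itertools import product

# Hypothetical toy tree: pipe "up" feeds pipes "down1" and "down2".
pipes = ["up", "down1", "down2"]

def respects_ordering(sizes):
    # Upstream pipe must not be smaller than any downstream pipe.
    return sizes["up"] >= max(sizes["down1"], sizes["down2"])

all_designs = [dict(zip(pipes, s)) for s in product((0, 1), repeat=3)]
valid = [d for d in all_designs if respects_ordering(d)]
print(len(all_designs), len(valid))  # 8 5
```

Even in this tiny example the constraint removes 3 of 8 combinations; over 21 sections the effect compounds, which can shrink the branch-and-bound tree considerably.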

Hey,

24 hours is not very unusual for a big MILP, in particular, if many binary variables are involved.

In addition to what dlr_jk wrote, you can try to set the solver's MIP gap to a higher value, as the last iterations close to the optimum often require a lot of time but don't yield a significantly different result.

You can pass it to the `solve` method, which takes the solver options as dictionaries (using `tee=True` to stream the solver output and watch how the objective function converges can help as well).

We provide access to the underlying pyomo functionalities within the `solve` method; you should be able to set the options as follows:

```python
model.solve(solver="gurobi", solve_kwargs={"tee": True}, cmdline_options={"mipgap": 0.01})
```

This will result not in the optimal but in a near-optimal solution. For most problems, a 1 % MIP gap will still be fine.

Hello all,

thank you for your very quick replies!

@dlr_jk
We currently have 128 GB of memory available, of which the model is using about 25 GB. We do not get an error message; the solver just does not find a solution. I am attaching the current status of our test run as an image to this post (MIP gap of 35.0 % after ~39 h).

By reducing the number of time steps, we can achieve a significant speedup of the model (72 time steps => ~30 s runtime; 864 time steps => ~2 h runtime). Since the goal of our current study is to validate exactly these simplifications, we need to solve at least one model run without simplifications to be able to compare the results afterwards.

@simnh
Unfortunately, our MIP gap usually does not come close to 1 % for long runs, but stays in the 30 % range for a very long time (see figure). We are currently trying to speed things up with the Gurobi parameters (Method, MIPFocus, Heuristics, …), but have had no significant success so far. Do you have any experience in this area?
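For reference, a sketch of how such parameters can be forwarded through `cmdline_options` (the parameter names are genuine Gurobi parameters; the values are only illustrative, not our actual settings):

```python
# Illustrative Gurobi tuning parameters to forward via cmdline_options.
gurobi_options = {
    "MIPGap": 0.05,     # stop at a 5 % optimality gap
    "MIPFocus": 1,      # prioritize finding feasible solutions quickly
    "Heuristics": 0.2,  # spend ~20 % of the time on heuristics
    "Method": 2,        # barrier method for the LP relaxations
}
# model.solve(solver="gurobi", solve_kwargs={"tee": True},
#             cmdline_options=gurobi_options)
print(sorted(gurobi_options))
```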

@pschoen
I think you are right. By reducing the number of pipes, and therefore the number of binary decisions, we probably have the biggest potential for improvement. We will try to simplify the model at this point.

Thanks a lot for your help! I will report back if I find out anything new!