Maintenance is time-consuming: user meetings, debugging, documentation, consolidating the tools and pieces of code developed by users, and organising development and maintenance (group management)
–> Join forces to spread the effort across many shoulders
But how can we identify the advantages of each model and join forces?
–> Platforms that do the comparisons: Energy Modeling Forum, Energy Modelling Platform for Europe
Framework fact sheets: add a section where specific parts of the libraries, plugins, etc. can be described.
Who should be concerned?
Those maintaining a model with a small number of developers
Those maintaining a model with a high number of forks (hard to keep track of their quality)
Those who are thinking about moving to another language
Ultimate target: One super model?
–> No! Two or three models are good for comparing results
Observations from OSeMOSYS
- we are already profiting from exchange in openmod.
- what is already there is tailored to some categories of users.
- how would this happen in practice, and efficiently? Some models are so widely used that they cannot be changed centrally.
- How to:
 - An overview of the existing models is a first step, in order to see which models can be joined; also sharing their I/O libraries. A comparison of model capabilities, libraries, and model (software) structure is necessary. UML could be used to describe the structure.
 - Then, bilateral discussions on the exact special features of each model. Those need to be kept! Each team will take care of maintaining its own special feature. Organisation: there should be rules to monitor the process, and a contribution guideline!
Alternatives: Soft linking is also an option to keep the strengths of different models.
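In practice, soft linking often amounts to translating one model's output files into another model's input format, with renaming and unit conversion in between. A minimal Python sketch, assuming hypothetical CSV columns and technology names (none of these come from an actual model):

```python
import csv
import io

def softlink(capacity_csv: str, tech_map: dict) -> list:
    """Translate a capacity-expansion model's CSV output rows into
    dispatch-model input rows (all column/technology names are hypothetical)."""
    rows = []
    for rec in csv.DictReader(io.StringIO(capacity_csv)):
        rows.append({
            # rename technologies to the receiving model's vocabulary
            "unit": tech_map.get(rec["technology"], rec["technology"]),
            # convert units: GW in model A, MW expected by model B
            "p_max_mw": float(rec["installed_gw"]) * 1000,
            "year": int(rec["year"]),
        })
    return rows

# Example: output of model A, fed to model B after renaming and conversion
out_a = "technology,installed_gw,year\nsolar_pv,12.5,2030\nwind_on,8.0,2030\n"
print(softlink(out_a, {"solar_pv": "PV", "wind_on": "OnshoreWind"}))
```

The point is that each model keeps its own strengths and internal structure; only a thin translation layer needs joint maintenance.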
Legacy issues of branding
Maintenance of individual models is easier than contributing to a big model
Are the special features in the core, or are they on the outside, so that they can be plugged into another model?
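One way to keep special features outside the core is a plugin registry: the core defines a small interface and evaluates whatever features are installed. A minimal Python sketch (the interface and all names are illustrative, not from any of the models mentioned):

```python
from typing import Callable, Dict

# The core keeps only a registry; special features register themselves.
CONSTRAINTS: Dict[str, Callable[[dict], float]] = {}

def register(name: str):
    """Decorator: add a constraint plugin to the core registry."""
    def wrap(fn: Callable[[dict], float]):
        CONSTRAINTS[name] = fn
        return fn
    return wrap

# A team-maintained special feature lives outside the core:
@register("emission_cap")
def emission_cap(model: dict) -> float:
    # returns the slack; >= 0 means the cap is respected
    return model["cap"] - model["emissions"]

def check(model: dict) -> dict:
    """Core logic: evaluate whichever plugins happen to be installed."""
    return {name: fn(model) for name, fn in CONSTRAINTS.items()}

print(check({"cap": 100.0, "emissions": 80.0}))  # {'emission_cap': 20.0}
```

With this split, each team can maintain its own plugin independently, and the core stays small enough to be maintained jointly.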
Academic incentives (publications will not be linked to the original model / library author) and ego could be hurdles.
Benefits: quality increases, credibility increases, engagement with society, more transparency
Experience from the LCA world: funding by local / national governments, aggregated effort on a global level.
Experience from the building level: many institutions took part in an effort to improve Modelica. There were some pitfalls and successes.
Experience from OSeMOSYS: steering committee (executive body), community management. Similar to the editorial board of journals.
IEA project versus H2020 project: the IEA used to fund projects where researchers would concentrate on developing/consolidating the models (?)