optimization.py: Fix "epsilons" keyword argument in _optimize() #150
EwoutH merged 1 commit into quaquel:master

Conversation
The keyword argument "epsilon" isn't used anywhere else in the EMAworkbench; it should be "epsilons" (with an s). So currently it doesn't throw an error when you define a list of epsilons of the wrong length, because it searches for "epsilon" instead of "epsilons". This commit fixes that.
---
It would also be good to add the updated tests to this PR. In my view, the check on the correct number of epsilons actually belongs in platypus-opt rather than in the workbench, but in the meantime it is good to have it here. Note that you only need a number of epsilon values equal to the number of objectives (i.e., non-INFO outcomes defined on a model). I am unsure how much documentation I want to do in the workbench for functionality from other libraries (e.g., SALib, platypus_opt).
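The point that only non-INFO outcomes need an epsilon can be sketched as follows; the `Outcome` class and the kind constants are stand-ins for the workbench's outcome types, not its actual API:

```python
from dataclasses import dataclass

# Stand-in kind constants, mimicking minimize/maximize/info outcome kinds.
MINIMIZE, MAXIMIZE, INFO = -1, 1, 0


@dataclass
class Outcome:
    name: str
    kind: int = INFO


def n_required_epsilons(outcomes):
    # Only non-INFO outcomes are objectives of the optimization;
    # INFO outcomes are merely recorded and need no epsilon.
    return sum(1 for o in outcomes if o.kind != INFO)


outcomes = [
    Outcome("costs", MINIMIZE),
    Outcome("reliability", MAXIMIZE),
    Outcome("intermediate_state"),  # INFO: no epsilon needed
]
print(n_required_epsilons(outcomes))  # 2 objectives, so 2 epsilons
```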
---
Shall we merge this, split out the testing into a separate issue, and open a broader discussion about the scope of the EMAworkbench somewhere (and sometime) else?
---
I would prefer to have the additional tests here, after which this can be merged. The broader discussion on scope can move somewhere else.
---
How do we want to handle this? We can merge this into master and create a separate issue for adding more unit tests for optimization.py. Alternatively, I can close this and create a new pull request with both the unit tests and the fix.
---
I would like to take a crack at writing the unit tests myself. Testing is an area in which I would like some more experience.
The keyword argument epsilon isn't used anywhere else in the EMAworkbench; it should be epsilons (with an s). So currently the _optimize() function (and all functions that call it) doesn't throw an error when you define a list of epsilons of the wrong length, because it searches for epsilon instead of epsilons. This PR fixes this.

Fixing this error should help catch this a lot sooner, but a bit of additional docs describing the behavior of different epsilon values, why and how certain values can be chosen, and how exactly to define them should clear it up even more. I think that can be included in #147.
Maybe we can also include tests for this function: one with a list of epsilon values of the right length and one with a list of the wrong length.
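Those two tests could look roughly like this; `check_epsilons` is a hypothetical helper mirroring the length check `_optimize()` should perform, and a real test would instead call `_optimize()` through the workbench:

```python
import unittest


def check_epsilons(epsilons, n_objectives):
    # Hypothetical stand-in for the validation _optimize() should perform.
    if len(epsilons) != n_objectives:
        raise ValueError(
            f"got {len(epsilons)} epsilons for {n_objectives} objectives"
        )


class TestEpsilons(unittest.TestCase):
    def test_correct_length(self):
        # Matching lengths: no exception expected.
        check_epsilons([0.1, 0.1, 0.1], n_objectives=3)

    def test_wrong_length(self):
        # A mismatched length should raise, not be silently ignored.
        with self.assertRaises(ValueError):
            check_epsilons([0.1], n_objectives=3)


if __name__ == "__main__":
    unittest.main(argv=["test"], exit=False)
```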
On a more personal note, this behavior threw me way off and unfortunately invalidates a lot of our results, because we thought we were defining epsilons correctly. We also should have provided epsilon values for all outcomes that weren't used as KPIs, instead of defining epsilon values only for our KPIs.