Scaling hints #1649
Conversation
MarcusHolly
left a comment
Minor comments, but nothing that should hold this back
```python
raise TypeError(f"{constraint} is not a constraint (or is indexed).")
self._scale_component_by_default(component=constraint, overwrite=overwrite)
```

```python
def get_expression_nominal_value(self, expression):
```
Is there not a better way to combine/differentiate the three nominal value functions?
We have deprecated `get_expression_nominal_values`, and it can be removed in the next release. The difference between `get_sum_terms_nominal_values` and `get_expression_nominal_value` is that the former returns a list while the latter returns a float.
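For readers skimming the thread, a minimal sketch of that distinction. The bodies are illustrative stand-ins (operating on a plain list of term values, with sum-of-magnitudes as the reduction), not the IDAES implementation:

```python
def get_sum_terms_nominal_values(terms):
    # Illustrative stand-in: one nominal magnitude per additive term (a list).
    return [abs(t) for t in terms]

def get_expression_nominal_value(terms):
    # Illustrative stand-in: collapse the per-term list into a single float.
    return float(sum(get_sum_terms_nominal_values(terms)))
```

The float-returning helper can then be a thin wrapper over the list-returning one, which is one way the two could be combined.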
```python
def propagate_state_data_scaling(
    self, target_state_data, source_state_data, overwrite: bool = False
):
    """
    Propagate scaling of state variables from one StateBlockData to another.

    Args:
        target_state_data: StateBlockData to set scaling factors on
        source_state_data: StateBlockData to use as source for scaling factors
        overwrite: whether to overwrite existing scaling factors

    Returns:
        None
    """
    target_vars = target_state_data.define_state_vars()
    source_vars = source_state_data.define_state_vars()

    for state, var in target_vars.items():
        for vidx, vardata in var.items():
            self.scale_variable_by_component(
                target_variable=vardata,
                scaling_component=source_vars[state][vidx],
                overwrite=overwrite,
            )
```
Similar point with this - is there a simple way to combine this with the existing function?
Is the proposed behavior to have propagate_state_scaling check whether the inputs are indexed or unindexed, then handle those cases appropriately?
^^^ I think this is right. The function signature and return value are the same, so might as well allow it to handle both types of inputs - that being said, I would make sure that the documentation clearly explains it.
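A rough sketch of the combined behavior being proposed. These are pure-Python stand-ins (the `FakeComponent` class and `propagate_scaling` function are hypothetical; `is_indexed`/`items` only mimic the Pyomo component API):

```python
class FakeComponent:
    """Minimal stand-in for a Pyomo component, indexed or scalar."""
    def __init__(self, data=None):
        self._data = data  # dict of index -> data object, or None if scalar

    def is_indexed(self):
        return self._data is not None

    def items(self):
        return self._data.items()

    def __getitem__(self, idx):
        return self._data[idx]

def propagate_scaling(target, source, scale_one, overwrite=False):
    """Single entry point that dispatches on indexed vs. unindexed inputs."""
    if target.is_indexed():
        # Indexed case: apply scaling element-by-element, matched by index.
        for idx, tdata in target.items():
            scale_one(tdata, source[idx], overwrite)
    else:
        # Scalar case: apply scaling directly.
        scale_one(target, source, overwrite)
```

The signature and return value stay the same either way, which is why one function can serve both cases, provided the docstring spells the dual behavior out.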
idaes/core/scaling/scaling_base.py
```python
CONFIG.declare(
    "max_variable_scaling_factor",
    ConfigValue(
        # default=float("inf"),
```
Better to just delete these default values?
I really want to get rid of these max and min values, because we're going to need to override them in pretty much every scaler in PrOMMiS, but they're required for the Autoscalers to function. I can certainly get rid of these comments, though.
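One way to keep the bounds the Autoscalers require while still overriding them downstream is to let derived scalers replace the defaults. A hypothetical sketch using plain dicts in place of the real `ConfigDict` machinery (class names and values are made up):

```python
class BaseScaler:
    # Defaults required for the Autoscalers to function.
    CONFIG = {
        "max_variable_scaling_factor": float("inf"),
        "min_variable_scaling_factor": 0.0,
    }

class TighterScaler(BaseScaler):
    # A downstream scaler (e.g. in PrOMMiS) narrowing the bounds.
    # dict(base, **overrides) copies the base config, then replaces entries.
    CONFIG = dict(
        BaseScaler.CONFIG,
        max_variable_scaling_factor=1e10,
        min_variable_scaling_factor=1e-10,
    )
```

This keeps the base defaults intact for scalers that rely on them while letting every PrOMMiS scaler tighten them in one place.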
strahlw
left a comment
First pass of the PR (it is very large) - there are certainly nuances that I am missing, but I left some comments on things that popped out.
```python
    None
    """
    if not (isinstance(variable, VarData) or isinstance(variable, ExpressionData)):
        raise TypeError(f"{variable} is not a variable/expression (or is indexed).")
```
See comment on scale_variable_by_units. I think it would be beneficial to have clearer feedback to the user if they encounter these types of errors.
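As an example of the clearer feedback being asked for, here is a hypothetical validator (name and signature invented for illustration) whose message pinpoints which check failed, indexed component versus wrong type, rather than combining both into one message:

```python
def validate_scalar_component(component, valid_types, what="variable/expression"):
    """Raise a TypeError whose message identifies the actual problem."""
    # Case 1: the user passed an indexed component instead of an element of it.
    if getattr(component, "is_indexed", lambda: False)():
        raise TypeError(
            f"{component} is an indexed {what}; please pass a single "
            f"element (e.g. component[index]) instead."
        )
    # Case 2: the component is the wrong type entirely.
    if not isinstance(component, valid_types):
        raise TypeError(
            f"{component} has type {type(component).__name__}; expected one of: "
            + ", ".join(t.__name__ for t in valid_types)
        )
```

Separating the two cases tells the user immediately whether to index into their component or to pass a different object.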
```python
    _log.debug(f"Created new scaling hint suffix for {_filter_unknown(blk)}")
    sfx = blk.scaling_hint = Suffix(direction=Suffix.EXPORT)

    return sfx
```
There's duplicate code in 84-94 and 121-131. I suggest breaking it out into a separate function and having an argument that determines whether it uses "hints" or "factors" in the type error message.
```python
if comp is not None:
    if valid_types is not None and not any(
        [isinstance(comp, cls) for cls in valid_types]
    ):
```
You can just use the generator here, no need to encapsulate in a list
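A quick demonstration of the point, plus an even simpler option, since `isinstance` already accepts a tuple of types:

```python
comp = 3.0
valid_types = (int, float, complex)

# List comprehension: builds the entire list before any() runs.
ok_list = any([isinstance(comp, cls) for cls in valid_types])
# Generator expression: any() short-circuits at the first match.
ok_gen = any(isinstance(comp, cls) for cls in valid_types)
# Simplest of all: isinstance takes a tuple of types directly.
ok_tuple = isinstance(comp, tuple(valid_types))

assert ok_list == ok_gen == ok_tuple == True
```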
```diff
 jac, _ = get_jacobian(sm, scaled=False)
 assert (jacobian_cond(jac=jac, scaled=False)) == pytest.approx(
-    4.987e05, rel=1e-3
+    5.445e05, rel=1e-3
```
I take it this changes because we now scale fixed variables for the "nominal" values.
```diff
 # Nominal value should be value
-assert NominalValueExtractionVisitor().walk_expression(expr=m.var) == [-1]
+assert NominalValueExtractionVisitor().walk_expression(expr=m.var) == [-4]
```
Assuming this is an understood/intended change: looking further down in the PR, it looks like a "fundamental" change here is that fixed values are no longer used as "nominal" values. If that understanding is correct, please update the comment on line 130.
```
Inverse Sum              || 9.077E+15 | Failed 200 || 1.502E+06 | Solved 6
Inverse Root Sum Squares || 1.083E+16 | Failed 200 || 9.600E+05 | Solved 6
Inverse Maximum          || 1.066E+16 | Failed 200 || 7.846E+05 | Solved 6
Inverse Minimum          || 7.543E+18 | Failed 200 || 5.599E+12 | Solved 16
```
Is it clear why only Inverse Minimum changed for the "perfect scaling" case? That seems a little odd to me (although percentage-wise it is a small change). Also, the user-scaling cases fail many more times - is this understood behavior, or just an observed response? (Does it need to be understood?)
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@           Coverage Diff            @@
##  scaling_toolbox    #1649      +/-  ##
===================================================
+ Coverage      77.01%   77.02%   +0.01%
===================================================
  Files            395      395
  Lines          63555    63689    +134
  Branches       10365    10405     +40
===================================================
+ Hits           48944    49056    +112
- Misses         12171    12184     +13
- Partials        2440     2449      +9
```
* rescue files from overloaded git branch
* fix due to api tweaks
* run black
* forgot to add a file
* fix test errors
* pin coolprop version
* update version, disable superancillaries
* run black
* respond to Marcus's feedback
* getting close
* address Will's comments
* tests for set_scaling_factor
* support for unions in python 3.9
* testing the scaling profiler is way too fragile
* modify test to be less fragile
* remove pdb
Summary/Motivation:
While the old scaling toolbox supported setting scaling factors for both `Var`s and `Expression`s, the Pyomo team indicated to us that we were abusing the scaling suffix by setting scaling factors for `Expression`s because they would not be exported to the NLP. However, using the `NominalValueExpressionWalker` for the sorts of giga-expressions that are regularly found in IDAES thermodynamic packages can be dangerous. As a result, I implemented a system of "scaling hints" that could be assigned to named `Expression`s, sequestering them in the `scaling_hint` suffix.

Additionally, I modified `call_submodel_scaler_method` so that it passes the parent `Scaler`'s configuration and map of submodel scalers to the child `Scaler` objects. Note that some child scalers were not written expecting the `submodel_scalers` argument, so this does break backwards compatibility, but I doubt that there has been any major adoption of these new scaling tools.

This resolves point (1) in #1648.
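As a loose illustration of the separation described in the summary, plain dicts stand in for Pyomo `Suffix` objects and the component names are made up:

```python
class Block:
    """Bare stand-in for a Pyomo Block."""

blk = Block()
blk.scaling_factor = {}  # exported to the NLP writer: Vars and Constraints only
blk.scaling_hint = {}    # never exported: holds hints for named Expressions

blk.scaling_factor["flow_mol"] = 1e-3  # scaling factor for a Var
blk.scaling_hint["enth_expr"] = 1e-6   # scaling hint for an Expression
```

Keeping `Expression` hints out of `scaling_factor` means the suffix the solver sees contains only entries it can actually use.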
Changes proposed in this PR:
* Pass `submodel_scalers` to child scalers in `call_submodel_scaler_method` of a parent scaler
* Add a `scale_constraint_by_component` method to `CustomScalerBase` in order to take the scaling factor from a `Var` or `Expression` and assign it to a constraint.
* Change `scale_variable_from_default` in order to avoid silent failures, as well as to make it clear to the user when their input into scaling is necessary.

Legal Acknowledgement
By contributing to this software project, I agree to the following terms and conditions for my contribution: