Line losses with absolute approximation error#1470

Closed
lindnemi wants to merge 3 commits into master from improve-line-losses
Conversation

@lindnemi
Contributor

@lindnemi lindnemi commented Dec 8, 2025

Closes #1320.

Changes proposed in this Pull Request

If line losses are enabled, changing the capacity maximum for lines, s_nom_max, affects the results of the optimization even if the constraint is not binding (see #1320). The reason is that the tangents which approximate the quadratic loss depend on s_nom_max. More precisely: currently the user specifies a number N of tangents to use for the approximation, and the interval [0, s_nom_max] is divided into N segments, each corresponding to one tangent. If s_nom_max is changed, these intervals change, and so do the corresponding tangents. This PR suggests specifying an absolute tolerance for the approximation error instead and using that tolerance to partition [0, s_nom_max]. In this case, if s_nom_max is changed, the number of tangents in the approximation changes, but the tangents that already existed stay the same.

Let $\epsilon$ be the absolute error tolerance, $r x^2$ be the loss parabola, and $2 r x_0 x - r x_0^2$ be its tangent at the point $x_0$. Consider the points where the approximation error is exactly $\epsilon$, that is, $r x^2 - 2 r x_0 x + r x_0^2 = \epsilon$. The roots of this polynomial are $x_0 \pm \sqrt{\epsilon/r}$, so the approximation error is smaller than $\epsilon$ on the interval $[x_0 - \sqrt{\epsilon/r}, x_0 + \sqrt{\epsilon/r}]$. Thus, if we pick the tangents at $x_0 = \dots, -4\sqrt{\epsilon/r}, -2\sqrt{\epsilon/r}, 0, 2\sqrt{\epsilon/r}, 4\sqrt{\epsilon/r}, \dots$ we can be sure that the absolute error of the approximation is smaller than $\epsilon$.

From this we can compute the number of tangents needed for a conservative loss approximation with error at most $\epsilon$ by partitioning the interval [-s_nom_max, s_nom_max] into segments of length $2\sqrt{\epsilon/r}$. If s_nom_max is very large or $\epsilon$ is very small, the number of tangents may become large. To avoid performance hits in that case, I added a warning.
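The tangent count described above can be sketched as follows (function name, threshold, and parameter values are assumptions for illustration, not the PR's actual code):

```python
import math
import warnings

def n_tangents(s_nom_max, r, eps, warn_above=100):
    """Number of tangents needed to cover [-s_nom_max, s_nom_max]
    with segments of length 2*sqrt(eps/r), so the absolute
    approximation error stays below eps everywhere."""
    segment = 2 * math.sqrt(eps / r)
    n = math.ceil(2 * s_nom_max / segment)
    if n > warn_above:
        warnings.warn(
            f"{n} tangents in the loss approximation may slow down the "
            "optimization; consider a larger error tolerance eps."
        )
    return n

# segment = 2*sqrt(0.5/0.01) ≈ 14.14, so ceil(2000 / 14.14) = 142 tangents
print(n_tangents(s_nom_max=1000.0, r=0.01, eps=0.5))  # 142 (and a warning)
```

Note that the segment boundaries depend only on $\epsilon$ and $r$, so enlarging s_nom_max only appends tangents at the ends of the interval; the existing ones are unchanged, which removes the side effect reported in #1320.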

This PR is orthogonal to #1409. It does not primarily improve the quality of the approximation, but fixes the side-effects of changing s_nom_max.

Checklist

This is a draft of the proposed change, if your feedback is positive I will polish it up.

  • Code changes are sufficiently documented; i.e. new functions contain docstrings and further explanations may be given in docs.
  • Unit tests for new features were added (if applicable).
  • A note for the release notes docs/release-notes.md of the upcoming release is included.
  • I consent to the release of this PR's code under the MIT license.

@lindnemi lindnemi requested review from Irieo and fneum December 8, 2025 16:46
@lindnemi lindnemi added the needs discussion Requires discussion before further action label Dec 8, 2025
@lindnemi lindnemi mentioned this pull request Dec 16, 2025
@lindnemi
Contributor Author

Superseded by #1495

@lindnemi lindnemi closed this Dec 16, 2025
Successfully merging this pull request may close these issues.

Changing s_nom_max affects line losses

1 participant