
Conversation

@lucyleeow (Member) commented May 8, 2025

Reference Issues/PRs

closes #25639 (supersedes)
closes #21391

I thought it would be easier to open a new PR, since there were no long discussions on the old PR (#25639).

What does this implement/fix? Explain your changes.

Adds a function that takes a scoring function, `y_true`, `y_score` and `thresholds`, and outputs the score per threshold.

The intention is to later add a new display class using this function, allowing us to plot the metric score per threshold, e.g.

[image: example plot of metric scores across decision thresholds]

Uses `_CurveScorer`, as suggested by @glemaitre. Refactors out a new `_scores_from_predictions` static method that takes the predictions. The old `_score` takes the estimator, calculates `y_score` and passes it to the new `_scores_from_predictions`.
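
To make the intent concrete, here is a minimal, self-contained sketch of what the proposed function computes - the metric evaluated on a grid of decision thresholds. The uniform grid mirrors the documented `thresholds=20` behaviour; the variable names and the use of `f1_score` are illustrative assumptions, not the PR's exact API:

import numpy as np
from sklearn.metrics import f1_score

# Illustration only, not the code added in this PR.
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])  # e.g. predict_proba[:, 1]

# 20 thresholds uniformly spaced between min and max of y_score.
thresholds = np.linspace(y_score.min(), y_score.max(), 20)
scores = np.array(
    [f1_score(y_true, (y_score >= t).astype(int)) for t in thresholds]
)
# scores[i] is the F1 score obtained when predicting the positive class for
# samples with y_score >= thresholds[i]; this is the curve the future display
# class would plot.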

Notes:

  • `_scores_from_predictions` - the name is inspired by the display class method `from_predictions`, but I'm happy to change it
  • it did not make sense to use `from_scorer` because we are not using a scorer (which has the signature `callable(estimator, X, y)`); we are using a scoring function with signature `score_func(y_true, y_pred, **kwargs)`
    • we instantiate `_CurveScorer` directly instead, and then call `_scores_from_predictions` in `decision_threshold_curve`
    • I decided to make `_scores_from_predictions` a static method, but I could also have made it an instance method, with `decision_threshold_curve` instantiating `_CurveScorer` directly (not via `from_scorer`). I only went with the staticmethod path because I initially didn't register that `from_scorer` instantiates differently than `_CurveScorer` does directly. Not 100% sure what is best here. [Realised that an instance method is nicer: it avoids having too many params in `decision_threshold_curve` and saves some lines of code (use `self.xx` directly instead of passing `self.xx` to `_scores_from_predictions`); see the toy sketch after this list.]
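
For illustration, here is a toy sketch (not scikit-learn's actual `_CurveScorer`; all names below are made up) contrasting the two designs discussed above - a static method that receives everything it needs versus an instance method that reads stored attributes:

import numpy as np

class CurveScorerSketch:
    """Toy stand-in used only to compare the two call styles."""

    def __init__(self, score_func, thresholds):
        self.score_func = score_func
        self.thresholds = thresholds

    @staticmethod
    def scores_from_predictions_static(score_func, y_true, y_score, thresholds):
        # Static method: every input has to be threaded through by the caller.
        return np.array(
            [score_func(y_true, (y_score >= t).astype(int)) for t in thresholds]
        )

    def scores_from_predictions(self, y_true, y_score):
        # Instance method: fewer parameters, reads self.score_func and
        # self.thresholds directly.
        return np.array(
            [self.score_func(y_true, (y_score >= t).astype(int)) for t in self.thresholds]
        )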

Any other comments?

cc @glemaitre

Lots more to do, but we should get the implementation right first.

To do:

  • Add tests
  • Add example
  • Review _decision_threshold.py module docstring

@glemaitre self-requested a review May 9, 2025 19:06
.. currentmodule:: sklearn.model_selection

.. _TunedThresholdClassifierCV:
.. _threshold_tunning:
@lucyleeow (Member Author):

I referenced this page starting here for `decision_threshold_curve` because I thought this first paragraph was appropriate for context, not just the section I added. Not 100% on this though, and happy to change.

.. _threshold_tunning:

==================================================
Tuning the decision threshold for class prediction
@lucyleeow (Member Author) commented May 15, 2025:

@glemaitre I can't comment where this is relevant (after L49), but I wonder if it would be interesting to add another scenario where threshold tuning may be of interest - imbalanced datasets?

good, or a loss function, meaning low is good. In the latter case, the
the output of `score_func` will be sign-flipped.
labels : array-like, default=None
@lucyleeow (Member Author):

`_CurveScorer` uses the term "classes", but "labels" is consistent with what is used for other classification metrics, so I chose this.

between the minimum and maximum of `y_score`. If an array-like, it will be
used as the thresholds.
greater_is_better : bool, default=True
@lucyleeow (Member Author):

Again, this is consistent with the term used in other metrics, so I avoided using "sign".

Comment on lines +1146 to +1160
# This could also be done in `decision_threshold_curve`, not sure which
# is better
y_true_unique = cached_unique(y_true)
if classes is None:
    classes = y_true_unique
# not sure if this separate error msg needed.
# there is the possibility that set(classes) != set(y_true_unique) fails
# because `y_true` only contains one class.
if len(y_true_unique) == 1:
    raise ValueError("`y_true` only contains one class label.")
if set(classes) != set(y_true_unique):
    raise ValueError(
        f"`classes` ({classes}) is not equal to the unique values found in "
        f"`y_true` ({y_true_unique})."
    )
@lucyleeow (Member Author):

These checks could be done in `decision_threshold_curve` instead; I'm not sure which is better.

        for th in potential_thresholds
    ]
    return np.array(score_thresholds), potential_thresholds
    # why 'potential' ?
@lucyleeow (Member Author):

Just for my education, why use the term "potential" in "potential_thresholds"? Is it because there is a possibility that a threshold is redundant, as the predicted labels may be the same for adjacent thresholds?

@lucyleeow (Member Author) commented:

@glemaitre I've highlighted questions in the code.

Tests are still to be added for `decision_threshold_curve`. Depending on where the validation checks are done, I don't think we need many value checks, as the function just uses `_CurveScorer` methods, which should be tested elsewhere.

@jeremiedbb (Member) left a comment:

Thanks for the PR @lucyleeow. I haven't looked at the whole PR yet. I'm wondering if there's a way to not have `thresholds` as a parameter.

Comment on lines 61 to 65
thresholds : int or array-like, default=20
    Specifies number of decision thresholds to compute score for. If an integer,
    it will be used to generate `thresholds` thresholds uniformly distributed
    between the minimum and maximum of `y_score`. If an array-like, it will be
    used as the thresholds.
@jeremiedbb (Member):

I believe we can compute the thresholds automatically, like in the other curve functions.
`score_func` is called on `y_true` (independent of the threshold) and `y_pred`, where `y_pred` is computed from `y_score` and the threshold.

If we sort `y_score` in ascending order, it's only where the value of the sorted `y_score` changes that we need to set a threshold. That way we compute the exact number of thresholds. We can probably leverage some code from `_binary_clf_curve`:

# sort scores and corresponding truth values
desc_score_indices = xp.argsort(y_score, stable=True, descending=True)
y_score = y_score[desc_score_indices]
y_true = y_true[desc_score_indices]
if sample_weight is not None:
    weight = sample_weight[desc_score_indices]
else:
    weight = 1.0

# y_score typically has many tied values. Here we extract
# the indices associated with the distinct values. We also
# concatenate a value for the end of the curve.
distinct_value_indices = xp.nonzero(xp.diff(y_score))[0]
threshold_idxs = xp.concat(
    [distinct_value_indices, xp.asarray([size(y_true) - 1], device=device)]
)

@lucyleeow (Member Author):

This is a good point; I have added a `thresholds=None` option that does the above.

Would you care to take a quick look at the overall PR before I add tests? I am not sure about a few things, thanks 🙏

@jeremiedbb (Member) commented:

Sorry for the delay; there was something puzzling me with the approach taken in this PR and I had a hard time figuring out precisely what it was. In particular, the need for `thresholds` and `greater_is_better` to be part of the API didn't feel right.

To me the issue is the use of the `_CurveScorer` to tackle this. We don't need to transform the metric into a scorer to compute the curve. What we need is a lot closer to `_binary_clf_curve`:

def _binary_clf_curve(y_true, y_score, pos_label=None, sample_weight=None):

The difference is that instead of computing the true positives and false positives, we'd compute any (threshold-dependent) metric. Briefly, this is done in 3 steps:

  • sort the arrays by y_score
  • compute the indices in y_score that will be the thresholds, i.e. at each index where the sorted y_score changes
  • compute the metric for these thresholds

I think that we can extract most of the `_binary_clf_curve` code into a new, more generic function, then make `_binary_clf_curve` call this new function and have `decision_threshold_curve` call it as well.
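
As a rough illustration of those three steps in plain NumPy (hypothetical names, not the refactoring that would actually be extracted from `_binary_clf_curve`):

import numpy as np
from sklearn.metrics import f1_score

def metric_per_threshold_sketch(y_true, y_score, score_func):
    # 1. sort the arrays by y_score (descending, as in _binary_clf_curve)
    order = np.argsort(y_score, kind="stable")[::-1]
    y_true, y_score = np.asarray(y_true)[order], np.asarray(y_score)[order]
    # 2. a threshold is only needed where the sorted y_score changes;
    #    also include the end of the curve
    distinct = np.nonzero(np.diff(y_score))[0]
    threshold_idxs = np.r_[distinct, y_true.size - 1]
    thresholds = y_score[threshold_idxs]
    # 3. compute the metric at each of these thresholds
    scores = np.array(
        [score_func(y_true, (y_score >= t).astype(int)) for t in thresholds]
    )
    return scores, thresholds

scores, thresholds = metric_per_threshold_sketch(
    [0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8], f1_score
)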

This is related to #30134 where the goal is to make _binary_clf_curve public and return the tns and fns as well.

@lucyleeow (Member Author) commented:

Interesting, I've reviewed #30134, hopefully that will be merged soon and I can look at your suggested changes after!

@lucyleeow (Member Author) commented:

@jeremiedbb I've tried your approach in a separate PR #32732, as I thought it was nice to be able to see the two implementations side by side, and also it would have been complicated, history/changes-wise, to do it all in this PR.

I think your suggested approach is better, please share your thoughts!

Development

Successfully merging this pull request may close these issues.

add sklearn.metrics Display class to plot Precision/Recall/F1 for probability thresholds

4 participants