@maximpavliv
Contributor

Background

The likelihoods (confidences) produced by SimCCPredictor were not consistent with the expected behavior of SimCC decoding.

  • In some experiments, likelihoods exceeded 1, because the predictor was returning raw model outputs (logits) instead of proper probabilities.
  • When setting apply_softmax=True, likelihoods were extremely small (~1e-3). This is expected mathematically (softmax distributes probability mass across the entire SimCC space), but not useful as a visibility/confidence measure.
  • When setting normalize_outputs=True, most likelihoods were exactly 1, because normalization simply divided by the maximum logit value.

This mismatch originated from how the raw SimCC representation vectors (logits over discretized x/y coordinates) were converted into coordinates and visibility scores.
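The scale of the problem is easy to reproduce with a few lines of NumPy (a standalone sketch with made-up logits, not the DeepLabCut code): a plain softmax spreads probability mass across the whole discretized axis, so even a confidently localized keypoint ends up with a tiny peak value.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

# Hypothetical SimCC x-axis: 384 bins (e.g. 192 px input width * split ratio 2).
# A well-localized keypoint produces a Gaussian-shaped bump of logits.
bins = np.arange(384)
logits = np.exp(-0.5 * ((bins - 200) / 6.0) ** 2)  # peak logit is only 1.0

probs = softmax(logits)
print(probs.max())  # well below 0.01, despite a confident prediction
```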

See the original MMPose predictor implementation for reference.

Changes

This PR brings SimCCPredictor closer to the original MMPose decoding logic:

1. Added parameters

  • sigma: standard deviation of the Gaussian used to generate SimCC labels during training. It controls the expected sharpness of the label distribution.
  • decode_beta: scaling factor applied together with sigma to the logits before softmax. It acts like an inverse temperature in softmax: sharply peaked distributions (visible keypoints) saturate near confidence 1, while diffuse ones (occluded keypoints) keep a much lower peak probability.

Together, sigma * decode_beta ensures that visibility/confidence is decoded consistently with the label distribution used in training.
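The inverse-temperature effect can be illustrated as follows (a standalone sketch with synthetic logits and plausible hyperparameter values; check your model config for the actual sigma and decode_beta):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

bins = np.arange(384)
# Synthetic logits: a strong, sharp peak (visible keypoint)
# vs a low, broad one (occluded keypoint).
sharp = np.exp(-0.5 * ((bins - 200) / 6.0) ** 2)
diffuse = 0.3 * np.exp(-0.5 * ((bins - 200) / 40.0) ** 2)

sigma, decode_beta = 6.0, 150.0  # plausible values, assumed for this example

conf_sharp = softmax(sharp * sigma * decode_beta).max()
conf_diffuse = softmax(diffuse * sigma * decode_beta).max()
print(conf_sharp, conf_diffuse)  # near 1.0 vs a clearly lower value
```

With the scaling applied, the two cases separate cleanly: the visible keypoint decodes to a confidence near 1, while the occluded one stays well below 0.5.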

2. Likelihood computation

  • If normalize_outputs=False (which is the default): the raw logits are scaled by (sigma * decode_beta) before calling get_simcc_maximum with apply_softmax=True. This matches the MMPose decoding step and yields meaningful visibility/confidence scores in [0,1].
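Put together, decoding one SimCC axis can be sketched like this (a simplified stand-in for the get_simcc_maximum step; the function name and defaults here are hypothetical, not the DeepLabCut API):

```python
import numpy as np

def decode_simcc_axis(logits, sigma=6.0, decode_beta=150.0, simcc_split_ratio=2.0):
    """Simplified sketch: scale logits by sigma * decode_beta, apply softmax,
    then read off the peak location (coordinate) and peak probability (confidence)."""
    scaled = logits * sigma * decode_beta
    e = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs = e / e.sum()
    idx = int(probs.argmax())
    # Divide the bin index by the split ratio to recover pixel coordinates.
    return idx / simcc_split_ratio, float(probs[idx])

bins = np.arange(384)
logits = np.exp(-0.5 * ((bins - 200) / 6.0) ** 2)  # synthetic peak at bin 200
coord, conf = decode_simcc_axis(logits)
print(coord, conf)  # coordinate 100.0, confidence close to 1 and bounded in [0, 1]
```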

3. Default behavior

  • apply_softmax now defaults to True (to ensure proper probabilities rather than raw logits are returned).
  • normalize_outputs still defaults to False.

Outcome

  • Likelihoods are now properly bounded in [0,1].
  • The values reflect both the sharpness of the SimCC distribution and the visibility of the keypoint, consistent with the SimCC paper and MMPose implementation.
  • Users can still enable normalize_outputs for compatibility with previous behavior, but it is discouraged for real confidence estimation.

Copilot AI left a comment

Pull Request Overview

This PR fixes the likelihood computation in SimCCPredictor to make it consistent with the MMPose decoding logic. The main issue was that likelihoods were either exceeding 1 (raw logits were returned) or extremely small (an unscaled softmax was applied across the entire SimCC space).

  • Updates SimCCPredictor to use sigma and decode_beta parameters for proper likelihood scaling
  • Changes default behavior to apply_softmax=True for proper probability computation
  • Adds new sigma and decode_beta parameters to RTMPose configuration files

Reviewed Changes

Copilot reviewed 33 out of 33 changed files in this pull request and generated 3 comments.

  • deeplabcut/pose_estimation_pytorch/models/predictors/sim_cc.py: core fix; adds sigma/decode_beta scaling and changes the default apply_softmax to True
  • Multiple RTMPose config files: add sigma and decode_beta parameters to model configurations
  • Multiple other files: cleanup of superanimal_humanbody-specific code and infrastructure improvements


@MMathisLab
Member

cc @maximpavliv can you address Copilot's suggestions?

@maximpavliv
Contributor Author

cc @maximpavliv can you address Copilot's suggestions?

@MMathisLab yes, done!

@AlexEMG AlexEMG merged commit 561d5f4 into main Sep 16, 2025
4 of 5 checks passed

Labels

bug fix! fix for a real buggy one... DLC3.0🔥


4 participants