
Conversation

@suniique commented Oct 26, 2025

This PR adds an anchor-free mode for RPF training and inference.

We followed GARF’s data pipeline implementation, which uses an anchor-fixed setup (cf. GARF's model and its default config). However, the anchor-fixed assumption can be unrealistic in many real-world assembly scenarios. We therefore also trained an anchor-free version of RPF: in this mode the model is not given the anchor part’s pose in the assembled object’s CoM frame during training or inference.

Comparison between the Two Modes:

| Stage | Anchor Fixed | Anchor Free |
| --- | --- | --- |
| Training | Anchor: Global Rotation.<br>Non-anchor: Global Rotation + Part Centering + Part Rotation. | Anchor: Global Rotation + Part Centering.<br>Non-anchor: Global Rotation + Part Centering + Part Rotation. |
| Inference | Sampling the flow with the anchor's point cloud reset to GT at every step. | Sampling the flow without anchor resetting. |
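
For reference, here is a minimal sketch of the mode-dependent augmentation; the function name and structure are illustrative, not the actual code in rectified_point_flow/data/dataset.py:

```python
import numpy as np
from scipy.spatial.transform import Rotation


def augment_parts(parts: list[np.ndarray], anchor_idx: int, anchor_free: bool) -> list[np.ndarray]:
    """Illustrative per-part augmentation for the two training modes."""
    # Global Rotation: one random rotation shared by every part (both modes).
    global_rot = Rotation.random().as_matrix()
    out = []
    for i, pts in enumerate(parts):
        pts = pts @ global_rot.T
        # Part Centering: always for non-anchor parts; for the anchor only in
        # anchor-free mode, so its coordinates no longer encode the object CoM.
        if i != anchor_idx or anchor_free:
            pts = pts - pts.mean(axis=0, keepdims=True)
        # Part Rotation: non-anchor parts only (redundant for the anchor, see FAQ).
        if i != anchor_idx:
            pts = pts @ Rotation.random().as_matrix().T
        out.append(pts)
    return out
```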

Evaluation in Anchor-free Mode. To keep evaluation comparable between the anchor-fixed and anchor-free models,
we perform the following alignment steps at evaluation time for anchor-free predictions (a minimal sketch follows the list):

  1. Align the predicted anchor part to the GT anchor part using ICP.
  2. Apply the same rigid transformation to all predicted non-anchor parts.
  3. Evaluate the aligned whole point cloud against ground truth.
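
A minimal sketch of this alignment, assuming Open3D's point-to-point ICP; the actual align_anchor in rectified_point_flow/eval/metrics.py may use a different signature, initialization, or threshold:

```python
import numpy as np
import open3d as o3d


def align_to_gt_anchor(pred_parts, gt_anchor, anchor_idx, threshold=0.05):
    """Align the predicted anchor to the GT anchor with ICP, then apply the
    same rigid transform to every predicted part before computing metrics."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pred_parts[anchor_idx]))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(gt_anchor))
    # Step 1: ICP from the predicted anchor to the GT anchor (identity init).
    reg = o3d.pipelines.registration.registration_icp(
        src, tgt, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    T = reg.transformation  # 4x4 rigid transform
    R, t = T[:3, :3], T[:3, 3]
    # Step 2: apply the same rigid transform to all predicted parts.
    # Step 3 (metrics) is then computed on the returned, aligned point clouds.
    return [pts @ R.T + t for pts in pred_parts]
```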

FAQs:

Q: Why does the anchor-fixed mode leak GT information?
A: During data augmentation, the full object is globally centered to its CoM (center of mass) frame and then normalized to unit scale. If the anchor part is not independently re-centered (i.e., Part Centering), its coordinates implicitly encode the assembled object’s CoM.
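
A hypothetical numeric illustration of the leak (the array and offset below are made up): without Part Centering, the anchor's centroid directly reveals where it sits relative to the assembled object's CoM.

```python
import numpy as np

# Anchor points after the whole object is centered at its CoM and normalized,
# but without Part Centering (anchor-fixed pipeline).
anchor_in_object_frame = np.random.randn(1024, 3) * 0.1 + np.array([0.3, -0.1, 0.2])

# The centroid equals the anchor's GT offset from the object CoM,
# i.e. exactly where the anchor ends up in the final assembly.
leaked_offset = anchor_in_object_frame.mean(axis=0)

# Part Centering removes this signal: only the anchor's shape remains.
centered = anchor_in_object_frame - leaked_offset
assert np.allclose(centered.mean(axis=0), 0.0)
```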

Q: Why don't we apply Part Rotation to the anchor part in anchor-free mode?
A: Anchor-free training already randomly rotates all non-anchor parts, so the anchor doesn’t provide a fixed orientation prior. Applying an additional Part Rotation to the anchor is essentially redundant (it can be absorbed into the Global Rotation), so we omit it.

Q: Why do we still return the anchor_indices in anchor-free mode?
A: This is necessary for evaluation. At test time, we align the predicted anchor to the GT anchor using ICP and apply the same rigid transform to all predicted non-anchor parts before computing metrics. Note: we do not reset the model’s predicted anchor part to the GT in anchor-free mode.

Q: How does the anchor-free mode affect performance?
A: Training in anchor-free mode does not substantially hurt performance. In our tests, the anchor-free model shows some expected degradation due to (i) error propagation from anchor misalignment and (ii) ambiguity induced by symmetric anchor parts. Nonetheless, our model still performs strongly in the anchor-free setting. Notably, it significantly improves the anchor-free SOTA on BreakingBad-Everyday’s Part Accuracy from 76.2% (PuzzleFusion++) to 90.2%. See Appendix B of our paper for more results and details.

Code Changes:

  • Added an anchor_free parameter (default True) to rectified_point_flow/data/{dataset,datamodule}.py.
  • Modified the dataset transformation logic to handle anchor parts differently depending on the mode.
  • Added an align_anchor function to rectified_point_flow/eval/metrics.py and used it in the evaluator to align predicted anchor parts with ground truth via ICP in anchor-free mode.
  • Updated the default checkpoint path in sample.py to the anchor-free version.
  • Anchor-free mode can be toggled via model.anchor_free=true and data.anchor_free=true in the config (illustrated below); both are enabled by default.
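
An illustrative config fragment (the exact file layout may differ from the actual configs):

```yaml
# Assumed layout of the relevant keys; anchor-free is enabled by default.
model:
  anchor_free: true   # skip GT anchor resetting during flow sampling
data:
  anchor_free: true   # apply Part Centering to the anchor as well
```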

@suniique suniique requested a review from Zhu-Liyuan October 27, 2025 00:09
@suniique suniique merged commit 3c73421 into main Oct 27, 2025