Fix for Saving Evaluation Metrics in Loop Closure Pipeline #47
This PR resolves an issue where evaluation metrics were not being correctly saved during the execution of the loop closure pipeline. Specifically, `self.results.compute_closures_and_metrics()` was being called after the configuration was saved and the results were logged, resulting in incomplete or missing evaluation metrics in the output file (`evaluation_metrics.txt`). The change moves the `compute_closures_and_metrics()` call so that it runs before the configuration is saved and the results are logged, ensuring that all metrics are correctly calculated and included in the output.
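A minimal sketch of the reordering (only `compute_closures_and_metrics()` is taken from this PR; the pipeline class and helper names are hypothetical stand-ins):

```python
class LoopClosurePipeline:
    # Sketch of the fix: every name here except compute_closures_and_metrics()
    # is a hypothetical stand-in for the real pipeline code.

    def __init__(self, results):
        self.results = results

    def run(self):
        # The fix: compute the metrics *before* saving the config and
        # logging, so evaluation_metrics.txt contains the finished values.
        # Previously this call came after _save_config()/_log_results(),
        # leaving the output file empty.
        self.results.compute_closures_and_metrics()
        self._save_config()
        self._log_results()

    def _save_config(self):
        ...  # hypothetical: persist the run configuration

    def _log_results(self):
        ...  # hypothetical: write evaluation_metrics.txt from self.results
```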
### Issue Details

Before this fix, running the pipeline with the command:

produced an incomplete `evaluation_metrics.txt` file with no data:

After the fix, the metrics are properly calculated and saved, as shown below: