Evaluation subpackage and metric implementations #303

@jenspetersen

Description

I'd like to use this issue to discuss the structure of the evaluation part of MONAI, essentially what is currently `monai.metrics`. I plan to do the following in a PR, but would obviously like to coordinate with you first.

1. Rename the package to `monai.evaluation` to make it more generic. At first there will be two modules:
    1. `metrics`: implementations of metrics
    2. `util`: for now, a confusion matrix helper to hold TP, FP, FN, TN, and a wrapper to convert metrics into an `ignite.metrics.Metric`
2. Implement the most commonly used metrics:
    1. Confusion-matrix-based metrics (see https://en.wikipedia.org/wiki/Confusion_matrix for an overview)
    2. Hausdorff distance (95th percentile); medpy has an implementation
    3. Surface distance; medpy has an implementation
    4. Check medpy, NiftyNet, and Clara Train for other metrics (see also "Port metrics from niftynet and clara train" #85)
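To make the confusion matrix helper idea concrete, here is a minimal sketch of what such a class could look like. This is not an existing MONAI API; the class name, method names, and the `max(..., 1)` guard against empty denominators are all illustrative assumptions.

```python
import numpy as np

class ConfusionMatrix:
    """Hypothetical helper holding TP, FP, FN, TN for a pair of binary masks."""

    def __init__(self, pred, target):
        pred = np.asarray(pred, dtype=bool)
        target = np.asarray(target, dtype=bool)
        self.tp = int(np.sum(pred & target))    # predicted positive, actually positive
        self.fp = int(np.sum(pred & ~target))   # predicted positive, actually negative
        self.fn = int(np.sum(~pred & target))   # predicted negative, actually positive
        self.tn = int(np.sum(~pred & ~target))  # predicted negative, actually negative

    def dice(self):
        # Dice = 2*TP / (2*TP + FP + FN); guard avoids division by zero on empty masks
        return 2 * self.tp / max(2 * self.tp + self.fp + self.fn, 1)

    def sensitivity(self):
        # Sensitivity (recall, true positive rate) = TP / (TP + FN)
        return self.tp / max(self.tp + self.fn, 1)

    def specificity(self):
        # Specificity (true negative rate) = TN / (TN + FP)
        return self.tn / max(self.tn + self.fp, 1)
```

Holding the four counts in one place means every confusion-matrix-based metric (Dice, Jaccard, precision, etc.) becomes a one-line derived quantity, which is the main appeal of the `util` module.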
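For the 95th-percentile Hausdorff distance, a NumPy-only sketch over explicit point sets shows the underlying computation (medpy operates on binary volumes and extracts surface voxels first, which this sketch deliberately skips; the function name `hd95` and the symmetric percentile pooling are assumptions for illustration):

```python
import numpy as np

def hd95(pts_a, pts_b):
    """Illustrative 95th-percentile Hausdorff distance between two point sets.

    pts_a, pts_b: arrays of shape (N, D) and (M, D) of point coordinates.
    """
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    # Pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    # Directed nearest-neighbour distances in both directions
    a_to_b = d.min(axis=1)
    b_to_a = d.min(axis=0)
    # 95th percentile instead of the max makes the metric robust to outliers
    return float(np.percentile(np.hstack([a_to_b, b_to_a]), 95))
```

The same nearest-neighbour distances averaged instead of percentiled give the average surface distance, so both metrics could share one helper.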

I'm assuming you also have ideas about what this package should look like, as well as requirements I'm not aware of. Would love to hear your thoughts! I'm also somewhat biased towards segmentation; I'm guessing there are specialized metrics for other tasks that should be integrated as well.

Metadata


Labels

- Design discussions (related to the generic API designs)
- Module: metrics (metric for model quality assessments)
- WG: Evaluation (for the evaluation working group)
- enhancement (new feature or request)
