Modify dynunet forward function #1596
Merged
wyli merged 2 commits into Project-MONAI:master on Feb 19, 2021
Conversation
Signed-off-by: Yiheng Wang <[email protected]>
wyli
approved these changes
Feb 19, 2021
Contributor
Thanks, this solution looks good to me. Just wanted to note that interpolate takes some additional arguments (https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.interpolate) that are not tunable in this PR. Perhaps we could expand the API to support them if we get further feature requests.
Nic-Ma
approved these changes
Feb 19, 2021
Contributor
Nic-Ma left a comment
Looks good to me.
Please also update the DynUNet tutorial notebook.
Thanks.
Contributor
Author
Signed-off-by: Yiheng Wang [email protected]
Fixes #1564.
Description
Hi @rijobro @wyli @Nic-Ma @ericspod, after the discussion I made some changes to the forward function of DynUNet. If we used the list-based return format, the default sliding window inferer could not work, so I decided to return a single tensor for both train and eval modes. This change solves the DDP issue and meets the restrictions of TorchScript.
The change is that in deep supervision mode, all feature maps are interpolated to the same size as the last feature map and then stacked together as a single tensor.
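The idea above can be sketched as follows. This is a minimal illustration of interpolating and stacking supervision heads, not MONAI's actual DynUNet code; the function name and the use of nearest-neighbor interpolation are assumptions for the sketch.

```python
import torch
import torch.nn.functional as F

def stack_supervision_heads(heads):
    """Resize every deep supervision feature map to the spatial size of
    the first (full-resolution) head, then stack them into one tensor.

    ``heads`` is a hypothetical list of tensors shaped (N, C, *spatial).
    Sketch only: illustrates the PR's idea, not MONAI's implementation.
    """
    target_size = heads[0].shape[2:]
    resized = [heads[0]] + [
        F.interpolate(h, size=target_size) for h in heads[1:]
    ]
    # Stack along a new dim 1 -> (N, num_heads, C, *spatial),
    # so a single tensor is returned instead of a list.
    return torch.stack(resized, dim=1)

x0 = torch.randn(2, 3, 32, 32)   # full-resolution head
x1 = torch.randn(2, 3, 16, 16)   # lower-resolution supervision head
out = stack_supervision_heads([x0, x1])
print(out.shape)  # torch.Size([2, 2, 3, 32, 32])
```

Returning one tensor rather than a Python list is what keeps the output compatible with TorchScript and with inferers that expect a single tensor.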
In the loss calculation step, the original implementation interpolated the ground truth to each feature map's size before computing the loss. In this PR's implementation, the loss is computed between the ground truth and each interpolated feature map. The two approaches differ slightly, but according to my simple test, the performance on task 04 is not reduced.
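A sketch of how the loss could be computed from the stacked output: each head is unbound from the stacked tensor and compared against the (already same-sized) ground truth. The function name, the uniform weighting, and the use of MSE here are illustrative assumptions, not the loss actually used for the segmentation task.

```python
import torch

def deep_supervision_loss(stacked, target, loss_fn, weights=None):
    """Weighted sum of ``loss_fn`` over each supervision head.

    ``stacked`` has shape (N, num_heads, C, *spatial), as produced by
    stacking interpolated heads; since every head already matches the
    ground truth's size, no resizing of ``target`` is needed.
    Sketch only: names and weighting are illustrative, not MONAI API.
    """
    heads = torch.unbind(stacked, dim=1)
    if weights is None:
        weights = [1.0 / len(heads)] * len(heads)
    return sum(w * loss_fn(h, target) for w, h in zip(weights, heads))

pred = torch.randn(2, 3, 3, 32, 32)  # batch of 2, 3 heads
gt = torch.randn(2, 3, 32, 32)
loss = deep_supervision_loss(pred, gt, torch.nn.MSELoss())
```

This is the small behavioral difference the description mentions: the ground truth stays at full resolution and the predictions are brought to it, rather than the other way around.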
Do you think we can change it in this way?
Status
Ready
Types of changes
- ./runtests.sh --codeformat --coverage
- ./runtests.sh --quick
- make html command in the docs/ folder