Conversation

@DmitryUlyanov (Contributor)

Implements instance normalization modules (1d, 2d, 3d). Based on batch norm; an affine transform is supported but switched off by default.
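For orientation, a minimal usage sketch of the new modules as they exist in torch.nn (the random input is purely illustrative):

    import torch
    import torch.nn as nn

    # Instance norm normalizes each (sample, channel) slice independently.
    # The affine transform is off by default; affine=True adds learnable
    # per-channel weight and bias.
    norm = nn.InstanceNorm2d(3)                      # plain normalization
    norm_affine = nn.InstanceNorm2d(3, affine=True)  # with learnable scale/shift

    x = torch.randn(4, 3, 8, 8)                      # (N, C, H, W)
    y = norm(x)                                      # same shape as x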

@soumith (Contributor) left a comment


almost looks good. Please check inline comments.


Inline review comments (each since marked as off-topic) were attached to the following lines of the diff:

    return out.view(b, c, *input.size()[2:])

    def force_eval(self):

    bias = self.bias.repeat(b)

    # Apply instance norm
    input_reshaped = input.view(1, b * c, *input.size()[2:])
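The fragments above capture the core trick of the implementation: instance norm over a (b, c, ...) input is computed as batch norm over a (1, b*c, ...) reshape, with the per-channel parameters repeated b times. A standalone sketch of that idea (the function name instance_norm and the exact F.batch_norm call are illustrative, not the PR's verbatim code):

    import torch
    import torch.nn.functional as F

    def instance_norm(input, weight=None, bias=None, eps=1e-5):
        # Fold batch and channel dims together so that batch norm's
        # per-channel statistics become per-(sample, channel) statistics.
        b, c = input.size(0), input.size(1)
        input_reshaped = input.view(1, b * c, *input.size()[2:])

        # Repeat the per-channel affine parameters once per sample.
        weight = weight.repeat(b) if weight is not None else None
        bias = bias.repeat(b) if bias is not None else None

        out = F.batch_norm(
            input_reshaped, None, None, weight, bias,
            training=True,  # always use the current input's statistics
            momentum=0.0, eps=eps,
        )
        return out.view(b, c, *input.size()[2:])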

@soumith soumith merged commit fa4f363 into pytorch:master Apr 23, 2017
@soumith (Contributor) commented Apr 23, 2017

thank you!

@mys007 (Contributor) commented Sep 16, 2017

As training=False is fixed in the call to F.batch_norm(), is it really necessary to:

  1. keep running stats at all? Their upkeep as buffers could be dropped (they could be allocated anew in `forward()` each time), along with the code that repeats them per sample and averages them back.
  2. offer momentum in the constructor?
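Illustrating the point with today's API (where running-stats tracking eventually became opt-in and defaults to off): the module's output matches plain per-(sample, channel) normalization computed from the input alone, so running buffers never influence the result.

    import torch
    import torch.nn as nn

    m = nn.InstanceNorm2d(3)          # no affine, no tracked running stats
    x = torch.randn(2, 3, 8, 8)
    out = m(x)

    # Manual per-(sample, channel) normalization for comparison.
    mean = x.mean(dim=(2, 3), keepdim=True)
    var = x.var(dim=(2, 3), unbiased=False, keepdim=True)
    manual = (x - mean) / torch.sqrt(var + m.eps)

    print(torch.allclose(out, manual, atol=1e-6))  # True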

jjsjann123 added a commit to jjsjann123/pytorch that referenced this pull request Dec 5, 2021
Only allows horizontal fusion across tensor inputs. This prevents accidental fusion of operations that share constant scalar inputs (e.g. casting operations).