
Conversation

@colesbury
Member


class GLU(Module):
    r"""Applies the gated linear unit function :math:`{GLU}(a, b) = a \otimes \sigma(b)`,
    where `a` is the first half of the input vector and `b` is the second half.
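For a quick illustration of the behavior the docstring describes, here is a minimal usage sketch (it assumes the merged module exposes a `dim` argument defaulting to the last dimension, matching the functional signature shown below):

```
import torch
import torch.nn as nn

# GLU halves the chosen dimension: the first half of the input is
# gated elementwise by the sigmoid of the second half.
m = nn.GLU(dim=-1)
x = torch.randn(4, 6)
y = m(x)
print(y.shape)  # torch.Size([4, 3])
```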


def glu(input, dim=-1):
    if dim < 0:
        dim += input.dim()
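A hedged sketch of how this dimension normalization might be completed into a working functional `glu`; the even-size check and the chunk/sigmoid body are assumptions about the rest of the implementation, not necessarily what this PR landed:

```
import torch

def glu(input, dim=-1):
    # Normalize a negative dim (e.g. -1) to a positive axis index.
    if dim < 0:
        dim += input.dim()
    # The gate needs two equal halves along `dim` (assumed check).
    if input.size(dim) % 2 != 0:
        raise RuntimeError(
            "glu expects an even size along dim {}, got {}".format(dim, input.size(dim)))
    a, b = input.chunk(2, dim=dim)
    return a * torch.sigmoid(b)

print(glu(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```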

@apaszke
Contributor

apaszke commented Jun 13, 2017

Also, the functional interface should capitalize the name as well

@colesbury (Member Author) left a comment

We don't capitalize the names in functional interfaces (see relu, elu, etc.)
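For context, the convention being cited, as a small sketch (both spellings exist in the `torch.nn` / `torch.nn.functional` split):

```
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(3)

# Module classes are CamelCase; their functional counterparts are lowercase.
y1 = nn.ReLU()(x)
y2 = F.relu(x)
assert torch.equal(y1, y2)
```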


class GLU(Module):
    r"""Applies the gated linear unit function :math:`{GLU}(a, b) = a \otimes \sigma(b)`,
    where `a` is the first half of the input vector and `b` is the second half.


@soumith
Contributor

soumith commented Jun 14, 2017

this is now merged into master

@soumith closed this Jun 14, 2017
jjsjann123 added a commit to jjsjann123/pytorch that referenced this pull request Jun 30, 2022
Caching strides along with sizes. This is to support current expand, which introduces non-contiguous output tensor
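The non-contiguity this commit message refers to can be seen with plain `expand`, which creates a view with a zero stride rather than copying (standard PyTorch behavior, independent of the nvfuser internals):

```
import torch

x = torch.randn(3, 1)
y = x.expand(3, 4)        # no copy: the broadcast dimension gets stride 0
print(y.stride())         # (1, 0)
print(y.is_contiguous())  # False
```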
pytorchmergebot pushed a commit to jjsjann123/pytorch that referenced this pull request Jul 12, 2022
Caching strides along with sizes. This is to support current expand, which introduces non-contiguous output tensor
jjsjann123 added a commit to jjsjann123/pytorch that referenced this pull request Jul 15, 2022
Caching strides along with sizes. This is to support current expand, which introduces non-contiguous output tensor
jjsjann123 added a commit that referenced this pull request Jul 21, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Code changes include:

- codegen improvements:
  1. Indexing refactor -> Remove reference tensor in predicate indexing logic
  2. MMA Rfactor support for cross-warp and cross-CTA split on K dimension
  3. Grouping grid allreduces across iterations
  4. Swizzle op formulation for non-affine swizzles
  5. Use scheduler_utils to cache inputs and outputs in schedulePointwise
- scheduler refactor
  1. New compute at interface
- transformation propagation refactor on MaxInfoSpanningTree
  1. Added sibling path that is required to generate consistent replay for some cases where `MaxInfoSpanningTree` is used with a selector.
  2. Optimization to skip Transform propagator
  3. SpanningTreePrinter for debugging
- parser update
  1. Fixes `div`
  2. Added `_to_copy`
  3. Broadcast in dim with expand to support expanding to concrete size
  4. Dropout prob extremal patch
- executor patch on caching strides for output allocation

Squashed commits to work around the GitHub API.
Commits that are actually in this PR from the devel branch:

```
3b87896 Fix allocation of work buffers and `fused_reduction::ParallelReduce` with unswitch (#1818)
4cae122 schedulePointwise cleanup: - computeAt + InlinePropagator (#1815)
3df9742 Use scheduler_utils to cache inputs and outputs in schedulePointwise (#1811)
03180aa improve broadcast resolution (#1792)
bee6c69 bug fix (#1819)
4413c8f Support PYTORCH_NVFUSER_DUMP=transform_propagator (#1812)
de6b7ca Fix negative position in InlinePropagator (#1813)
10a996c Remove redundant check in schedulePointwise (#1810)
acd5ed4 Swizzle op formulation for non-affine swizzles (#1441)
3ed8330 Kernel args patch to show zero_init buffer (#1809)
037a75a Dropout prob extremal patch (#1804)
282c429 spam nvrtc options (#1783)
3ba6a5f Broadcast in dim with expand (#1794)
fd4be12 remove dead indexing code (#1806)
fa4e6a4 Check siblings in getMaxPosAll (#1805)
025c840 Grouping grid allreduces across iterations (#1755)
37c579e Temporarily disable test requring large shared memory. (#1802)
5f375d0 More cleanup on InlinePropagator (#1800)
8d384da Indexing refactor stage 2 : Remove reference tensor in predicate indexing logic (#1784)
f008140 MMA Rfactor support for cross-warp and cross-CTA split on K dimension (#1554)
76b3cca Add parsing support for `_to_copy` to handle AMP casts. (#1756)
ef04f6c Coding style cleanups (#1798)
38c7f3c InlinePropagator please don't replay (#1797)
3f2c263 validateDomain in TransformPropagator (#1796)
c077085 Use TransformPropagatorWithCheck in many tests (#1795)
d0d0908 Some further cleanup for the new computeAt interface (#1793)
45f5203 Fix TransformReplay::getMatchedLeafPosWithoutReplay* (#1791)
28cbaf9 New compute at interface (#1743)
635ebfc Add SpanningTreePrinter (#1786)
59f3c32 Output allocate patch (#1790)
fe93bf5 Transform propagator skip replay when possible (#1782)
ebf23a5 Fix isIntegralType error msg (#1789)
0c82ecf Disable register reuse across serial broadcast ops (#1787)
33a824d Adding sibling path for MaxInfoSpanningTree (#1776)
86f46aa Fix div(Val, TensorView) (#1778)
d3de227 Fix FusionMaxRootDomainInfoSpanningTreePrintTwice_CUDA (#1781)
ecc7a87 Extend mma dimension and layout checking to support strided batched matmul and tensor contractions (#1761)
```

[ghstack-poisoned]
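One item in the commit list above, 4413c8f, adds a PYTORCH_NVFUSER_DUMP option. A hedged sketch of exercising it; the TorchScript invocation and warm-up loop are assumptions about how the fuser gets triggered, and a CUDA build is required:

```
import os
# Must be set before any fusion compiles.
os.environ["PYTORCH_NVFUSER_DUMP"] = "transform_propagator"

import torch

@torch.jit.script
def fused(x):
    return torch.relu(x) * 2.0

x = torch.randn(8, device="cuda")
for _ in range(3):  # warm up so the profiling executor actually fuses
    fused(x)
```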
jjsjann123 added a commit that referenced this pull request Jul 21, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
jjsjann123 added a commit that referenced this pull request Jul 21, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
jjsjann123 added a commit that referenced this pull request Jul 21, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
jjsjann123 added a commit that referenced this pull request Jul 23, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
jjsjann123 added a commit that referenced this pull request Jul 23, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
jjsjann123 added a commit that referenced this pull request Jul 26, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
jjsjann123 added a commit that referenced this pull request Jul 26, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
jjsjann123 added a commit that referenced this pull request Jul 27, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
jjsjann123 added a commit that referenced this pull request Jul 27, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
jjsjann123 added a commit that referenced this pull request Jul 27, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
jjsjann123 added a commit that referenced this pull request Jul 27, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
jjsjann123 added a commit that referenced this pull request Jul 27, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/
pytorchmergebot pushed a commit that referenced this pull request Jul 28, 2022
Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Differential Revision: [D38043938](https://our.internmc.facebook.com/intern/diff/D38043938)
Pull Request resolved: #81861
Approved by: https://github.com/davidberard98
facebook-github-bot pushed a commit that referenced this pull request Jul 28, 2022
Summary:
Pull Request resolved: #81861

Syncing nvfuser devel branch to upstream master. https://github.com/csarofeen/pytorch/

Test Plan: Imported from OSS

Reviewed By: samdow

Differential Revision: D38043938

Pulled By: davidberard98

fbshipit-source-id: b94245f83dab6faee31e0c154d3b969bddeb3d47
akashveramd pushed a commit to akashveramd/pytorch that referenced this pull request Apr 9, 2025
* Fix universal gemm profiler for pk_i4_t

* fix