
Conversation

@jramseyer
Contributor

Fixing #8518

Sorry for the pile of commits; I forgot to rebase.

Viswanath Sivakumar and others added 30 commits July 30, 2018 16:02
…compatibility (pytorch#9403)

Summary:
Pull Request resolved: pytorch#9403

In BBoxTransform and GenerateProposal ops, clip_boxes makes sure the bbox fits
within the images. For rotated boxes, this doesn't always make sense as there
could be multiple ways to clip a rotated box within an image boundary.
Moreover, clipping to a horizontal box means we could potentially leave out pixels of interest.
Therefore, we clip only boxes with an angle almost equal to 0 (within a
specified `angle_thresh` tolerance).

Reviewed By: pjh5

Differential Revision: D8828588

fbshipit-source-id: 39c1eafdb5d39d383780faa0a47e76149145e50c
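The clipping rule described above can be sketched in a few lines. This is an illustrative Python sketch only, not the actual Caffe2 C++ op; it assumes a simplified `(x1, y1, x2, y2, angle)` box layout, and the name `clip_boxes_rotated` is made up:

```python
def clip_boxes_rotated(boxes, img_h, img_w, angle_thresh=1.0):
    """Clip only near-horizontal boxes to the image boundary.

    Boxes whose |angle| exceeds angle_thresh are left untouched, since
    there is no single natural way to clip a rotated box against an
    axis-aligned image boundary.
    """
    clipped = []
    for x1, y1, x2, y2, angle in boxes:
        if abs(angle) <= angle_thresh:
            x1 = min(max(x1, 0.0), img_w - 1.0)
            y1 = min(max(y1, 0.0), img_h - 1.0)
            x2 = min(max(x2, 0.0), img_w - 1.0)
            y2 = min(max(y2, 0.0), img_h - 1.0)
        clipped.append((x1, y1, x2, y2, angle))
    return clipped
```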
Summary:
Enable fusion for IDEEP in optimizeForIdeep
including Conv+ReLU, Conv+Sum, Conv+Sum+ReLU, Conv+BN
Pull Request resolved: pytorch#9255

Reviewed By: bddppq

Differential Revision: D8809030

Pulled By: yinghai

fbshipit-source-id: af30bad3b96cb965bd26a4dfa810370faec4bb88
Summary:
I noticed that `Sequential::clone()` does not work. This is because `Sequential` does not use `reset()`, which is normally where modules initialize and register their submodules. In turn, this is because of the way `Sequential` allows its modules to be passed in the constructor, which doesn't work with `reset()` (since it does "late" initialization).

I've added some better error messages inside `Cloneable::clone()` which makes this kind of mistake clearer for other users, and tests for `Sequential::clone()`.

I also had to give `AnyModule` a deep `clone()` method.

ebetica ezyang
Pull Request resolved: pytorch#9372

Differential Revision: D8865189

Pulled By: goldsborough

fbshipit-source-id: b81586e0d3157cd3c4265b19ac8dd87c5d8dcf94
Summary:
onnx/onnx@b2817a6
Pull Request resolved: pytorch#9476

Reviewed By: houseroad

Differential Revision: D8868253

Pulled By: bddppq

fbshipit-source-id: b1f14bab47f020f0bc0239da7e2bbf959a407d6a
Summary:
This change makes README.md compatible with both the GitHub and VSTS markdown engines. Images can be reduced in size if necessary.
Pull Request resolved: pytorch#9296

Differential Revision: D8874931

Pulled By: soumith

fbshipit-source-id: 0c530c1e00b06fc891301644c92c33007060bf27
Summary:
1. Added tests
2. Added doc string
3. Remove view_as redundant definition from tensor.py

Closes pytorch#9416
Pull Request resolved: pytorch#9452

Differential Revision: D8851794

Pulled By: ezyang

fbshipit-source-id: 0aa0430dd0a174e1a5caddbc50a7e2c9eb7802bc
…ytorch#9475)

Summary:
test_cuda.py uses the routine 'number' to prepare many test cases.
number should return a floating-point value for float tensor
types, or an integer otherwise. But number's test to classify the type
is incorrect, so it always returns the integer value.
(type(t).__name__ is always 'torch.tensortype', so it never matches
'Double', 'Float', or 'Half'.)

Update number to use the existing is_floating() helper to make the
check.

The change to number causes a few tests to fail for HalfTensor. Relax
the tolerance for those in line with other HalfTensor testcases. The
failing tests--for addcdiv and fill--were not previously relaxed for
HalfTensor so are held to the over-strict 1e-5 default tolerance.

Finally, update a couple other tests for HalfTensor type to use the
existing is_half() helper.
Pull Request resolved: pytorch#9475

Reviewed By: yf225

Differential Revision: D8872112

Pulled By: ezyang

fbshipit-source-id: 016e3e15adb23f6606bd4c08218954c1396699db
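The fix amounts to dispatching on a floating-point predicate instead of string-matching a mangled type name. A simplified sketch; the helper and type names here are stand-ins, not the actual test-suite code:

```python
FLOAT_TYPE_NAMES = {'Double', 'Float', 'Half'}

def is_floating(type_name):
    # Stand-in for the test suite's is_floating() helper.
    return type_name in FLOAT_TYPE_NAMES

def number(floating, integer, type_name):
    """Return the float value for float tensor types, the int otherwise.

    The buggy version inspected type(t).__name__, which never matched
    'Double', 'Float', or 'Half', so the integer branch always won.
    """
    return floating if is_floating(type_name) else integer
```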
Summary:
Pull Request resolved: pytorch#9497

Fixes pytorch#7883 by using `rfft`.

It's worth noting that this is BC-breaking. And the change is impossible to detect because the two signatures, before and after this change, support a common subset of calling patterns, e.g., `stft(Tensor, int, int)`. (Some other calling patterns will raise an error.)

soumith and I plan to change the current `stft` interface because it is a bit messy and non-standard. rafaelvalle suggested that `librosa` is a good reference API to align with. After discussing with soumith and ezyang, and given that `stft` has only been out for 1 release, I decided to go with directly changing the signature. Also, my understanding is that most researchers in this field will welcome this change, as `librosa` seems to be the gold standard here. (It doesn't yet support all `pad_mode` values, but those will become available if added to `F.pad`.)
Pull Request resolved: pytorch#9308

Reviewed By: ezyang

Differential Revision: D8806148

Pulled By: SsnL

fbshipit-source-id: f6e8777d0c34d4a4d7024e638dc9c63242e8bb58
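The core of the `rfft`-based approach can be illustrated with NumPy. This is a minimal sketch (no centering, padding, or windowing, all of which the real `torch.stft` handles), not the actual implementation:

```python
import numpy as np

def stft_sketch(signal, n_fft, hop_length):
    # Slide a frame of n_fft samples along the signal and take the
    # one-sided FFT of each; rfft returns n_fft // 2 + 1 bins because
    # the input is real-valued.
    frames = [np.fft.rfft(signal[start:start + n_fft])
              for start in range(0, len(signal) - n_fft + 1, hop_length)]
    return np.stack(frames)  # shape: (num_frames, n_fft // 2 + 1)
```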
Summary:
This issue was fixed in 976f925

Fixes pytorch#5311.

Signed-off-by: Edward Z. Yang <[email protected]>
Pull Request resolved: pytorch#9498

Differential Revision: D8875605

Pulled By: ezyang

fbshipit-source-id: 449ffe975d35c959f92874437ba9be37d4d3a1f2
Summary:
It was only used to toggle refcounting, but we ALWAYS
refcount tensors.

Signed-off-by: Edward Z. Yang <[email protected]>
Pull Request resolved: pytorch#9494

Differential Revision: D8875169

Pulled By: ezyang

fbshipit-source-id: 3a8618fb288334e62942bbaf388f3c9e473e7524
Summary: Pull Request resolved: pytorch#9485

Reviewed By: houseroad

Differential Revision: D8873733

Pulled By: bddppq

fbshipit-source-id: 3a3cc351834cbbedce360760504ea16f5fa0ea06
Summary:
Pull Request resolved: pytorch#9480

Ops like Reshape sometimes take a second input tensor of long with the new
shape (can also be specified in arg). If this input tensor is passed in via
external input (which ONNX does sometimes), LoadOp fails with an exception.

Such ops anyway are executed by IDEEPFallbackOp, so this should be fine.

Reviewed By: yinghai

Differential Revision: D8872671

fbshipit-source-id: 659a02416c374e373ce041a7d65a174be828702d
pytorch#9482)

Summary:
…ors (CPU).

This includes (mainly) CPU fixes; CUDA fixes are a little more involved because you can't use an empty grid.
This also includes a fix for index_copy, which checked that self.size(dim) == src.size(0); that isn't correct, since the same dimension of src should be compared.
Finally, also includes a fix for CUDA flip (although it's not tested yet), to get the stride using multiplication rather than division to avoid divide-by-0.
Pull Request resolved: pytorch#9482

Reviewed By: ezyang

Differential Revision: D8873047

Pulled By: gchanan

fbshipit-source-id: 86523afd3d50277834f654cd559dfbc7875cdffe
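The corrected check can be sketched in Python over shape tuples. This is illustrative only; the real check lives in ATen's C++ code, and the function name here is invented:

```python
def check_index_copy_shapes(self_shape, src_shape, dim, num_indices):
    """Shape check for an index_copy_-style op.

    The buggy check compared self_shape[dim] to src_shape[0]; the
    correct check requires src_shape[dim] == num_indices and every
    other dimension of src to match self.
    """
    if len(self_shape) != len(src_shape):
        raise ValueError("self and src must have the same number of dims")
    if src_shape[dim] != num_indices:
        raise ValueError("src.size(dim) must equal the number of indices")
    for d, (a, b) in enumerate(zip(self_shape, src_shape)):
        if d != dim and a != b:
            raise ValueError(f"size mismatch at dim {d}: {a} vs {b}")
```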
Summary:
ebetica asked for a way to add parameters to `Optimizer`s after they are created.

ebetica ezyang
Pull Request resolved: pytorch#9472

Differential Revision: D8872176

Pulled By: goldsborough

fbshipit-source-id: 39a4032c519a6d3b458dd3596361b04afea10365
Summary:
This is enabled by the allocator patch; previously we could not
deduplicate THStorage_free/THCStorage_free; now we can.

Pull Request resolved: pytorch#9495

Reviewed By: SsnL

Differential Revision: D8875497

Pulled By: ezyang

fbshipit-source-id: 387198dff446eb9f84d2d6187066fae1d595dea7
Summary:
Pull Request resolved: pytorch#9017

Closes pytorch#9017

Added `get_blob_size_bytes` to `pybind_state.cc` in Caffe2 to expose the size of a blob in bytes.

Reviewed By: kuttas

Differential Revision: D8685696

fbshipit-source-id: 9a9d38f207c8c59ef534217181e8ce1514617628
Summary: Pull Request resolved: pytorch#9470

Reviewed By: pjh5

Differential Revision: D8826713

fbshipit-source-id: 47674af86b3a5ae0752056faf3b93f0d96e38fc2
Summary:
This implements per-channel alpha_dropout, creates the corresponding function classes, and unifies the dropout and alpha_dropout code paths.
Pull Request resolved: pytorch#9073

Differential Revision: D8727008

Pulled By: ezyang

fbshipit-source-id: 9d509f9c5db4e98f7b698cdfc4443505a4d2b331
Summary:
If this is good, I could write some tests to ensure collisions don't occur within a given range.

Closes pytorch#7228
Pull Request resolved: pytorch#9246

Differential Revision: D8872608

Pulled By: ezyang

fbshipit-source-id: 0ed29a73188f4167b42756f59a5c9a3d5cb37326
…nd (pytorch#9458)

Summary:
Pull Request resolved: pytorch#9458

The goal is to support count_include_pad in Caffe2 ONNX backend. This commit contains the first step - support 4-D tensor cases.
AveragePool with count_include_pad can be expressed as PadImage + AveragePool.

Reviewed By: houseroad

Differential Revision: D8852180

fbshipit-source-id: 4db00e9771be7a000a2d92850dfd066d9c9c38bf
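The identity AveragePool(count_include_pad=True) = AveragePool(Pad(x)) can be checked with a naive pooling sketch. This is a 1-D, stride-1, zero-padding illustration with invented names, not the Caffe2 implementation:

```python
def avg_pool1d(xs, k, pad=0, count_include_pad=False):
    """Naive 1-D average pooling with stride 1 (illustration only)."""
    padded = [0.0] * pad + list(xs) + [0.0] * pad
    out = []
    for i in range(len(padded) - k + 1):
        window = padded[i:i + k]
        if count_include_pad:
            denom = k  # padded positions count toward the average
        else:
            # count only positions inside the original (unpadded) input
            denom = sum(1 for j in range(i, i + k)
                        if pad <= j < pad + len(xs))
        out.append(sum(window) / denom)
    return out

def pad_then_pool(xs, k, pad):
    # Explicit zero-pad followed by a pool with no padding: once the
    # zeros are real input, the plain average already "includes pad".
    padded = [0.0] * pad + list(xs) + [0.0] * pad
    return avg_pool1d(padded, k, pad=0, count_include_pad=True)
```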
Summary:
Pull Request resolved: pytorch#9126

Closes pytorch#9126

Allow concurrent reads and writes in the dispatcher table

Reviewed By: smessmer

Differential Revision: D8722560

fbshipit-source-id: e376bcd59f1b9f6b0e6fd3dd376a55561ea3c9c3
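One common way to allow concurrent reads alongside writes on a dispatch table is copy-on-write with an atomic reference swap. The sketch below shows the idea in Python; the actual Caffe2 dispatcher uses its own C++ synchronization, and these names are invented:

```python
import threading

class DispatchTable:
    """Copy-on-write table: reads are lock-free, writes copy under a lock.

    Readers always see a consistent snapshot because the dict reference
    is replaced atomically and the old dict is never mutated in place.
    """
    def __init__(self):
        self._table = {}
        self._write_lock = threading.Lock()

    def lookup(self, key):
        # No lock needed: reading self._table is an atomic reference load.
        return self._table.get(key)

    def register(self, key, fn):
        with self._write_lock:
            new_table = dict(self._table)  # copy, mutate, then swap
            new_table[key] = fn
            self._table = new_table
```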
Summary:
Stacked on pytorch#9495
Pull Request resolved: pytorch#9496

Differential Revision: D8875528

Pulled By: ezyang

fbshipit-source-id: 6419d2ffb07aaf49c1462e7b64737019abbb7f61
Summary:
- I ran into this a couple of days ago and thought it might be useful to take note of it.
Pull Request resolved: pytorch#9504

Reviewed By: soumith

Differential Revision: D8887396

Pulled By: weiyangfb

fbshipit-source-id: d2061cf379ce140d6e43ef6c18241f7ce00dbab6
Summary:
Signed-off-by: Edward Z. Yang <[email protected]>
Pull Request resolved: pytorch#9516

Differential Revision: D8886493

Pulled By: ezyang

fbshipit-source-id: fea974fd96c7d81126a129eb5b8b06eb1b028526
…ytorch#9520)

Summary:
Pull Request resolved: pytorch#9520

Add random data filler to predictor bench to support production nets

Reviewed By: salexspb

Differential Revision: D8712757

fbshipit-source-id: 2c732b2ba71ab210f9222adf94d08442ca71dc03
Summary: Pull Request resolved: pytorch#9523

Differential Revision: D8890124

Pulled By: soumith

fbshipit-source-id: dea8d153fc352c36b219298c52f2c97caf9999f4
…h#9524)

Summary:
This command (suggested by albanD when I raised a related question in the PyTorch Slack) is super useful to me. I have used it several times and it has worked like a charm (without it, I would have to delete the entire pytorch folder and clone everything again). So I think it is nice to have in the CONTRIBUTING doc.
Pull Request resolved: pytorch#9524

Differential Revision: D8890126

Pulled By: soumith

fbshipit-source-id: c1798ff1ab2423627fcd8e0662a66c4e85cb2413
Summary:
This PR contains the ROCm contributions of last week:
* documentation of the pyHIPIFY data format, originating from pytorch#8812 review comments by ezyang
* removal of most patch files from the `amd_build` directory and integration into the code base
* enabling of previously disabled features that now compile
* improvement to the static_cast feature in pyHIPIFY (it will only apply `static_cast` to kernel arguments, not launch arguments)
* addition of two workarounds to pyHIPIFY for ROCm/HIP shortcomings: a) `__forceinline__` does not imply `static`, hence the change to `__inline__`; b) `std::[exp,log,pow]` math functions cannot be selected in device code, so `::[exp,log,pow]` is used instead. Both of these workarounds will be removed once the issues are fixed upstream. Neither of these issues has surfaced on the CI, but both were reproduced internally.
Pull Request resolved: pytorch#9432

Differential Revision: D8887441

Pulled By: ezyang

fbshipit-source-id: 71cf5c6b13772a66d10be369a45ebf06e4e268e1
Summary:
A 0-dimensional tensor is now returned when squeezing a tensor with a single element.
Pull Request resolved: pytorch#9529

Differential Revision: D8893103

Pulled By: soumith

fbshipit-source-id: 658189ecfff283b2b7281feb16a397692d6dbd8f
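The new behavior matches NumPy, where squeezing a single-element array removes all size-1 dimensions and yields a 0-dimensional scalar. A quick NumPy illustration of the analogous semantics (not the PyTorch test itself):

```python
import numpy as np

t = np.array([[5.0]])   # shape (1, 1): a tensor with a single element
s = t.squeeze()         # all size-1 dims removed -> 0-dimensional
assert s.shape == () and s.ndim == 0
assert float(s) == 5.0
```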
Summary:
fixes pytorch#9132
Pull Request resolved: pytorch#9487

Reviewed By: soumith

Differential Revision: D8875529

Pulled By: SsnL

fbshipit-source-id: d1b8aa825d202cfbdca27897da6a8bc1b714f856
Yinghai Lu and others added 9 commits July 31, 2018 11:16
Summary:
Which test should I look at, bddppq?
Pull Request resolved: pytorch#10022

Reviewed By: bddppq

Differential Revision: D9068732

Pulled By: yinghai

fbshipit-source-id: 241ef72c7fac0ed0b8c58ecdffbb5e24eb956217
Summary:
Pull Request resolved: pytorch#9890

Minor cleanups for Graph.h to make it more consistent with our style guide

Also fix opt/device.cc and binary_match_test.cc to not access subgraph.nodes_ which is now private

Reviewed By: bwasti

Differential Revision: D9017108

fbshipit-source-id: 9f5cba4a2cd2a452a955005f4704f6c120bbc1d5
Reviewed By: Maratyszcza

Differential Revision: D9068091

fbshipit-source-id: 4aeac45f9732a86979a08488637bf0ba6cc79b34
Summary: Pull Request resolved: pytorch#10039

Reviewed By: houseroad

Differential Revision: D9074261

Pulled By: bddppq

fbshipit-source-id: 26df516633d5a4ec539a03a62cf9e7839e1e1964
Summary:
I was dumb lol
Pull Request resolved: pytorch#10047

Differential Revision: D9076023

Pulled By: bddppq

fbshipit-source-id: 10587875d04ac2aed2e015846fc73ce9e4717a4f
Summary:
ATenCore.h is a dummy header to just test that this is working at all.
Pull Request resolved: pytorch#10019

Reviewed By: smessmer

Differential Revision: D9067262

Pulled By: ezyang

fbshipit-source-id: 58bab9c0aa83b56335e36b719b9b6505400d8dee
Summary:
We missed the upsample symbolic when bumping up the opset to 7.
Pull Request resolved: pytorch#10001

Reviewed By: bddppq

Differential Revision: D9067212

Pulled By: houseroad

fbshipit-source-id: 3e285d2800a32cb04fa82f8e7f261bdd010a8883
Summary: Pull Request resolved: pytorch#10064

Differential Revision: D9082082

Pulled By: gchanan

fbshipit-source-id: ae49470f5b4c89b13beb55fd825de1ba05b6a4fa
…es (pytorch#9948)

Summary:
zdevito
Pull Request resolved: pytorch#9948

Reviewed By: ezyang

Differential Revision: D9033666

Pulled By: apaszke

fbshipit-source-id: 02d75e391ed6dee62500842df50f0b6ee5e38846
Contributor

@facebook-github-bot facebook-github-bot left a comment

jramseyer has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@goldsborough
Contributor

You may want to set `--rebase` as the default for the future to avoid merging by accident. You'll really only ever need the `--rebase` option to be true: `git config --global pull.rebase true`

@jramseyer
Contributor Author

Thanks! I will do that!

Contributor

@zdevito zdevito left a comment

Looks good, thanks!

Contributor

@facebook-github-bot facebook-github-bot left a comment

jramseyer is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

goodlux pushed a commit to goodlux/pytorch that referenced this pull request Aug 15, 2018
Summary:
Fixing pytorch#8518

Sorry for the pile of commits; I forgot to rebase.
Pull Request resolved: pytorch#10027

Reviewed By: ezyang

Differential Revision: D9070028

Pulled By: jramseyer

fbshipit-source-id: 49729c9755ab8a586711e9f6d6a574f3035a7e75
@ezyang ezyang added the merged label Jun 26, 2019