
Update XLAPreAutograd keys. #40265

Closed
ailzhang wants to merge 1 commit into pytorch:master from ailzhang:update_xlapreautograd

Conversation

@ailzhang
Contributor

No description provided.

@dr-ci

dr-ci bot commented Jun 19, 2020

💊 CI failures summary and remediations

As of commit 08013b9 (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-build-x86_32 (1/1)

Step: "pytorch android gradle build only x86_32 (for PR)" (full log | diagnosis details | 🔁 rerun)

Jun 19 03:22:34 ./.circleci/scripts/build_android_gradle.sh: line 73: /opt/gradle/gradle-6.5/bin/gradle: No such file or directory
Jun 19 03:22:34 ANDROID_NDK_HOME=/opt/ndk 
Jun 19 03:22:34 INSTALLED_DB= 
Jun 19 03:22:34 _=/usr/bin/env 
Jun 19 03:22:34 + export GRADLE_LOCAL_PROPERTIES=/var/lib/jenkins/workspace/android/local.properties 
Jun 19 03:22:34 + GRADLE_LOCAL_PROPERTIES=/var/lib/jenkins/workspace/android/local.properties 
Jun 19 03:22:34 + rm -f /var/lib/jenkins/workspace/android/local.properties 
Jun 19 03:22:34 + echo sdk.dir=/opt/android/sdk 
Jun 19 03:22:34 + echo ndk.dir=/opt/ndk 
Jun 19 03:22:34 + echo cmake.dir=/usr/local 
Jun 19 03:22:34 + /opt/gradle/gradle-6.5/bin/gradle -p android assembleRelease --debug --stacktrace -PABI_FILTERS=x86 --offline 
Jun 19 03:22:34 ./.circleci/scripts/build_android_gradle.sh: line 73: /opt/gradle/gradle-6.5/bin/gradle: No such file or directory 

This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt-out of these comments for your Pull Requests.

Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.


This comment has been revised 3 times.

@ailzhang ailzhang force-pushed the update_xlapreautograd branch from 77ee78e to 08013b9 Compare June 19, 2020 02:28
@ailzhang ailzhang requested review from ezyang and smessmer June 19, 2020 15:05
Contributor

@facebook-github-bot facebook-github-bot left a comment


@ailzhang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ailzhang ailzhang requested a review from bhosmer June 19, 2020 16:58
} else if (tid == DispatchKey::XLA) {
  return DeviceType::XLA;
} else if (tid == DispatchKey::XLAPreAutograd) {
  return DeviceType::XLA;
Contributor


You shouldn't really ever hit this.

Contributor Author


It's triggered by a test calling the legacy new constructor ;). Here's the backtrace without this change:

#0  __cxxabiv1::__cxa_throw (obj=0x55555850f190, tinfo=0x7fffcebf9008 <typeinfo for c10::Error>,
    dest=0x7fffcd2c03bc <c10::Error::~Error()>)
    at /home/nwani/m3/conda-bld/compilers_linux-64_1560109574129/work/.build/x86_64-conda_cos6-linux-gnu/src/gcc/libstdc++-v3/libsupc++/eh_throw.cc:80
#1  0x00007fffcd626eb1 in c10::computeDeviceType (tid=c10::DispatchKey::XLAPreAutograd)
    at ../c10/core/TensorOptions.h:659
#2  0x00007fffcdbc4dcb in torch::utils::(anonymous namespace)::options (
    dispatch_key=c10::DispatchKey::XLAPreAutograd, scalar_type=c10::ScalarType::Float, device=...)
    at ../torch/csrc/utils/tensor_new.cpp:69
#3  0x00007fffcdbc51cc in torch::utils::(anonymous namespace)::new_with_sizes (
    dispatch_key=c10::DispatchKey::XLAPreAutograd, scalar_type=c10::ScalarType::Float, device=..., sizes=...)
    at ../torch/csrc/utils/tensor_new.cpp:110
#4  0x00007fffcdbcaafc in torch::utils::legacy_tensor_new (dispatch_key=c10::DispatchKey::XLAPreAutograd,
    scalar_type=c10::ScalarType::Float, args=(0, 3, 3), kwargs=0x0) at ../torch/csrc/utils/tensor_new.cpp:577
#5  0x00007fffcd2dc84f in torch::autograd::THPVariable_new (self=<Tensor at remote 0x7ffeb01c8550>,
    args=(0, 3, 3), kwargs=0x0) at ../torch/csrc/autograd/generated/python_variable_methods.cpp:689

@ezyang Looks like this change is necessary?

// (I don't want to fix this in XLA right now because there might be
// more renaming coming in the future.)
static inline DispatchKeySet XLA() {
  return DispatchKeySet{DispatchKey::XLA, DispatchKey::XLAPreAutograd};
Contributor


I don't know if we should really treat this as BC with the XLA repo or not. Maybe this ought to just be a permanent fixture (especially if we add more Autograd keys for other backends, it will be better to store these in a centralized place).

@ailzhang
Contributor Author

(Actually, let me land this PR first to unblock the XLA PR (which fixes a perf regression), since this change only affects XLA. :P Will send new PRs for follow-ups. ;)

@facebook-github-bot
Contributor

@ailzhang merged this pull request in cfe1c6e.


4 participants