Conversation
💊 CI failures summary and remediations
As of commit 08013b9 (more details on the Dr. CI page):
🕵️ 1 new failure recognized by patterns
The following CI failures do not appear to be due to upstream breakages:
77ee78e to 08013b9
facebook-github-bot
left a comment
@ailzhang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
} else if (tid == DispatchKey::XLA) {
  return DeviceType::XLA;
} else if (tid == DispatchKey::XLAPreAutograd) {
  return DeviceType::XLA;
You shouldn't really ever hit this.
It's triggered by a test calling the legacy `new` constructor ;). Here's the backtrace without this change:
#0 __cxxabiv1::__cxa_throw (obj=0x55555850f190, tinfo=0x7fffcebf9008 <typeinfo for c10::Error>,
dest=0x7fffcd2c03bc <c10::Error::~Error()>)
at /home/nwani/m3/conda-bld/compilers_linux-64_1560109574129/work/.build/x86_64-conda_cos6-linux-gnu/src/gcc/libstdc++-v3/libsupc++/eh_throw.cc:80
#1 0x00007fffcd626eb1 in c10::computeDeviceType (tid=c10::DispatchKey::XLAPreAutograd)
at ../c10/core/TensorOptions.h:659
#2 0x00007fffcdbc4dcb in torch::utils::(anonymous namespace)::options (
dispatch_key=c10::DispatchKey::XLAPreAutograd, scalar_type=c10::ScalarType::Float, device=...)
at ../torch/csrc/utils/tensor_new.cpp:69
#3 0x00007fffcdbc51cc in torch::utils::(anonymous namespace)::new_with_sizes (
dispatch_key=c10::DispatchKey::XLAPreAutograd, scalar_type=c10::ScalarType::Float, device=..., sizes=...)
at ../torch/csrc/utils/tensor_new.cpp:110
#4 0x00007fffcdbcaafc in torch::utils::legacy_tensor_new (dispatch_key=c10::DispatchKey::XLAPreAutograd,
scalar_type=c10::ScalarType::Float, args=(0, 3, 3), kwargs=0x0) at ../torch/csrc/utils/tensor_new.cpp:577
#5 0x00007fffcd2dc84f in torch::autograd::THPVariable_new (self=<Tensor at remote 0x7ffeb01c8550>,
args=(0, 3, 3), kwargs=0x0) at ../torch/csrc/autograd/generated/python_variable_methods.cpp:689
@ezyang Looks like this change is necessary?
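The backtrace above shows why the extra branch is needed: the legacy constructor path reaches `computeDeviceType` with `DispatchKey::XLAPreAutograd`, and without a case for that key the function throws. Here is a minimal, self-contained sketch of that mapping; the enums are stand-ins for illustration only (the real definitions live in c10), not the actual c10 types:

```cpp
#include <stdexcept>

// Hypothetical stand-in enums for illustration; the real DispatchKey and
// DeviceType are defined in c10/core/DispatchKey.h and c10/core/DeviceType.h.
enum class DispatchKey { CPU, CUDA, XLA, XLAPreAutograd };
enum class DeviceType { CPU, CUDA, XLA };

// Sketch of the mapping discussed in this thread: both the XLA and the
// XLAPreAutograd keys resolve to DeviceType::XLA. Without the
// XLAPreAutograd branch, control falls through to the throw below, which
// is exactly the c10::Error in the backtrace above.
DeviceType computeDeviceType(DispatchKey tid) {
  if (tid == DispatchKey::CPU) {
    return DeviceType::CPU;
  } else if (tid == DispatchKey::CUDA) {
    return DeviceType::CUDA;
  } else if (tid == DispatchKey::XLA) {
    return DeviceType::XLA;
  } else if (tid == DispatchKey::XLAPreAutograd) {
    return DeviceType::XLA;
  }
  throw std::runtime_error("Unknown DispatchKey");
}
```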
// (I don't want to fix this in XLA right now because there might be
// more renaming coming in the future.)
static inline DispatchKeySet XLA() {
  return DispatchKeySet{DispatchKey::XLA, DispatchKey::XLAPreAutograd};
I don't know if we should really treat this as BC with the XLA repo or not. Maybe this ought to just be a permanent fixture (especially if we add more Autograd keys for other backends, it will be better to store these in a centralized place).
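The "centralized place" idea above can be sketched as a single helper that owns the full set of keys belonging to a backend. This is a toy model with a hypothetical bitset type, not the real `c10::DispatchKeySet` (which lives in c10/core/DispatchKeySet.h); it only illustrates the design choice:

```cpp
#include <cstdint>
#include <initializer_list>

// Hypothetical stand-ins for illustration; the real types are in c10.
enum class DispatchKey : uint8_t { CPU, CUDA, XLA, XLAPreAutograd };

// Toy bitset over dispatch keys, mimicking the shape of c10::DispatchKeySet.
struct DispatchKeySet {
  uint64_t bits = 0;
  DispatchKeySet(std::initializer_list<DispatchKey> keys) {
    for (auto k : keys) bits |= (1ull << static_cast<uint8_t>(k));
  }
  bool has(DispatchKey k) const {
    return (bits & (1ull << static_cast<uint8_t>(k))) != 0;
  }
};

// The centralized fixture: every key the XLA backend owns lives here, so
// adding more per-backend Autograd keys later only touches this one helper.
static inline DispatchKeySet XLA() {
  return DispatchKeySet{DispatchKey::XLA, DispatchKey::XLAPreAutograd};
}
```

Callers then test membership through the helper instead of hard-coding individual keys at each call site.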
(Actually, let me land this PR first to unblock the XLA PR (which fixes a perf regression), since this change only affects XLA. :P Will send new PRs for follow-ups. ;)
No description provided.