Added hash for device #9246
Conversation
torch/csrc/Device.cpp (outdated)

      }
      case at::Device::Type::CUDA: {
        if (self->device.index() > 254) {
          AT_WARN("Device indices of > 254 might result in non-deterministic hashes");
How about implementing this at the level of ATen, with a specialization of `std::hash`?

      int64_t operator()(const at::Device& device) const noexcept {
        int64_t hash_val = static_cast<int64_t>(device.index());
        if (device.type() == at::Device::Type::CUDA) {
          hash_val += 2;
@pytorchbot retest this please
aten/src/ATen/Device.h (outdated)

    namespace std {
    template<> struct hash<at::Device>
    {
      int64_t operator()(const at::Device& device) const noexcept {
aten/src/ATen/Device.h (outdated)

    {
      int64_t operator()(const at::Device& device) const noexcept {
        int64_t hash_val = static_cast<int64_t>(device.index());
        if (device.type() == at::Device::Type::CUDA) {
@pytorchbot retest this please

@pytorchbot retest this please
torch/csrc/Device.cpp (outdated)

    }

    - PyObject *THPDevice_hash(THPDevice *self)
    + static size_t THPDevice_hash(THPDevice *self)
torch/csrc/Device.cpp (outdated)

      END_HANDLE_TH_ERRORS
    }

    static size_t THPDevice_hash(THPDevice *self)
Actually, a final improvement we could do would be to do a range check on the … Sorry, I didn't think of that before 😕
I think this should be protected by the way the hash is designed. It is merely adding 3 to the device index, which is upper bounded by the limit of …
@vishwakftw yes, the current code is correct for sure, but what if for some reason we'll end up using …
Thank you for the detailed explanation. Should something like this do:

    if (hash_val > std::numeric_limits<Py_ssize_t>::max()) { // hash_val is size_t
      throw std::runtime_error("Hash value limit exceeded, can overflow");
    }
No, that's kind of hard to fix for the user. Let's just use modulo arithmetic to limit the range.
Are you suggesting setting an upper bound for the device index, like in the issue?

    return static_cast<Py_ssize_t>(std::hash<at::Device>{}(self->device) % MAX_HASH);

If this is how you want it done, where should I add MAX_HASH? Sorry about too many questions.
Yeah, just use the …
facebook-github-bot
left a comment
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: If this is good, I could write some tests to ensure collision doesn't occur within a given range. Closes #7228

Pull Request resolved: pytorch/pytorch#9246
Differential Revision: D8872608
Pulled By: ezyang
fbshipit-source-id: 0ed29a73188f4167b42756f59a5c9a3d5cb37326
If this is good, I could write some tests to ensure collision doesn't occur within a given range.
Closes #7228