type_as will not move Tensor to the same device #554
In some metrics, we use type_as to change the data type of tensors. But type_as only guarantees the data type; it does not guarantee that the tensor ends up on the same device as its argument, because the device index is not part of the type.
ignite/ignite/metrics/confusion_matrix.py, lines 73 to 90 in c92838c:

```python
def update(self, output):
    y_pred, y = self._check_shape(output)
    if y_pred.shape != y.shape:
        y_ohe = to_onehot(y.reshape(-1), self.num_classes)
        y_ohe_t = y_ohe.transpose(0, 1).float()
    else:
        y_ohe_t = y.transpose(0, 1).reshape(y.shape[1], -1).float()
    indices = torch.argmax(y_pred, dim=1)
    y_pred_ohe = to_onehot(indices.reshape(-1), self.num_classes)
    y_pred_ohe = y_pred_ohe.float()
    if self.confusion_matrix.type() != y_ohe_t.type():
        self.confusion_matrix = self.confusion_matrix.type_as(y_ohe_t)  # line 87
    self.confusion_matrix += torch.matmul(y_ohe_t, y_pred_ohe).float()  # line 89
    self._num_examples += y_pred.shape[0]
```
In the code above, if y is on a non-default CUDA device (e.g. cuda:1), then confusion_matrix will be moved to the default CUDA device at line 87, because type_as only matches the type string (torch.cuda.FloatTensor), which carries no device index. This makes line 89 fail because the two tensors are not on the same device. I think replacing type_as with to can resolve this issue.
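A minimal sketch of the suggested fix (tensor names taken from the snippet above; whether this is how the maintainers choose to patch it is an assumption):

```python
import torch

# Tensor.to(other) matches BOTH the dtype and the device of `other`,
# whereas type_as(other) only matches the type string, which does not
# encode a CUDA device index.
confusion_matrix = torch.zeros(3, 3, dtype=torch.int64)
y_ohe_t = torch.rand(3, 5)  # float32; in the bug report this lives on cuda:1

# Problematic: may land on the default CUDA device when y_ohe_t is on cuda:1.
# confusion_matrix = confusion_matrix.type_as(y_ohe_t)

# Suggested fix: dtype *and* device now follow y_ohe_t.
confusion_matrix = confusion_matrix.to(y_ohe_t)
print(confusion_matrix.dtype, confusion_matrix.device)
```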
BTW, the .float() in line 89 seems unnecessary.
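To illustrate why the .float() looks redundant: both operands of the matmul at line 89 are already converted to float a few lines earlier, and matmul of two float32 tensors yields float32, so the extra cast is a no-op.

```python
import torch

# Both inputs are float32, as in update() after the explicit .float() calls.
y_ohe_t = torch.rand(3, 5)
y_pred_ohe = torch.rand(5, 3)

product = torch.matmul(y_ohe_t, y_pred_ohe)
print(product.dtype)  # float32 already, without any trailing .float()
```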