
[MPS] Tracking issue for ModuleInfo failures when enabling testing for torch.float16 #119108

@mikaylagawarecki

Description

🐛 Describe the bug

Issue accompanying #119039. The following tests fail when I enable torch.float16 ModuleInfo testing for MPS:

Not specific to MPS (these also fail on CPU and CUDA)

  • test_check_inplace_nn_CELU_mps_float16
  • test_check_inplace_nn_ELU_mps_float16
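
A minimal sketch of the kind of consistency check involved, for reproduction outside the ModuleInfo machinery (assumption: the failure is reachable by comparing out-of-place vs. in-place ELU in float16; the helper name and shapes here are illustrative, not from the test suite):

```python
import torch
import torch.nn.functional as F

def check_inplace_matches(x: torch.Tensor) -> bool:
    """Compare out-of-place ELU against the in-place variant on a clone."""
    out = F.elu(x)            # out-of-place
    out_ = F.elu_(x.clone())  # in-place on a clone, so x is untouched
    return torch.equal(out, out_)

# float32 baseline; the issue reports the float16 variant of the
# check_inplace test failing on CPU, CUDA, and MPS alike.
assert check_inplace_matches(torch.randn(4, 4))
```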

Errors with "error: input types 'tensor' and 'tensor<...f16>' are not broadcast compatible"

  • test_forward_nn_AvgPool2d_mps_float16
  • test_forward_nn_BCELoss_mps_float16
  • test_if_train_and_eval_modes_differ_nn_AvgPool2d_mps_float16
  • test_if_train_and_eval_modes_differ_nn_BCELoss_mps_float16
  • test_memory_format_nn_AvgPool2d_mps_float16
  • test_non_contiguous_tensors_nn_AvgPool2d_mps_float16
  • test_non_contiguous_tensors_nn_BCELoss_mps_float16
  • test_pickle_nn_AvgPool2d_mps_float16
  • test_pickle_nn_BCELoss_mps_float16
  • test_non_contiguous_tensors_nn_MSELoss_mps_float16
  • test_non_contiguous_tensors_nn_SmoothL1Loss_mps_float16
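
For triage, a standalone sketch of the failing path for the AvgPool2d cases (assumption: the error is reachable through a plain `F.avg_pool2d` forward call in float16 on MPS; the function name and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def run_avg_pool(device: str, dtype: torch.dtype) -> torch.Tensor:
    # Plain forward pass through avg_pool2d; the issue reports the MPS
    # float16 path failing with "input types 'tensor' and 'tensor<...f16>'
    # are not broadcast compatible" from the underlying MPS graph.
    x = torch.randn(1, 3, 8, 8, device=device, dtype=dtype)
    return F.avg_pool2d(x, kernel_size=2)

if torch.backends.mps.is_available():
    run_avg_pool("mps", torch.float16)  # reported to fail at time of writing

out = run_avg_pool("cpu", torch.float32)  # baseline path works
assert out.shape == (1, 3, 4, 4)
```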

Errors with "MPSNDArrayConvolutionA14.mm:3976: failed assertion `destination datatype must be fp32'"

  • test_memory_format_nn_Conv1d_mps_float16
  • test_memory_format_nn_Conv2d_mps_float16
  • test_memory_format_nn_ConvTranspose1d_mps_float16
  • test_memory_format_nn_ConvTranspose2d_mps_float16
  • test_memory_format_nn_LazyConv1d_mps_float16
  • test_memory_format_nn_LazyConv2d_mps_float16
  • test_memory_format_nn_LazyConvTranspose1d_mps_float16
  • test_memory_format_nn_LazyConvTranspose2d_mps_float16
  • test_non_contiguous_tensors_nn_Conv1d_mps_float16
  • test_non_contiguous_tensors_nn_Conv2d_mps_float16
  • test_non_contiguous_tensors_nn_ConvTranspose1d_mps_float16
  • test_non_contiguous_tensors_nn_ConvTranspose2d_mps_float16
  • test_non_contiguous_tensors_nn_LazyConv1d_mps_float16
  • test_non_contiguous_tensors_nn_LazyConv2d_mps_float16
  • test_non_contiguous_tensors_nn_LazyConvTranspose1d_mps_float16
  • test_non_contiguous_tensors_nn_LazyConvTranspose2d_mps_float16
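
A sketch of what the memory-format cases exercise (assumption: the assertion fires on a channels_last float16 convolution forward on MPS; helper name and shapes are illustrative):

```python
import torch

def run_conv_channels_last(device: str, dtype: torch.dtype) -> torch.Tensor:
    # test_memory_format runs modules on channels_last inputs; the issue
    # reports float16 convolutions on MPS hitting a hard assertion
    # ("destination datatype must be fp32") in MPSNDArrayConvolutionA14.
    conv = torch.nn.Conv2d(3, 8, kernel_size=3).to(device=device, dtype=dtype)
    x = torch.randn(1, 3, 16, 16, device=device, dtype=dtype)
    x = x.contiguous(memory_format=torch.channels_last)
    return conv(x)

out = run_conv_channels_last("cpu", torch.float32)  # baseline path works
assert out.shape == (1, 8, 14, 14)
```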

Tolerance errors

  • test_forward_nn_Bilinear_mps_float16
  • test_forward_nn_BCEWithLogitsLoss_mps_float16
  • test_forward_nn_GELU_mps_float16
  • test_forward_nn_LogSigmoid_mps_float16
  • test_forward_nn_MSELoss_mps_float16
  • test_forward_nn_NLLLoss_mps_float16
  • test_forward_nn_SoftMarginLoss_mps_float16
  • test_forward_nn_Softmax2d_mps_float16

Inf/NaN error

  • test_forward_nn_LogSoftmax_mps_float16
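
A sketch of the finiteness property this case presumably violates in float16 on MPS (assumption: the failure is non-finite values in the forward output; helper name and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def logsoftmax_is_finite(device: str, dtype: torch.dtype) -> bool:
    # log_softmax output should always be finite for finite inputs;
    # the issue reports inf/nan values from the MPS float16 path.
    x = torch.randn(4, 10, device=device, dtype=dtype)
    out = F.log_softmax(x, dim=-1)
    return bool(torch.isfinite(out).all())

assert logsoftmax_is_finite("cpu", torch.float32)  # baseline holds
```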

Incorrect output dtype? Errors with "values for attribute 'dtype' do not match: torch.float16 != torch.float32."

  • test_forward_nn_HuberLoss_mps_float16
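
A sketch of the dtype-preservation property being checked (assumption: the forward test asserts the output dtype matches the input dtype, and MPS returns float32 for float16 HuberLoss inputs; helper name is illustrative):

```python
import torch

def huber_output_dtype(device: str, dtype: torch.dtype) -> torch.dtype:
    # The output of a loss module should keep the input dtype; the issue
    # reports float32 coming back for float16 inputs on MPS.
    loss = torch.nn.HuberLoss()
    x = torch.randn(4, device=device, dtype=dtype)
    y = torch.randn(4, device=device, dtype=dtype)
    return loss(x, y).dtype

assert huber_output_dtype("cpu", torch.float32) == torch.float32
```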

Versions

main

cc @kulinseth @albanD @malfet @DenisVieriu97 @razarmehr


Labels: module: mps, triaged
