[XLA] can't compile tf.linalg.inv when input tensor dtype is tf.complex64 #88040

@shaoyuyoung

Description


Issue type

Bug

Have you reproduced the bug with TensorFlow Nightly?

Yes

Source

source

TensorFlow version

nightly 20250223

Custom code

Yes

OS platform and distribution

No response

Mobile device

No response

Python version

No response

Bazel version

No response

GCC/compiler version

No response

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

tf.linalg.inv accepts a tf.complex64 input and runs correctly in eager mode, so XLA should be able to compile the same call. Instead, compilation fails with a tf2xla conversion error.

Standalone code to reproduce the issue

import os

import numpy as np
import tensorflow as tf

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"


class FFTInverseModel(tf.keras.Model):

    def __init__(self):
        super().__init__()

    def call(self, x):
        inv = tf.linalg.inv(x)
        return inv


model = FFTInverseModel()

input_shape = (2, 2)

x = tf.constant([[1.0, 2.0], [3.0, 4.0]], dtype=tf.complex64, shape=input_shape)  # tf.complex64 is the trigger condition

inputs = [x]

model(*inputs)
print("succeed on eager")


class FFTInverseModel(tf.keras.Model):

    def __init__(self):
        super().__init__()

    @tf.function(jit_compile=True)
    def call(self, x):
        inv = tf.linalg.inv(x)
        return inv


model = FFTInverseModel()
model(*inputs)
print("succeed on XLA")

Relevant log output

succeed on eager
tf2xla conversion failed while converting __inference_call_7[_XlaMustCompile=true,config_proto=13561319589895757934,executor_type=11160318154034397263]. Run with TF_DUMP_GRAPH_PREFIX=/path/to/dump/dir and --vmodule=xla_compiler=2 to obtain a dump of the compiled functions. [Op:__inference_call_7]
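
As a sanity check on the eager result, the inverse of the same sample matrix can be verified independently with NumPy, whose np.linalg.inv supports complex dtypes (a minimal sketch, assuming only NumPy is available; this does not exercise the XLA path):

```python
import numpy as np

# Same sample matrix as in the reproduction, as complex64.
x = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.complex64)

# np.linalg.inv handles complex inputs, matching what eager TF returns.
inv = np.linalg.inv(x)

# Round-trip check: x @ inv should be numerically the identity.
print(np.allclose(x @ inv, np.eye(2)))
```

This confirms the input is well-conditioned and invertible, so the XLA failure is a missing complex64 kernel/lowering for MatrixInverse rather than a property of the input.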
