Description
In this example code:

```python
from typing import dataclass_transform
from dataclasses import field, dataclass


@dataclass_transform()
def create_model(*, init: bool = True):
    def deco[T](cls: T) -> T:
        return cls

    return deco


@create_model()
class A:
    name: str = field(init=False)


reveal_type(A(name="foo").name)


@dataclass
class B:
    name: str = field(init=False)


reveal_type(B(name="foo").name)
```

we have a custom dataclass transform, one class created using it, and another class created using the regular `dataclass` decorator. Both classes use `dataclasses.field` to specify a field.
The typing spec says this about the `field_specifiers` argument to `dataclass_transform`:

> If not specified, field_specifiers will default to an empty tuple (no field specifiers supported). The standard dataclass behavior supports only one type of field specifier called Field plus a helper function (field) that instantiates this class, so if we were describing the stdlib dataclass behavior, we would provide the tuple argument (dataclasses.Field, dataclasses.field)
This pretty clearly means that `dataclasses.Field` and `dataclasses.field` should not be accepted by default as field specifiers for a custom dataclass transform that doesn't list them explicitly in `field_specifiers`.
Mypy, pyright, and pyrefly all implement this, with the result that in the above code the call `A(name="foo")` is not an error: `field` is not a field specifier for `create_model`, so `init=False` is ignored there. Only `B(name="foo")` is an error.
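As a sanity check, the stdlib side of this agrees at runtime, not just statically (a sketch; a `default` is added here, unlike the original, so that `B()` itself is constructible):

```python
from dataclasses import dataclass, field


@dataclass
class B:
    # init=False removes `name` from the synthesized __init__, so passing
    # it as a keyword raises TypeError at runtime too.
    name: str = field(init=False, default="unset")


try:
    B(name="foo")
except TypeError as exc:
    print(f"rejected: {exc}")

print(B().name)  # the field still exists, initialized from its default
```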
In ty, we currently wrongly respect `dataclasses.field` as a field specifier for `create_model`.