Hotfix: Add ParallelismConfig fallback for transformers with old accelerate #4063
Fix `TrainingArguments.parallelism_config` `NameError` with `accelerate<1.10.1`

Problem
When `accelerate<1.10.1` is installed, the module `accelerate.parallelism_config` does not exist. Transformers’ `TrainingArguments` currently annotates the field roughly as follows (paraphrased sketch; the exact upstream code may differ):
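```python
# Paraphrased sketch, not the verbatim transformers source: the import is
# version-gated, so with accelerate<1.10.1 the name ParallelismConfig is
# never bound in transformers.training_args.
import importlib.metadata
from dataclasses import dataclass, field
from typing import Optional

from packaging import version

if version.parse(importlib.metadata.version("accelerate")) >= version.parse("1.10.1"):
    from accelerate.parallelism_config import ParallelismConfig


@dataclass
class TrainingArguments:
    # The string annotation becomes a ForwardRef that typing.get_type_hints()
    # must later resolve against this module's globals.
    parallelism_config: Optional["ParallelismConfig"] = field(default=None)
```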
This creates a `ForwardRef`. On Python ≥ 3.12, when TRL constructs its CLI parsers, `transformers.HfArgumentParser` calls `typing.get_type_hints(TrainingArguments)`. Since `ParallelismConfig` is undefined in `transformers.training_args`, a `NameError` is raised.
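The failure can be reproduced without going through the CLI (illustrative snippet; in practice the error surfaces inside `HfArgumentParser`):

```python
# Requires accelerate<1.10.1 installed alongside transformers.
import typing

from transformers import TrainingArguments

# Expected to fail with: NameError: name 'ParallelismConfig' is not defined
typing.get_type_hints(TrainingArguments)
```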
Fix
This PR adds a compat helper that pre-binds `transformers.training_args.ParallelismConfig = Any` if the name is missing, ensuring `get_type_hints` can always resolve the forward reference. The helper is called early, before any `HfArgumentParser` is created.
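A minimal sketch of such a shim; the helper name and the explicit call shown here are assumptions for illustration, not necessarily the PR's exact code:

```python
# Hypothetical compat shim: pre-bind a placeholder so that
# typing.get_type_hints() can resolve ForwardRef('ParallelismConfig').
from typing import Any

import transformers.training_args as training_args


def maybe_bind_parallelism_config() -> None:
    """Bind ParallelismConfig to typing.Any if old accelerate left it undefined."""
    if not hasattr(training_args, "ParallelismConfig"):
        training_args.ParallelismConfig = Any


# Must run early, before any HfArgumentParser is constructed.
maybe_bind_parallelism_config()
```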
Upstream
The proper fix is proposed in: