
Conversation

@Qubitium (Collaborator) commented Oct 9, 2024

I don't know why our CI did not pick up on this bug. There is a strange dependency override issue: intel-extension-for-transformers is overriding the installed torch[cuda] 2.4.1 with torch[cpu] 2.3.0.

EDIT: the source of the bug is actually the auto-round 0.3 setup code.
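
A quick way to confirm the override after running `pip install -vv .` is to inspect the installed torch build. A minimal sketch (the expected versions below come from this report, not from the CI config):

```python
# Sanity check: did the CUDA build of torch survive the install?
import torch

print(torch.__version__)   # expect 2.4.1; seeing 2.3.0 means a dependency downgraded it
print(torch.version.cuda)  # None means the CPU-only wheel replaced torch[cuda]
```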

@Qubitium changed the title from Update requirements.txt to [FIX] pip install on python 3.12 on Oct 9, 2024
@Qubitium (Collaborator, Author) commented Oct 9, 2024

@LRL-ModelCloud we need to fix this. First, check whether auto-round depends on intel-extension-for-transformers, and test `pip install -vv .` on a clean Python 3.12 conda env.

We may need to move auto-round to an optional/secondary package if it becomes an issue.
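
If we do demote it, one option is to declare auto-round as an extra instead of a hard dependency. A minimal sketch, assuming a setuptools-based setup.py (the package name and pins here are illustrative, not our actual config):

```python
# Hypothetical sketch: demote auto-round from install_requires to an optional extra.
from setuptools import setup, find_packages

setup(
    name="gptqmodel",  # illustrative name
    packages=find_packages(),
    install_requires=[
        "torch>=2.4.1",  # pin hard so a transitive dep cannot swap in a CPU build
    ],
    extras_require={
        # opt-in via `pip install gptqmodel[auto-round]`
        "auto-round": ["auto-round"],
    },
)
```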

First, check who is actually using intel-extension-for-transformers.
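
One way to answer that inside the broken env is to scan installed package metadata for whatever declares it as a requirement. A minimal stdlib-only sketch:

```python
# List installed distributions that require intel-extension-for-transformers.
from importlib.metadata import distributions

TARGET = "intel-extension-for-transformers"

for dist in distributions():
    for req in dist.requires or []:
        # requirement strings look like "name>=1.0; extra == 'foo'"
        if req.lower().startswith(TARGET):
            print(f"{dist.metadata['Name']} requires: {req}")
```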

There is also a second issue: the vllm/sglang dependency pins are too old. Update the versions and flashinfer (do we even need to pin flashinfer, since the latest vllm and sglang pull it in automatically?).
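
To see how stale the pins are versus what actually resolves, a quick version dump helps. A minimal sketch (the distribution names are assumptions; adjust if a wheel is published under a different name):

```python
# Print installed versions of the serving backends and flashinfer, if present.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("vllm", "sglang", "flashinfer"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```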

@Qubitium changed the title from [FIX] pip install on python 3.12 to [FIX] autoround depend causing torch-cpu to be installed on Oct 10, 2024
@Qubitium merged commit ec8e077 into main on Oct 10, 2024
@Qubitium deleted the Qubitium-patch-6 branch on October 10, 2024 at 04:23