
Conversation

@ggerganov
Member

I think this functionality never ended up being used, and GGUF support was at some point implemented in the OG repo, so there is no reason to keep the code around.

@ggerganov ggerganov merged commit 78aacf3 into master Feb 28, 2024
@ggerganov ggerganov deleted the gg/remove-awq branch February 28, 2024 15:36
@wilderfield

@ggerganov It looks like this functionality is still referenced by convert.py?
https://github.com/ggerganov/llama.cpp/blob/652ca2bded3c818320d92c70d2b67f64bdbff5e5/convert.py#L1395-L1407

I'm a bit confused now: does this repo support quantizing models with AWQ?
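
For context, the hook at that permalink looked roughly like this (a paraphrased, self-contained sketch, not the verbatim code; `add_scale_weights` came from the `awq-py/` directory that this PR deleted):

```python
import argparse
import sys
from pathlib import Path

parser = argparse.ArgumentParser()
parser.add_argument("model", type=Path, help="directory containing the HF model")
parser.add_argument("--awq-path", type=Path, default=None,
                    help="path to a cache of AWQ scales")
args = parser.parse_args()

if args.awq_path:
    # awq-py/ (deleted by this PR) provided the apply_awq helper
    sys.path.insert(1, str(Path(__file__).parent / "awq-py"))
    from awq.apply_awq import add_scale_weights

    # Bake the AWQ scales into a copy of the model, then point the
    # rest of the conversion at that copy instead of the original.
    tmp_model_path = args.model / "weighted_model"
    if not tmp_model_path.is_dir():
        tmp_model_path.mkdir(parents=True, exist_ok=True)
        add_scale_weights(str(args.model), str(args.awq_path), str(tmp_model_path))
    args.model = tmp_model_path
```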

ggerganov added a commit that referenced this pull request Mar 6, 2024
@ggerganov
Member Author

I just removed the leftovers from convert.py

llama.cpp does not provide AWQ quantization functionality. I believe there was work in the AWQ repos to add support for generating GGUF files directly, which are in turn compatible with llama.cpp.
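
So the expected flow is: produce the GGUF externally, then load it here like any other model — a minimal sketch, assuming a `main` binary built from this repo and a hypothetical model file name:

```sh
# Hypothetical file name; a valid GGUF produced by an external
# AWQ exporter loads the same way as any other GGUF model.
./main -m models/llama-2-7b-awq.gguf -p "Hello" -n 64
```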

@wilderfield

Thanks @ggerganov. P.S. I really want to know what's in your .vimrc.

@ggerganov
Member Author

hazelnutcloud pushed a commit to hazelnutcloud/llama.cpp that referenced this pull request Mar 10, 2024
NeoZhangJianyu pushed a commit to NeoZhangJianyu/llama.cpp that referenced this pull request Mar 12, 2024
jordankanter pushed a commit to jordankanter/llama.cpp that referenced this pull request Mar 13, 2024