add support for Orion-14B #5118
Conversation
Can confirm that it works with https://huggingface.co/OrionStarAI/Orion-14B-Chat/blob/main/Orion-14B-Chat.gguf (converted to Q5_K_M). However, it is not clear what the correct prompt format is.
Can confirm working on ROCm.
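For reference, a minimal sketch of the conversion-and-quantization workflow the comment above implies, using the standard llama.cpp tools. Exact script and binary names vary between llama.cpp versions, and the local paths here are placeholders, not taken from the PR:

```shell
#!/bin/sh
# Sketch: convert the Hugging Face Orion-14B-Chat checkout to GGUF,
# then quantize it to Q5_K_M with llama.cpp's quantize tool.
# Paths below are hypothetical examples.
set -e

# 1. Fetch the model (path is a placeholder for a local HF checkout)
git clone https://huggingface.co/OrionStarAI/Orion-14B-Chat ./Orion-14B-Chat

# 2. Convert the HF checkpoint to an f16 GGUF file
python convert-hf-to-gguf.py ./Orion-14B-Chat \
    --outfile ./orion-14b-chat-f16.gguf --outtype f16

# 3. Quantize to Q5_K_M (the quantization the commenter reports working)
./quantize ./orion-14b-chat-f16.gguf ./orion-14b-chat-Q5_K_M.gguf Q5_K_M

# 4. Smoke-test the quantized model
./main -m ./orion-14b-chat-Q5_K_M.gguf -p "Hello" -n 32
```

Steps 3 and 4 assume binaries built from the llama.cpp tree at the time of this PR; newer versions rename them (e.g. `llama-quantize`, `llama-cli`).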
Tangweirui2021
left a comment
These changes fix the conversion problem and enable the model to run correctly.
sharpHL
left a comment
Orion-14B-support
Co-authored-by: Georgi Gerganov <[email protected]>
Co-authored-by: slaren <[email protected]>
llm_load_print_meta: BOS token = 1 '
* add support for Orion-14B (https://huggingface.co/OrionStarAI/Orion-14B-Chat)
* flake8 support
* Update llama.cpp
  Co-authored-by: Georgi Gerganov <[email protected]>
* Update llama.cpp
  Co-authored-by: Georgi Gerganov <[email protected]>
* Update llama.cpp
  Co-authored-by: Georgi Gerganov <[email protected]>
* Update llama.cpp
  Co-authored-by: Georgi Gerganov <[email protected]>
* Update llama.cpp
  Co-authored-by: slaren <[email protected]>
* Update llama.cpp
* Update llama.cpp

---------

Co-authored-by: lixiaopu <[email protected]>
Co-authored-by: Georgi Gerganov <[email protected]>
Co-authored-by: slaren <[email protected]>
Support for the Orion-14B related models:
https://huggingface.co/OrionStarAI/Orion-14B-Chat
https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin
https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG