For embedding models that lack BOS/EOS tokens (such as BAAI/bge-*), the BOS/EOS token ids default to -1, which causes a segfault during model load when token_get_text is called on those ids. I would recommend either short-circuiting these calls to return the empty string in that case, or skipping the chat template code entirely for embedding models.