Mixtral Instruct: detect prompt format for llama.cpp loader
Workaround until the tokenizer.chat_template KV field is implemented
This commit is contained in:
parent 3bbf6c601d
commit a060908d6c
2 changed files with 2 additions and 11 deletions
@@ -174,7 +174,7 @@
     instruction_template: 'OpenChat'
   .*codellama.*instruct:
     instruction_template: 'Llama-v2'
-  .*mistral.*instruct:
+  .*(mistral|mixtral).*instruct:
     instruction_template: 'Mistral'
   .*mistral.*openorca:
     instruction_template: 'ChatML'
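The diff widens the filename regex so that both Mistral and Mixtral instruct models resolve to the 'Mistral' instruction template. A minimal sketch of how such regex-keyed detection might work (the rule table and function names here are hypothetical illustrations, not the project's actual loader code):

```python
import re

# Hypothetical rule table mirroring the YAML entries above:
# each key is a regex matched against the lowercased model name,
# checked in order; the first match wins.
TEMPLATE_RULES = {
    r".*codellama.*instruct": "Llama-v2",
    r".*(mistral|mixtral).*instruct": "Mistral",  # widened by this commit
    r".*mistral.*openorca": "ChatML",
}

def detect_instruction_template(model_name: str):
    """Return the template for the first matching pattern, or None."""
    name = model_name.lower()
    for pattern, template in TEMPLATE_RULES.items():
        if re.match(pattern, name):
            return template
    return None

# The widened pattern now catches Mixtral as well as Mistral:
print(detect_instruction_template("Mixtral-8x7B-Instruct-v0.1"))  # Mistral
```

Ordering matters: the Mixtral alternation is checked before the OpenOrca rule, so a name matching both would get 'Mistral', which mirrors how a first-match scan over the YAML entries behaves.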