Add CFG to llamacpp_HF (second attempt) (#3678)
parent d6934bc7bc
commit 3320accfdc
3 changed files with 14 additions and 6 deletions
@@ -280,6 +280,7 @@ Optionally, you can use the following command-line flags:
 | `--n_gqa N_GQA` | grouped-query attention. Must be 8 for llama-2 70b. |
 | `--rms_norm_eps RMS_NORM_EPS` | 5e-6 is a good value for llama-2 models. |
 | `--cpu` | Use the CPU version of llama-cpp-python instead of the GPU-accelerated version. |
+| `--cfg-cache` | llamacpp_HF: Create an additional cache for CFG negative prompts. |
 
 #### ctransformers
 
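As a usage sketch (assuming the webui's standard `server.py` entry point and its existing `--loader` flag; the model name is a placeholder), the new flag would be passed alongside the llamacpp_HF loader:

```sh
# Hypothetical invocation: start the webui with the llamacpp_HF loader and
# allocate the extra cache used to evaluate CFG negative prompts.
python server.py --model my-llama-2-13b --loader llamacpp_HF --cfg-cache
```

The apparent rationale for a dedicated flag is that CFG has to evaluate the negative prompt in a separate pass, which needs its own cache; making it opt-in avoids reserving that extra memory when CFG is not used.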