Make llama.cpp read prompt size and seed from settings (#2299)
commit cf088566f8 (parent ee674afa50)
5 changed files with 9 additions and 3 deletions
@@ -242,6 +242,8 @@ Optionally, you can use the following command-line flags:
 | `--mlock` | Force the system to keep the model in RAM. |
 | `--cache-capacity CACHE_CAPACITY` | Maximum cache capacity. Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed. |
 | `--n-gpu-layers N_GPU_LAYERS` | Number of layers to offload to the GPU. Only works if llama-cpp-python was compiled with BLAS. Set this to 1000000000 to offload all layers to the GPU. |
+| `--n_ctx N_CTX` | Size of the prompt context. |
+| `--llama_cpp_seed SEED` | Seed for llama-cpp models. Default 0 (random). |
 
 #### GPTQ
 
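As a rough illustration of what the commit title describes, here is a minimal sketch of how flags like these could be forwarded to llama-cpp-python when a model is loaded. This is not the actual commit's code: the `load_llamacpp_model` helper and the `args` namespace are hypothetical, though the keyword arguments shown (`n_ctx`, `seed`, `n_gpu_layers`, `use_mlock`) are real parameters of `llama_cpp.Llama`.

```python
# Hypothetical sketch, not the commit's implementation: forwarding the
# command-line settings from the README table to llama-cpp-python.
from argparse import Namespace

from llama_cpp import Llama


def load_llamacpp_model(model_path: str, args: Namespace) -> Llama:
    """Build a Llama instance from parsed command-line flags."""
    return Llama(
        model_path=model_path,
        n_ctx=args.n_ctx,                # --n_ctx: size of the prompt context
        seed=args.llama_cpp_seed,        # --llama_cpp_seed: per the table, 0 means random (assumption about mapping)
        n_gpu_layers=args.n_gpu_layers,  # --n-gpu-layers: needs a BLAS build of llama-cpp-python
        use_mlock=args.mlock,            # --mlock: keep the model in RAM
    )


# Usage example with illustrative defaults:
args = Namespace(n_ctx=2048, llama_cpp_seed=0, n_gpu_layers=0, mlock=False)
# model = load_llamacpp_model("models/ggml-model.bin", args)
```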