Make llama.cpp read prompt size and seed from settings (#2299)

This commit is contained in:
DGdev91 2023-05-25 15:29:31 +02:00 committed by GitHub
parent ee674afa50
commit cf088566f8
5 changed files with 9 additions and 3 deletions

@@ -242,6 +242,8 @@ Optionally, you can use the following command-line flags:
| `--mlock` | Force the system to keep the model in RAM. |
| `--cache-capacity CACHE_CAPACITY` | Maximum cache capacity. Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed. |
| `--n-gpu-layers N_GPU_LAYERS` | Number of layers to offload to the GPU. Only works if llama-cpp-python was compiled with BLAS. Set this to 1000000000 to offload all layers to the GPU. |
| `--n_ctx N_CTX` | Size of the prompt context. |
| `--llama_cpp_seed SEED` | Seed for llama-cpp models. Default 0 (random). |
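
For illustration, a sketch of how the two new flags might be combined with an existing launch command; the model filename is a placeholder, not part of this commit:

```
# Hypothetical example: model path is a placeholder; --n_ctx and
# --llama_cpp_seed are the flags added in the table above.
python server.py --model ggml-model-q4_0.bin --n_ctx 2048 --llama_cpp_seed 42
```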
#### GPTQ