Add llama-2-70b GGML support (#3285)
parent 6f4830b4d3
commit a07d070b6c
8 changed files with 20 additions and 4 deletions
@@ -247,6 +247,8 @@ Optionally, you can use the following command-line flags:
| `--n-gpu-layers N_GPU_LAYERS` | Number of layers to offload to the GPU. Only works if llama-cpp-python was compiled with BLAS. Set this to 1000000000 to offload all layers to the GPU. |
| `--n_ctx N_CTX` | Size of the prompt context. |
| `--llama_cpp_seed SEED` | Seed for llama-cpp models. Default 0 (random). |
| `--n_gqa N_GQA` | Grouped-query attention. Must be 8 for llama2 70b. |
| `--rms_norm_eps RMS_NORM_EPS` | Must be 1e-5 for llama2 70b. |
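Taken together, these flags appear in a single launch command. A minimal usage sketch (the GGML filename below is hypothetical; the `--n_gqa` and `--rms_norm_eps` values are the ones the table above requires for llama2 70b):

```sh
# Hypothetical example: launch the web UI with a llama-2-70b GGML model.
# --n_gqa 8 and --rms_norm_eps 1e-5 are required for the 70b model per the
# table above; --n-gpu-layers only takes effect with a BLAS-enabled build.
python server.py \
  --model llama-2-70b.ggmlv3.q4_K_M.bin \
  --n_gqa 8 \
  --rms_norm_eps 1e-5 \
  --n-gpu-layers 40
```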
#### AutoGPTQ