Keep changes minimal.
This commit is contained in:
parent 1d8526849b
commit f3591ccfa1
3 changed files with 53 additions and 22 deletions
@@ -238,6 +238,7 @@ Optionally, you can use the following command-line flags:

| Flag | Description |
|------|-------------|
| `--model_type MODEL_TYPE` | GPTQ: Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. |
| `--groupsize GROUPSIZE` | GPTQ: Group size. |
| `--pre_layer PRE_LAYER` | GPTQ: The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. |
| `--warmup_autotune` | GPTQ: Enable warmup autotune. Only usable for triton. |
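As a sketch, the GPTQ flags above might be combined in a single invocation like this. The model name and numeric values are placeholders for illustration, not taken from the diff:

```shell
# Hypothetical launch of server.py with the GPTQ flags listed above.
# "llama-7b-4bit" and the numeric values are illustrative only.
python server.py \
  --model llama-7b-4bit \
  --model_type llama \
  --groupsize 128 \
  --pre_layer 20
```

Here `--pre_layer 20` would place 20 layers on the GPU and offload the rest to the CPU, per the flag description in the table.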
#### FlexGen