Add the --disable_exllama option for AutoGPTQ

This commit is contained in:
Chris Lefever 2023-08-12 02:26:58 -04:00
parent 0e05818266
commit 0230fa4e9c
6 changed files with 6 additions and 0 deletions


@@ -262,6 +262,7 @@ Optionally, you can use the following command-line flags:
| `--no_inject_fused_mlp` | Triton mode only: disable the use of fused MLP, which will use less VRAM at the cost of slower inference. |
| `--no_use_cuda_fp16` | This can make models faster on some systems. |
| `--desc_act` | For models that don't have a quantize_config.json, this parameter is used to define whether to set desc_act or not in BaseQuantizeConfig. |
| `--disable_exllama` | Disable ExLlama kernel, which can improve inference speed on some systems. |
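As a rough illustration of what this change adds, the sketch below mimics how boolean command-line flags like `--disable_exllama` are typically declared with `argparse` (the parser setup here is an assumption for demonstration, not the project's actual code):

```python
import argparse

# Hypothetical parser mirroring the flags in the table above; the real
# project defines these elsewhere. `store_true` makes each flag a
# boolean that defaults to False and becomes True when passed.
parser = argparse.ArgumentParser()
parser.add_argument('--desc_act', action='store_true')
parser.add_argument('--disable_exllama', action='store_true')

args = parser.parse_args(['--disable_exllama'])
print(args.disable_exllama)   # True: the flag was passed
print(args.desc_act)          # False: the flag was omitted
```

A flag parsed this way can then be forwarded to the AutoGPTQ loader to skip the ExLlama kernel.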
#### ExLlama