From 97e67d136b0df4e977c0f5db6efd4e674c360c7f Mon Sep 17 00:00:00 2001
From: Light
Date: Thu, 13 Apr 2023 21:00:58 +0800
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 45379d6..382f4e2 100644
--- a/README.md
+++ b/README.md
@@ -239,7 +239,7 @@ Optionally, you can use the following command-line flags:
 | `--model_type MODEL_TYPE` | GPTQ: Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. |
 | `--groupsize GROUPSIZE` | GPTQ: Group size. |
 | `--pre_layer PRE_LAYER` | GPTQ: The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. |
-| `--warmup_autotune` | GPTQ: Enable warmup autotune. Only usable for triton. |
+| `--no-warmup_autotune` | GPTQ: Disable warmup autotune for triton. |
 
 #### FlexGen