Add 4-bit LoRA support (#1200)

oobabooga 2023-04-16 23:26:52 -03:00 committed by GitHub
parent ec3e869c27
commit 39099663a0
7 changed files with 100 additions and 34 deletions

@@ -237,6 +237,7 @@ Optionally, you can use the following command-line flags:
| `--groupsize GROUPSIZE` | GPTQ: Group size. |
| `--pre_layer PRE_LAYER` | GPTQ: The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. |
| `--no-warmup_autotune` | GPTQ: Disable warmup autotune for Triton. |
| `--monkey-patch` | GPTQ: Apply the monkey patch for using LoRAs with quantized models. |
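As a usage sketch of the new flag: the invocation below assumes the repository's `server.py` entry point and the common `--model` and `--wbits` flags (not shown in this diff); the model name is a placeholder.

```shell
# Hypothetical example: load a 4-bit GPTQ-quantized model and enable the
# monkey patch so a LoRA can be applied on top of it.
# --groupsize and --monkey-patch are the GPTQ flags documented above;
# --model, --wbits, and the model name are assumptions for illustration.
python server.py --model llama-7b-4bit --wbits 4 --groupsize 128 --monkey-patch
```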
#### FlexGen