Revert "Remove GPTQ-for-LLaMa monkey patch support"
This reverts commit e3d3565b2a.
parent 16e2b117b4
commit c7f52bbdc1
6 changed files with 103 additions and 0 deletions
@@ -279,6 +279,7 @@ Optionally, you can use the following command-line flags:
| `--groupsize GROUPSIZE` | Group size. |
| `--pre_layer PRE_LAYER [PRE_LAYER ...]` | The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. For multi-GPU, write the numbers separated by spaces, e.g. `--pre_layer 30 60`. |
| `--checkpoint CHECKPOINT` | The path to the quantized checkpoint file. If not specified, it will be automatically detected. |
| `--monkey-patch` | Apply the monkey patch for using LoRAs with quantized models. |

#### DeepSpeed
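For reference, the restored flags combine on the command line roughly as sketched below. This is a hypothetical invocation, not from the diff itself: the `server.py` entry point is the webui's usual launcher, but the model name and layer counts shown are illustrative assumptions.

```sh
# Sketch: run a 4-bit quantized model with CPU offloading across two GPUs
# (30 layers on GPU 0, 60 on GPU 1) and the LoRA monkey patch enabled.
# Model name and layer split are placeholder values.
python server.py --model llama-7b-4bit \
    --groupsize 128 \
    --pre_layer 30 60 \
    --monkey-patch
```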