Remove GPTQ-for-LLaMa monkey patch support

AutoGPTQ will be the preferred GPTQ LoRA loader in the future.
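For reference, a minimal sketch of the AutoGPTQ-based LoRA path that replaces the monkey patch, assuming AutoGPTQ is installed with its PEFT utilities; the model and LoRA paths are placeholders, and the exact helper names (`get_gptq_peft_model`, `GPTQLoraConfig`) depend on the AutoGPTQ version:

```python
# Sketch only: loading a GPTQ model and a LoRA via AutoGPTQ instead of the
# GPTQ-for-LLaMa monkey patch. Paths below are placeholders.
from auto_gptq import AutoGPTQForCausalLM, get_gptq_peft_model
from auto_gptq.utils.peft_utils import GPTQLoraConfig

# Load a GPTQ-quantized checkpoint directly with AutoGPTQ.
model = AutoGPTQForCausalLM.from_quantized(
    "models/llama-7b-gptq",  # placeholder path to a quantized model
    device="cuda:0",
    use_safetensors=True,
)

# Wrap the quantized model with a PEFT LoRA adapter rather than patching
# GPTQ-for-LLaMa internals.
peft_config = GPTQLoraConfig(inference_mode=True)
model = get_gptq_peft_model(
    model,
    peft_config=peft_config,
    model_id="loras/my-lora",  # placeholder path to a trained LoRA
)
```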
jllllll 2023-08-09 23:59:04 -05:00
parent bee73cedbd
commit e3d3565b2a
6 changed files with 0 additions and 103 deletions

README.md

@@ -279,7 +279,6 @@ Optionally, you can use the following command-line flags:
 | `--groupsize GROUPSIZE` | Group size. |
 | `--pre_layer PRE_LAYER [PRE_LAYER ...]` | The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. For multi-GPU, write the numbers separated by spaces, e.g. `--pre_layer 30 60`. |
 | `--checkpoint CHECKPOINT` | The path to the quantized checkpoint file. If not specified, it will be automatically detected. |
-| `--monkey-patch` | Apply the monkey patch for using LoRAs with quantized models. |
 #### DeepSpeed