Remove GPTQ-for-LLaMa monkey patch support

AutoGPTQ will be the preferred GPTQ LoRA loader in the future.
This commit is contained in:
jllllll 2023-08-09 23:59:04 -05:00
parent bee73cedbd
commit e3d3565b2a
6 changed files with 0 additions and 103 deletions

@@ -11,7 +11,6 @@ This is the current state of LoRA integration in the web UI:
 | Transformers | Full support in 16-bit, `--load-in-8bit`, `--load-in-4bit`, and CPU modes. |
 | ExLlama | Single LoRA support. Fast to remove the LoRA afterwards. |
 | AutoGPTQ | Single LoRA support. Removing the LoRA requires reloading the entire model.|
-| GPTQ-for-LLaMa | Full support with the [monkey patch](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md#using-loras-with-gptq-for-llama). |
 
 ## Downloading a LoRA
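
With the monkey patch removed, LoRAs on quantized models are expected to go through AutoGPTQ or ExLlama instead. A minimal sketch of the remaining workflow, assuming the web UI's `--loader`, `--model`, `--lora`, and `--load-in-8bit` flags (exact model and LoRA names here are hypothetical):

```
# Apply a single LoRA on top of a GPTQ model via AutoGPTQ
python server.py --loader autogptq --model llama-7b-4bit-gptq --lora my-lora

# Or apply it through Transformers in 8-bit mode
python server.py --model llama-7b --lora my-lora --load-in-8bit
```

Per the table above, removing the LoRA afterwards is cheap with ExLlama but requires reloading the entire model with AutoGPTQ.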