Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. (#2777)

LarryVRH 2023-06-22 02:31:42 +08:00 committed by GitHub
parent a06acd6d09
commit 580c1ee748
7 changed files with 101 additions and 6 deletions


@@ -212,7 +212,7 @@ Optionally, you can use the following command-line flags:
| Flag | Description |
|--------------------------------------------|-------------|
-| `--loader LOADER` | Choose the model loader manually, otherwise, it will get autodetected. Valid options: transformers, autogptq, gptq-for-llama, exllama, llamacpp, rwkv, flexgen |
+| `--loader LOADER` | Choose the model loader manually, otherwise, it will get autodetected. Valid options: transformers, autogptq, gptq-for-llama, exllama, exllama_hf, llamacpp, rwkv, flexgen |
#### Accelerate/transformers