Falcon support (trust-remote-code and autogptq checkboxes) (#2367)
--------- Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Parent: 60ae80cf28
Commit: 204731952a
6 changed files with 9 additions and 5 deletions
@@ -226,7 +226,7 @@ Optionally, you can use the following command-line flags:
 | `--no-cache` | Set `use_cache` to False while generating text. This reduces the VRAM usage a bit with a performance cost. |
 | `--xformers` | Use xformer's memory efficient attention. This should increase your tokens/s. |
 | `--sdp-attention` | Use torch 2.0's sdp attention. |
-| `--trust-remote-code` | Set trust_remote_code=True while loading a model. Necessary for ChatGLM. |
+| `--trust-remote-code` | Set trust_remote_code=True while loading a model. Necessary for ChatGLM and Falcon. |

 #### Accelerate 4-bit
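As a minimal sketch of what the flag does (this is an illustration, not the project's actual code): `--trust-remote-code` is a boolean CLI switch whose value is forwarded as the `trust_remote_code` keyword argument when the model is loaded with `transformers`' `from_pretrained()`. Models such as ChatGLM and Falcon ship custom modeling code on the Hub, so loading them requires this opt-in.

```python
import argparse

# Hypothetical parser mirroring the flag from the table above.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--trust-remote-code",
    action="store_true",
    help="Set trust_remote_code=True while loading a model. "
         "Necessary for ChatGLM and Falcon.",
)
args = parser.parse_args(["--trust-remote-code"])

# Kwargs that would be forwarded to e.g.
# AutoModelForCausalLM.from_pretrained(model_name, **load_kwargs)
load_kwargs = {"trust_remote_code": args.trust_remote_code}
print(load_kwargs)
```

Without the flag, `args.trust_remote_code` defaults to `False` and Hub-hosted custom model code is not executed.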