Add --use_fast option (closes #3741)
parent b973b91d73
commit d0d221df49
6 changed files with 22 additions and 13 deletions
```diff
@@ -269,6 +269,7 @@ Optionally, you can use the following command-line flags:
 | `--xformers` | Use xformer's memory efficient attention. This should increase your tokens/s. |
 | `--sdp-attention` | Use torch 2.0's sdp attention. |
 | `--trust-remote-code` | Set trust_remote_code=True while loading a model. Necessary for ChatGLM and Falcon. |
+| `--use_fast` | Set use_fast=True while loading a tokenizer. |

 #### Accelerate 4-bit
```
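The new `--use_fast` entry follows the same pattern as the existing boolean flags. A minimal sketch of how such a flag could be defined and forwarded to the tokenizer loader (an assumption for illustration: the project's actual argument parser and loading code are not shown in this diff):

```python
import argparse

# Hypothetical parser mirroring the documented flag; the real wiring in the
# project may differ.
parser = argparse.ArgumentParser()
parser.add_argument('--use_fast', action='store_true',
                    help='Set use_fast=True while loading a tokenizer.')

args = parser.parse_args(['--use_fast'])
print(args.use_fast)  # True

# The parsed value would then be passed through, e.g.:
#   AutoTokenizer.from_pretrained(model_path, use_fast=args.use_fast)
# so the fast (Rust-based) tokenizer is only requested when the flag is given.
```

With `store_true`, the flag defaults to `False` when omitted, matching the opt-in behavior described in the table row.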