Add ChatGLM support (#1256)

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Forkoz 2023-04-16 22:15:03 +00:00 committed by GitHub
parent 6a03ad0824
commit c6fe1ced01
7 changed files with 31 additions and 10 deletions

@@ -219,6 +219,7 @@ Optionally, you can use the following command-line flags:
| `--no-cache` | Set `use_cache` to False while generating text. This reduces VRAM usage slightly, at a performance cost. |
| `--xformers` | Use xformers' memory-efficient attention. This should increase your tokens/s. |
| `--sdp-attention` | Use torch 2.0's scaled dot-product (sdp) attention. |
| `--trust-remote-code` | Set `trust_remote_code=True` while loading a model. Necessary for ChatGLM. |
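
For context, a minimal sketch of what the new flag corresponds to when loading a model with the transformers library; this is not the webui's actual loader code, and the checkpoint name below is illustrative:

```python
# Minimal sketch: what --trust-remote-code maps to in a plain
# transformers load. The model repo is an illustrative example.
from transformers import AutoModel, AutoTokenizer

model_name = "THUDM/chatglm-6b"  # example ChatGLM checkpoint (assumption)

# ChatGLM ships custom modeling code inside its model repo, so
# transformers must be told to execute it; trust_remote_code=True
# opts in to running that remote code.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
```

Without `trust_remote_code=True`, transformers refuses to run the repo's custom model class and the load fails, which is why the flag is required for ChatGLM.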
#### llama.cpp