Deprecate torch dumps, move to safetensors (they load even faster)
parent 14ffa0b418
commit e195377050
5 changed files with 42 additions and 35 deletions
@@ -112,14 +112,6 @@ After downloading the model, follow these steps:

```
python download-model.py EleutherAI/gpt-j-6B --text-only
```
#### Converting to pytorch (optional)

The script `convert-to-torch.py` allows you to convert models to .pt format, which can be a lot faster to load to the GPU:

```
python convert-to-torch.py models/model-name
```
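The conversion step boils down to loading the model once and serializing it as a single .pt file so later loads skip shard assembly. A minimal sketch of that idea — the helper names and the use of `transformers`/`torch.save` here are assumptions for illustration, not the script's actual code:

```python
from pathlib import Path


def torch_dump_path(model_dir: str) -> Path:
    # Map models/model-name -> torch-dumps/model-name.pt,
    # matching the output location described in the README.
    name = Path(model_dir).name
    return Path("torch-dumps") / f"{name}.pt"


def convert(model_dir: str) -> Path:
    # Hypothetical conversion: load the model once, then serialize
    # the whole object as a single file. Imports are local so the
    # path helper above stays usable without torch installed.
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(model_dir)
    out = torch_dump_path(model_dir)
    out.parent.mkdir(exist_ok=True)
    torch.save(model, out)
    return out
```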

The output model will be saved to `torch-dumps/model-name.pt`. When you load a new model, the web UI first looks for this .pt file; if it is not found, it loads the model as usual from `models/model-name`.
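That lookup order can be sketched as a small path check (a sketch of the fallback described above; the function name and directory parameters are assumptions for illustration):

```python
from pathlib import Path


def resolve_model(model_name: str,
                  torch_dumps: Path = Path("torch-dumps"),
                  models: Path = Path("models")) -> Path:
    # Prefer the pre-converted .pt dump if it exists;
    # otherwise fall back to the original models/ directory.
    pt_file = Path(torch_dumps) / f"{model_name}.pt"
    if pt_file.exists():
        return pt_file
    return Path(models) / model_name
```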
## Starting the web UI

```
conda activate textgen
```