extensions/openai: load extension settings via settings.yaml (#3953)

This commit is contained in:
Chenxiao Wang 2023-09-18 09:39:29 +08:00 committed by GitHub
parent cc8eda298a
commit 347aed4254
6 changed files with 48 additions and 16 deletions


@@ -44,6 +44,18 @@ OPENAI_API_BASE=http://0.0.0.0:5001/v1
If needed, replace 0.0.0.0 with the IP/port of your server.
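As an illustration, here is a minimal sketch of how a client request URL can be built from `OPENAI_API_BASE`. The endpoint path and payload shape follow the OpenAI chat-completions convention; the model name is a placeholder (the extension serves whichever model is currently loaded), and no request is actually sent here:

```python
import json
import os

# Read the base URL configured above (falling back to this README's default).
api_base = os.environ.get("OPENAI_API_BASE", "http://0.0.0.0:5001/v1")

# The chat completions endpoint lives under the /v1 base path.
url = f"{api_base}/chat/completions"

# A minimal chat payload in the OpenAI request format.
payload = json.dumps({
    "messages": [{"role": "user", "content": "Hello"}],
})
```
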
## Settings
To adjust your default settings, you can add the following to your `settings.yaml` file.
```
openai-port: 5002
openai-embedding_device: cuda
openai-sd_webui_url: http://127.0.0.1:7861
openai-debug: 1
```
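The `openai-` prefix namespaces this extension's settings inside the shared `settings.yaml`. As a rough sketch of the mapping (assumed behavior for illustration, not the extension's actual loader code), the prefix is stripped and the remainder names the setting:

```python
# Hypothetical sketch: a parsed settings.yaml, using the keys from the
# example above (a real loader would read the file with a YAML parser).
settings = {
    "openai-port": 5002,
    "openai-embedding_device": "cuda",
    "openai-debug": 1,
}

# Keep only keys belonging to this extension and strip the "openai-" prefix,
# leaving the bare setting names the extension uses internally.
PREFIX = "openai-"
extension_settings = {
    key[len(PREFIX):]: value
    for key, value in settings.items()
    if key.startswith(PREFIX)
}
```
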
### Models
This has been successfully tested with Alpaca, Koala, Vicuna, WizardLM, and their variants (e.g. gpt4-x-alpaca, GPT4all-snoozy, stable-vicuna, wizard-vicuna, etc.), and many others. Models that have been trained for **Instruction Following** work best. If you test with other models, please let me know how it goes. Less-than-satisfying results (so far) from: RWKV-4-Raven, llama, mpt-7b-instruct/chat.