Commit graph

  • 753f6c5250 Attempt at making interface restart more robust oobabooga 2023-05-22 00:26:07 -03:00
  • 30225b9dd0 Fix --no-stream queue bug oobabooga 2023-05-22 00:02:59 -03:00
  • 288912baf1 Add a description for the extensions checkbox group oobabooga 2023-05-21 23:33:37 -03:00
  • 6e77844733 Add a description for penalty_alpha oobabooga 2023-05-21 23:08:44 -03:00
  • d63ef59a0f Apply LLaMA-Precise preset to Vicuna by default oobabooga 2023-05-21 23:00:42 -03:00
  • e3d578502a Improve "Chat settings" tab appearance a bit oobabooga 2023-05-21 22:58:14 -03:00
  • dcc3e54005 Various "impersonate" fixes oobabooga 2023-05-21 22:54:28 -03:00
  • e116d31180 Prevent unwanted log messages from modules oobabooga 2023-05-21 22:42:34 -03:00
  • fb91406e93 Fix generation_attempts continuing after an empty reply oobabooga 2023-05-21 22:14:50 -03:00
  • e18534fe12 Fix "continue" in chat-instruct mode oobabooga 2023-05-21 22:05:59 -03:00
  • d7fabe693d Reorganize parameters tab oobabooga 2023-05-21 16:24:35 -03:00
  • 8ac3636966 Add epsilon_cutoff/eta_cutoff parameters (#2258) oobabooga 2023-05-21 15:11:57 -03:00
  • 767a767989 Fix elevenlabs_tts too oobabooga 2023-05-21 14:11:46 -03:00
  • 1e5821bd9e Fix silero tts autoplay (attempt #2) oobabooga 2023-05-21 13:24:54 -03:00
  • a5d5bb9390 Fix silero tts autoplay oobabooga 2023-05-21 12:11:59 -03:00
  • 78b2478d9c assistant: space fix, system: prompt fix (#2219) matatonic 2023-05-20 22:32:34 -04:00
  • 05593a7834 Minor bug fix oobabooga 2023-05-20 23:22:36 -03:00
  • 9c53517d2c Fix superbooga error when querying empty DB (Issue #2160) (#2212) Luis Lopez 2023-05-21 09:27:22 +08:00
  • ab6acddcc5 Add Save/Delete character buttons (#1870) Matthew McAllister 2023-05-20 17:48:45 -07:00
  • c5af549d4b Add chat API (#2233) oobabooga 2023-05-20 18:42:17 -03:00
  • 2aa01e2303 Fix broken version of peft (#2229) jllllll 2023-05-20 15:54:51 -05:00
  • 159eccac7e Update Audio-Notification.md oobabooga 2023-05-19 23:20:42 -03:00
  • a3e9769e31 Added an audible notification after text generation in web. (#1277) HappyWorldGames 2023-05-20 05:16:06 +03:00
  • 1b52bddfcc Mitigate UnboundLocalError (#2136) Konstantin Gukov 2023-05-19 19:46:18 +02:00
  • 50c70e28f0 Lora Trainer improvements, part 6 - slightly better raw text inputs (#2108) Alex "mcmonkey" Goodwin 2023-05-19 08:58:54 -07:00
  • 511470a89b Bump llama-cpp-python version oobabooga 2023-05-19 12:13:25 -03:00
  • a9733d4a99 Metharme context fix (#2153) Carl Kenner 2023-05-20 00:16:13 +09:30
  • c86231377b Wizard Mega, Ziya, KoAlpaca, OpenBuddy, Chinese-Vicuna, Vigogne, Bactrian, H2O support, fix Baize (#2159) Carl Kenner 2023-05-20 00:12:41 +09:30
  • c98d6ad27f Create chat_style-messenger.css (#2187) Mykeehu 2023-05-19 16:31:06 +02:00
  • 499c2e009e Remove problematic regex from models/config.yaml oobabooga 2023-05-19 11:20:35 -03:00
  • 9d5025f531 Improve error handling while loading GPTQ models oobabooga 2023-05-19 11:20:08 -03:00
  • 39dab18307 Add a timeout to download-model.py requests oobabooga 2023-05-19 11:19:34 -03:00
  • 4ef2de3486 Fix dependencies downgrading from gptq install (#61) jllllll 2023-05-18 10:46:04 -05:00
  • 07510a2414 Change a message oobabooga 2023-05-18 10:58:37 -03:00
  • 0bcd5b6894 Soothe anxious users oobabooga 2023-05-18 10:56:49 -03:00
  • f052ab9c8f Fix setting pre_layer from within the ui oobabooga 2023-05-17 23:17:44 -03:00
  • b667ffa51d Simplify GPTQ_loader.py oobabooga 2023-05-17 16:22:56 -03:00
  • ef10ffc6b4 Add various checks to model loading functions oobabooga 2023-05-17 15:52:23 -03:00
  • abd361b3a0 Minor change oobabooga 2023-05-17 11:33:43 -03:00
  • 21ecc3701e Avoid a name conflict oobabooga 2023-05-17 11:23:13 -03:00
  • fb91c07191 Minor bug fix oobabooga 2023-05-17 11:16:37 -03:00
  • 1a8151a2b6 Add AutoGPTQ support (basic) (#2132) oobabooga 2023-05-17 11:12:12 -03:00
  • 10cf7831f7 Update Extensions.md oobabooga 2023-05-17 10:45:29 -03:00
  • 1f50dbe352 Experimental jank multiGPU inference that's 2x faster than native somehow (#2100) Alex "mcmonkey" Goodwin 2023-05-17 06:41:09 -07:00
  • fd743a0207 Small change oobabooga 2023-05-17 02:34:29 -03:00
  • aeb1b7a9c5 feature to save prompts with custom names (#1583) LoopLooter 2023-05-17 08:30:45 +03:00
  • c9c6aa2b6e Update docs/Extensions.md oobabooga 2023-05-17 02:04:37 -03:00
  • 85f74961f9 Update "Interface mode" tab oobabooga 2023-05-17 01:57:51 -03:00
  • 9e558cba9b Update docs/Extensions.md oobabooga 2023-05-17 01:43:32 -03:00
  • 687f21f965 Update docs/Extensions.md oobabooga 2023-05-17 01:41:01 -03:00
  • 8f85d84e08 Merge remote-tracking branch 'refs/remotes/origin/main' oobabooga 2023-05-17 01:32:42 -03:00
  • ce21804ec7 Allow extensions to define a new tab oobabooga 2023-05-17 01:25:01 -03:00
  • acf3dbbcc5 Allow extensions to have custom display_name (#1242) ye7iaserag 2023-05-17 07:08:22 +03:00
  • ad0b71af11 Add missing file oobabooga 2023-05-17 00:37:34 -03:00
  • a84f499718 Allow extensions to define custom CSS and JS oobabooga 2023-05-17 00:03:39 -03:00
  • 824fa8fc0e Attempt at making interface restart more robust oobabooga 2023-05-16 22:27:43 -03:00
  • 259020a0be Bump gradio to 3.31.0 oobabooga 2023-05-16 22:21:05 -03:00
  • 458a627ab9 fix: elevenlabs cloned voices do not show up in webui after entering API key (#2107) pixel 2023-05-16 17:21:36 -06:00
  • 7584d46c29 Refactor models.py (#2113) oobabooga 2023-05-16 19:52:22 -03:00
  • 5cd6dd4287 Fix no-mmap bug oobabooga 2023-05-16 17:35:49 -03:00
  • 89e37626ab Reorganize chat settings tab oobabooga 2023-05-16 17:22:59 -03:00
  • d205ec9706 Fix Training fails when evaluation dataset is selected (#2099) Forkoz 2023-05-16 16:40:19 +00:00
  • 428261eede fix: elevenlabs removed the need for the api key for refreshing voices (#2097) Orbitoid 2023-05-17 02:34:49 +10:00
  • cd9be4c2ba Update llama.cpp-models.md oobabooga 2023-05-16 00:49:32 -03:00
  • 26cf8c2545 add api port options (#1990) atriantafy 2023-05-16 00:44:16 +01:00
  • e657dd342d Add in-memory cache support for llama.cpp (#1936) Andrei 2023-05-15 19:19:55 -04:00
  • 0227e738ed Add settings UI for llama.cpp and fixed reloading of llama.cpp models (#2087) Jakub Strnad 2023-05-16 00:51:23 +02:00
  • 10869de0f4 Merge remote-tracking branch 'refs/remotes/origin/main' oobabooga 2023-05-15 19:39:48 -03:00
  • c07215cc08 Improve the default Assistant character oobabooga 2023-05-15 19:39:08 -03:00
  • 4e66f68115 Create get_max_memory_dict() function oobabooga 2023-05-15 19:38:27 -03:00
  • ae54d83455 Bump transformers from 4.28.1 to 4.29.1 (#2089) dependabot[bot] 2023-05-15 19:25:24 -03:00
  • 071f0776ad Add llama.cpp GPU offload option (#2060) AlphaAtlas 2023-05-14 21:58:11 -04:00
  • eee986348c Update llama-cpp-python from 0.1.45 to 0.1.50 (#2058) feeelX 2023-05-15 03:41:14 +02:00
  • 897fa60069 Sort selected superbooga chunks by insertion order oobabooga 2023-05-14 22:19:29 -03:00
  • b07f849e41 Add superbooga chunk separator option (#2051) Luis Lopez 2023-05-15 08:44:52 +08:00
  • ab08cf6465 [extensions/openai] clip extra leading space (#2042) matatonic 2023-05-14 11:57:52 -04:00
  • 3b886f9c9f Add chat-instruct mode (#2049) oobabooga 2023-05-14 10:43:55 -03:00
  • 5f6cf39f36 Change the injection context string oobabooga 2023-05-13 14:23:02 -03:00
  • 7cc17e3f1f Refactor superbooga oobabooga 2023-05-13 14:14:59 -03:00
  • 826c74c201 Expand superbooga to instruct mode and change the chat implementation oobabooga 2023-05-13 12:50:19 -03:00
  • c746a5bd00 Add .rstrip(' ') to openai api oobabooga 2023-05-12 14:40:48 -03:00
  • 3f1bfba718 Clarify how to start server.py with multimodal API support (#2025) Damian Stewart 2023-05-12 19:37:49 +02:00
  • 437d1c7ead Fix bug in save_model_settings oobabooga 2023-05-12 14:33:00 -03:00
  • 146a9cb393 Allow superbooga to download URLs in parallel oobabooga 2023-05-12 14:19:55 -03:00
  • df37ba5256 Update impersonate_wrapper oobabooga 2023-05-12 12:59:48 -03:00
  • e283ddc559 Change how spaces are handled in continue/generation attempts oobabooga 2023-05-12 12:50:29 -03:00
  • 2eeb27659d Fix bug in --cpu-memory oobabooga 2023-05-12 06:17:07 -03:00
  • fcb46282c5 Add a rule to config.yaml oobabooga 2023-05-12 06:11:58 -03:00
  • 5eaa914e1b Fix settings.json being ignored because of config.yaml oobabooga 2023-05-12 06:09:45 -03:00
  • a77965e801 Make the regex for "Save settings for this model" exact oobabooga 2023-05-12 00:43:13 -03:00
  • f98fd01dcd is_chat=False for /edits (#2011) matatonic 2023-05-11 18:15:11 -04:00
  • 71693161eb Better handle spaces in LlamaTokenizer oobabooga 2023-05-11 17:55:50 -03:00
  • 7221d1389a Fix a bug oobabooga 2023-05-11 17:11:10 -03:00
  • 0d36c18f5d Always return only the new tokens in generation functions oobabooga 2023-05-11 17:07:20 -03:00
  • c4f0e6d740 is_chat changes fix for openai extension (#2008) matatonic 2023-05-11 15:32:25 -04:00
  • 394bb253db Syntax improvement oobabooga 2023-05-11 16:27:50 -03:00
  • f7dbddfff5 Add a variable for tts extensions to use oobabooga 2023-05-11 16:12:46 -03:00
  • 638c6a65a2 Refactor chat functions (#2003) oobabooga 2023-05-11 15:37:04 -03:00
  • 4e9da22c58 missing stream api port added to docker compose (#2005) real 2023-05-11 20:07:56 +02:00
  • 309b72e549 [extension/openai] add edits & image endpoints & fix prompt return in non --chat modes (#1935) matatonic 2023-05-11 10:06:39 -04:00