Commit graph

  • f740ee558c
    Merge branch 'oobabooga:main' into lora-and-model-dir catalpaaa 2023-03-25 01:28:33 -07:00
  • ce9a5e3b53
    Update install.bat jllllll 2023-03-25 02:22:02 -05:00
  • 2e02d42682
    Changed things around to allow Micromamba to work with paths containing spaces. jllllll 2023-03-25 01:14:29 -05:00
  • 70f9565f37
    Update README.md oobabooga 2023-03-25 02:35:30 -03:00
  • 25be9698c7
    Fix LoRA on mps oobabooga 2023-03-25 01:18:32 -03:00
  • 3da633a497
    Merge pull request #529 from EyeDeck/main oobabooga 2023-03-24 23:51:01 -03:00
  • 1e260544cd
    Update install.bat jllllll 2023-03-24 21:25:14 -05:00
  • d51cb8292b
    Update server.py catalpaaa 2023-03-24 17:36:31 -07:00
  • 9e2963e0c8
    Update server.py catalpaaa 2023-03-24 17:35:45 -07:00
  • ec2a1facee
    Update server.py catalpaaa 2023-03-24 17:34:33 -07:00
  • b37c54edcf
    lora-dir, model-dir and login auth catalpaaa 2023-03-24 17:30:18 -07:00
  • fa916aa1de
    Update INSTRUCTIONS.txt jllllll 2023-03-24 18:28:46 -05:00
  • 586775ad47
    Update download-model.bat jllllll 2023-03-24 18:25:49 -05:00
  • bddbc2f898
    Update start-webui.bat jllllll 2023-03-24 18:19:23 -05:00
  • 2604e3f7ac
    Update download-model.bat jllllll 2023-03-24 18:15:24 -05:00
  • 24870e51ed
    Update micromamba-cmd.bat jllllll 2023-03-24 18:12:02 -05:00
  • f0c82f06c3
    Add files via upload jllllll 2023-03-24 18:09:44 -05:00
  • 9fa47c0eed
    Revert GPTQ_loader.py (accident) oobabooga 2023-03-24 19:57:12 -03:00
  • a6bf54739c
    Revert models.py (accident) oobabooga 2023-03-24 19:56:45 -03:00
  • eec773b1f4
    Update install.bat jllllll 2023-03-24 17:54:47 -05:00
  • 0a16224451
    Update GPTQ_loader.py oobabooga 2023-03-24 19:54:36 -03:00
  • a80aa65986
    Update models.py oobabooga 2023-03-24 19:53:20 -03:00
  • 817e6c681e
    Update install.bat jllllll 2023-03-24 17:51:13 -05:00
  • a80a5465f2
    Update install.bat jllllll 2023-03-24 17:27:29 -05:00
  • 507db0929d
    Do not use empty user messages in chat mode oobabooga 2023-03-24 17:22:22 -03:00
  • 6e1b16c2aa
    Update html_generator.py oobabooga 2023-03-24 17:18:27 -03:00
  • ffb0187e83
    Update chat.py oobabooga 2023-03-24 17:17:29 -03:00
  • c14e598f14
    Merge pull request #433 from mayaeary/fix/api-reload oobabooga 2023-03-24 16:56:10 -03:00
  • bfe960731f
    Merge branch 'main' into fix/api-reload oobabooga 2023-03-24 16:54:41 -03:00
  • 4a724ed22f
    Reorder imports oobabooga 2023-03-24 16:53:56 -03:00
  • 8fad84abc2
    Update extensions.py oobabooga 2023-03-24 16:51:27 -03:00
  • d8e950d6bd
    Don't load the model twice when using --lora oobabooga 2023-03-24 16:30:32 -03:00
  • fd99995b01
    Make the Stop button more consistent in chat mode oobabooga 2023-03-24 15:59:27 -03:00
  • b740c5b284
    Add display of context when input was generated Forkoz 2023-03-24 08:56:07 -05:00
  • 4f5c2ce785
    Fix chat_generation_attempts oobabooga 2023-03-24 02:03:30 -03:00
  • 04417b658b
    Update README.md oobabooga 2023-03-24 01:40:43 -03:00
  • bb4cb22453
    Download .pt files using download-model.py (for 4-bit models) oobabooga 2023-03-24 00:49:04 -03:00
  • 143b5b5edf
    Mention one-click-bandaid in the README oobabooga 2023-03-23 23:28:50 -03:00
  • dcfd866402
    Allow loading of .safetensors through GPTQ-for-LLaMa EyeDeck 2023-03-23 21:31:34 -04:00
  • 8747c74339
    Another missing import oobabooga 2023-03-23 22:19:01 -03:00
  • 7078d168c3
    Missing import oobabooga 2023-03-23 22:16:08 -03:00
  • d1327f99f9
    Fix broken callbacks.py oobabooga 2023-03-23 22:12:24 -03:00
  • 9bdb3c784d
    Minor fix oobabooga 2023-03-23 22:02:40 -03:00
  • b0abb327d8
    Update LoRA.py oobabooga 2023-03-23 22:02:09 -03:00
  • bf22d16ebc
    Clear cache while switching LoRAs oobabooga 2023-03-23 21:56:26 -03:00
  • 4578e88ffd
    Stop the bot from talking for you in chat mode oobabooga 2023-03-23 21:38:20 -03:00
  • 9bf6ecf9e2
    Fix LoRA device map (attempt) oobabooga 2023-03-23 16:49:41 -03:00
  • c5ebcc5f7e
    Change the default names (#518) oobabooga 2023-03-23 13:36:00 -03:00
  • 483d173d23
    Code reuse + indication Φφ 2023-03-21 20:19:38 +03:00
  • 1917b15275
    Unload and reload models on request Φφ 2023-03-21 13:15:42 +03:00
  • 29bd41d453
    Fix LoRA in CPU mode oobabooga 2023-03-23 01:05:13 -03:00
  • eac27f4f55
    Make LoRAs work in 16-bit mode oobabooga 2023-03-23 00:55:33 -03:00
  • bfa81e105e
    Fix FlexGen streaming oobabooga 2023-03-23 00:22:14 -03:00
  • 7b6f85d327
    Fix markdown headers in light mode oobabooga 2023-03-23 00:13:34 -03:00
  • de6a09dc7f
    Properly separate the original prompt from the reply oobabooga 2023-03-23 00:12:40 -03:00
  • d5fc1bead7
    Merge pull request #489 from Brawlence/ext-fixes oobabooga 2023-03-22 16:10:59 -03:00
  • bfb1be2820
    Minor fix oobabooga 2023-03-22 16:09:48 -03:00
  • 0abff499e2
    Use image.thumbnail oobabooga 2023-03-22 16:03:05 -03:00
  • 104212529f
    Minor changes oobabooga 2023-03-22 15:55:03 -03:00
  • 61346b88ea
    Add "seed" menu in the Parameters tab wywywywy 2023-03-22 18:40:20 +00:00
  • 5389fce8e1
    Extensions performance & memory optimisations Φφ 2023-03-22 07:47:54 +03:00
  • 45b7e53565
    Only catch proper Exceptions in the text generation function oobabooga 2023-03-20 20:36:02 -03:00
  • 6872ffd976
    Update README.md oobabooga 2023-03-20 16:53:14 -03:00
  • db4219a340
    Update comments oobabooga 2023-03-20 16:40:08 -03:00
  • 7618f3fe8c
    Add -gptq-preload for 4-bit offloading (#460) oobabooga 2023-03-20 16:30:56 -03:00
  • e96687b1d6
    Do not send empty user input as part of the prompt. Vladimir Belitskiy 2023-03-20 14:16:48 -04:00
  • 9a3bed50c3
    Attempt at fixing 4-bit with CPU offload oobabooga 2023-03-20 15:11:56 -03:00
  • 536d0a4d93
    Add an import oobabooga 2023-03-20 14:00:40 -03:00
  • ca47e016b4
    Do not display empty user messages in chat mode. Vladimir Belitskiy 2023-03-20 12:55:57 -04:00
  • 75a7a84ef2
    Exception handling (#454) oobabooga 2023-03-20 13:36:52 -03:00
  • a90f507abe
    Exit elevenlabs_tts if streaming is enabled oobabooga 2023-03-20 11:49:42 -03:00
  • 31ab2be8ef
    Remove redundant requirements #309 oobabooga 2023-03-19 22:10:55 -03:00
  • 164e05daad
    Download .py files using download-model.py oobabooga 2023-03-19 20:34:52 -03:00
  • dd4374edde
    Update README oobabooga 2023-03-19 20:15:15 -03:00
  • 9378754cc7
    Update README oobabooga 2023-03-19 20:14:50 -03:00
  • 7ddf6147ac
    Update README.md oobabooga 2023-03-19 19:25:52 -03:00
  • b552d2b58a
    Remove unused imports oobabooga 2023-03-19 19:24:41 -03:00
  • ddb62470e9
    --no-cache and --gpu-memory in MiB for fine VRAM control oobabooga 2023-03-19 19:21:41 -03:00
  • 4bafe45a51
    Merge pull request #309 from Brawlence/main oobabooga 2023-03-19 13:24:47 -03:00
  • eab8de0d4a
    Merge branch 'main' into Brawlence-main oobabooga 2023-03-19 13:09:59 -03:00
  • 4d701a6eb9
    Create a mirror for the preset menu oobabooga 2023-03-19 12:51:47 -03:00
  • 257edf5f56
    Make the Default preset more reasonable oobabooga 2023-03-19 12:30:51 -03:00
  • a78b6508fc
    Make custom LoRAs work by default #385 oobabooga 2023-03-19 12:11:35 -03:00
  • 7073e96093
    Add back RWKV dependency #98 oobabooga 2023-03-19 12:05:28 -03:00
  • 217e1d9fdf
    Merge branch 'main' of github.com:oobabooga/text-generation-webui oobabooga 2023-03-19 10:37:23 -03:00
  • c79fc69e95
    Fix the API example with streaming #417 oobabooga 2023-03-19 10:36:57 -03:00
  • acdbd6b708
    Check if app should display extensions ui Maya 2023-03-19 13:31:21 +00:00
  • 81c9d130f2
    Fix global Maya 2023-03-19 13:25:49 +00:00
  • 099d7a844b
    Add setup method to extensions Maya 2023-03-19 13:22:24 +00:00
  • bd27353a08
    Fix duplicating server on ui reload Maya 2023-03-19 12:51:27 +00:00
  • 0cbe2dd7e9
    Update README.md oobabooga 2023-03-18 12:24:54 -03:00
  • 36ac7be76d
    Merge pull request #407 from ThisIsPIRI/gitignore oobabooga 2023-03-18 11:57:10 -03:00
  • d2a7fac8ea
    Use pip instead of conda for pytorch oobabooga 2023-03-18 11:56:04 -03:00
  • 705f513c4c
    Add loras to .gitignore ThisIsPIRI 2023-03-18 23:33:24 +09:00
  • 9ed3a03d4b
    Don't use the official instructions oobabooga 2023-03-18 11:25:08 -03:00
  • a0b1a30fd5
    Specify torchvision/torchaudio versions oobabooga 2023-03-18 11:23:56 -03:00
  • c753261338
    Disable stop_at_newline by default oobabooga 2023-03-18 10:55:57 -03:00
  • 7c945cfe8e
    Don't include PeftModel every time oobabooga 2023-03-18 10:55:24 -03:00
  • 86b99006d9
    Remove rwkv dependency oobabooga 2023-03-18 10:27:52 -03:00
  • a163807f86
    Update README.md oobabooga 2023-03-18 03:07:27 -03:00