Commit graph

  • 1445ea86f7 Add --output and better metadata for downloading models oobabooga 2023-03-29 20:26:44 -03:00
  • 58349f44a0 Handle training exception for unsupported models oobabooga 2023-03-29 11:55:34 -03:00
  • a6d0373063 Fix training dataset loading #636 oobabooga 2023-03-29 11:48:17 -03:00
  • 41b58bc47e Update README.md oobabooga 2023-03-29 11:02:29 -03:00
  • 0de4f24b12 Merge pull request #4 from jllllll/oobabooga-windows oobabooga 2023-03-29 09:49:32 -03:00
  • ed0e593161 Change Micromamba download jllllll 2023-03-29 02:47:19 -05:00
  • 3b4447a4fe Update README.md oobabooga 2023-03-29 02:24:11 -03:00
  • 5d0b83c341 Update README.md oobabooga 2023-03-29 02:22:19 -03:00
  • c2a863f87d Mention the updated one-click installer oobabooga 2023-03-29 02:11:51 -03:00
  • da3aa8fbda Merge pull request #2 from jllllll/oobabooga-windows oobabooga 2023-03-29 01:55:47 -03:00
  • 1edfb96778 Fix loading extensions from within the interface oobabooga 2023-03-28 23:27:02 -03:00
  • aaa218a102 Remove unused import. Nikita Skakun 2023-03-28 18:32:49 -07:00
  • ff515ec2fe Improve progress bar visual style Nikita Skakun 2023-03-28 18:29:20 -07:00
  • 304f812c63 Gracefully handle CUDA out of memory errors with streaming oobabooga 2023-03-28 19:20:50 -03:00
  • 4d8e101006 Refactor download process to use multiprocessing Nikita Skakun 2023-03-28 14:24:23 -07:00
  • b2f356a9ae Generalize GPTQ_loader, support any model (#615 from mayaeary/feature/gpt-j-4bit-v2) oobabooga 2023-03-28 18:00:09 -03:00
  • 010b259dde Update documentation oobabooga 2023-03-28 17:46:00 -03:00
  • 0bec15ebcd Reorder imports oobabooga 2023-03-28 17:34:15 -03:00
  • 41ec682834 Disable kernel threshold for gpt-j Maya Eary 2023-03-28 22:45:38 +03:00
  • 1ac003d41c Merge branch 'oobabooga:main' into feature/gpt-j-4bit-v2 Maya 2023-03-28 22:30:39 +03:00
  • aebd3cf110 Merge pull request #616 from mayaeary/fix/api-convert-params oobabooga 2023-03-28 15:21:58 -03:00
  • d1377c37af Fixes for api server - chat mode and integer temperature Maya Eary 2023-03-28 20:57:16 +03:00
  • 1c075d8d21 Fix typo Maya Eary 2023-03-28 20:43:50 +03:00
  • c8207d474f Generalized load_quantized Maya Eary 2023-03-28 20:38:55 +03:00
  • cac577d99f Fix interface reloading oobabooga 2023-03-28 13:25:58 -03:00
  • 88ad86249d Remove unnecessary file oobabooga 2023-03-28 13:19:52 -03:00
  • 91aa5b460e If both .pt and .safetensors are present, download only safetensors oobabooga 2023-03-28 13:08:38 -03:00
  • 8579fe51dd Fix new lines in the HTML tab oobabooga 2023-03-28 12:59:34 -03:00
  • 46f6536fae Merge pull request #570 from mcmonkey4eva/add-train-lora-tab oobabooga 2023-03-28 02:53:51 -03:00
  • b0f05046b3 remove duplicate import Alex "mcmonkey" Goodwin 2023-03-27 22:50:37 -07:00
  • e817fac542 better defaults Alex "mcmonkey" Goodwin 2023-03-27 22:29:23 -07:00
  • 9cc811a0e6 fix LoRA path typo in #549 Alex "mcmonkey" Goodwin 2023-03-27 22:16:40 -07:00
  • 2e08af4edf implement initial Raw Text File Input Alex "mcmonkey" Goodwin 2023-03-27 22:15:32 -07:00
  • b749952fe3 change number minimums to 0 Alex "mcmonkey" Goodwin 2023-03-27 21:22:43 -07:00
  • ec6224f556 use new shared.args.lora_dir Alex "mcmonkey" Goodwin 2023-03-27 20:04:16 -07:00
  • 31f04dc615 Merge branch 'main' into add-train-lora-tab Alex "mcmonkey" Goodwin 2023-03-27 20:03:30 -07:00
  • 966168bd2a Merge pull request #602 from oobabooga/dependabot/pip/accelerate-0.18.0 oobabooga 2023-03-27 23:53:26 -03:00
  • c188975a01 Merge pull request #549 from catalpaaa/lora-and-model-dir oobabooga 2023-03-27 23:46:47 -03:00
  • 53da672315 Fix FlexGen oobabooga 2023-03-27 23:44:21 -03:00
  • ee95e55df6 Fix RWKV tokenizer oobabooga 2023-03-27 23:42:29 -03:00
  • 036163a751 Change description oobabooga 2023-03-27 23:39:26 -03:00
  • 30585b3e71 Update README oobabooga 2023-03-27 23:35:01 -03:00
  • 005f552ea3 Some simplifications oobabooga 2023-03-27 23:29:52 -03:00
  • fde92048af Merge branch 'main' into catalpaaa-lora-and-model-dir oobabooga 2023-03-27 23:16:44 -03:00
  • 8a97f6ba29 corrections per the PR comments Alex "mcmonkey" Goodwin 2023-03-27 18:39:06 -07:00
  • 1e02f75f2b Bump accelerate from 0.17.1 to 0.18.0 dependabot[bot] 2023-03-28 01:19:34 +00:00
  • 37f11803e3 Merge pull request #603 from oobabooga/dependabot/pip/rwkv-0.7.1 oobabooga 2023-03-27 22:19:08 -03:00
  • 7fab7ea1b6 couple missed camelCases Alex "mcmonkey" Goodwin 2023-03-27 18:19:06 -07:00
  • 1fc7ff065d Bump bitsandbytes from 0.37.1 to 0.37.2 oobabooga 2023-03-27 22:18:52 -03:00
  • 6368dad7db Fix camelCase to snake_case to match repo format standard Alex "mcmonkey" Goodwin 2023-03-27 18:17:42 -07:00
  • 2f0571bfa4 Small style changes oobabooga 2023-03-27 21:24:39 -03:00
  • c2cad30772 Merge branch 'main' into mcmonkey4eva-add-train-lora-tab oobabooga 2023-03-27 21:05:44 -03:00
  • e9c0226b09 Bump rwkv from 0.7.0 to 0.7.1 dependabot[bot] 2023-03-27 21:05:35 +00:00
  • 9c96919121 Bump bitsandbytes from 0.37.1 to 0.37.2 dependabot[bot] 2023-03-27 21:05:19 +00:00
  • 9ec6c56680 Update stale.yml oobabooga 2023-03-27 15:12:43 -03:00
  • 9ced75746d add total time estimate Alex "mcmonkey" Goodwin 2023-03-27 10:57:27 -07:00
  • 641e1a09a7 Don't flash when selecting a new prompt oobabooga 2023-03-27 14:48:43 -03:00
  • 16ea4fc36d interrupt button Alex "mcmonkey" Goodwin 2023-03-27 10:43:01 -07:00
  • 8fc723fc95 initial progress tracker in UI Alex "mcmonkey" Goodwin 2023-03-27 10:25:08 -07:00
  • 48a6c9513e Merge pull request #572 from clusterfudge/issues/571 oobabooga 2023-03-27 14:06:38 -03:00
  • 268abd1cba Add some space in notebook mode oobabooga 2023-03-27 13:52:12 -03:00
  • c07bcd0850 add some outputs to indicate progress updates (sorta) Alex "mcmonkey" Goodwin 2023-03-27 09:41:06 -07:00
  • af65c12900 Change Stop button behavior oobabooga 2023-03-27 13:23:59 -03:00
  • addb9777f9 Increase size of GALACTICA equations oobabooga 2023-03-27 12:59:07 -03:00
  • 572bafcd24 Less verbose message oobabooga 2023-03-27 12:43:37 -03:00
  • 2afe1c13c1 move Training to before Interface mode Alex "mcmonkey" Goodwin 2023-03-27 08:32:32 -07:00
  • d911c22af9 use shared rows to make the LoRA Trainer interface a bit more compact / clean Alex "mcmonkey" Goodwin 2023-03-27 08:31:49 -07:00
  • 202e981d00 Make Generate/Stop buttons smaller in notebook mode oobabooga 2023-03-27 12:30:57 -03:00
  • e439228ed8 Merge branch 'main' into add-train-lora-tab Alex "mcmonkey" Goodwin 2023-03-27 08:21:19 -07:00
  • 8e2d94a5a1 Add saved promtps to gitignore oobabooga 2023-03-27 12:21:19 -03:00
  • 57345b8f30 Add prompt loading/saving menus + reorganize interface oobabooga 2023-03-27 12:16:37 -03:00
  • cb5dff0087 Update installer to use official micromamba url jllllll 2023-03-26 23:40:46 -05:00
  • 3dc61284d5 Handle unloading LoRA from dropdown menu icon oobabooga 2023-03-27 00:04:43 -03:00
  • b6e38e8b97 silero_tts streaming fix (#568 from Brawlence/silero_tts-fix) oobabooga 2023-03-26 23:59:07 -03:00
  • bdf85ffcf9 Remove explicit pytorch installation jllllll 2023-03-26 21:56:16 -05:00
  • af603a142a Unload models on request (#471 from Brawlence/main) oobabooga 2023-03-26 23:53:39 -03:00
  • 95c97e1747 Unload the model using the "Remove all" button oobabooga 2023-03-26 23:47:29 -03:00
  • e07c9e3093 Merge branch 'main' into Brawlence-main oobabooga 2023-03-26 23:40:51 -03:00
  • 511be06dcc Merge branch 'main' of github.com:oobabooga/text-generation-webui oobabooga 2023-03-26 22:21:29 -03:00
  • 1c77fdca4c Change notebook mode appearance oobabooga 2023-03-26 22:20:30 -03:00
  • 9ff6a538b6 Bump gradio version oobabooga 2023-03-26 22:11:19 -03:00
  • a04b7cf264 Merge pull request #585 from fkusche/also-download-markdown oobabooga 2023-03-26 14:51:23 -03:00
  • 19174842b8 Also download Markdown files Florian Kusche 2023-03-26 19:41:14 +02:00
  • 8222d32240 Merge pull request #565 from mcmonkey4eva/improve-gitignore oobabooga 2023-03-26 13:31:45 -03:00
  • 6f89242094 Remove temporary fix for GPTQ-for-LLaMa jllllll 2023-03-26 03:29:14 -05:00
  • 6dcfcf4fed Amended fix for GPTQ-for-LLaMa jllllll 2023-03-26 01:00:52 -05:00
  • 12baa0e84b Update for latest GPTQ-for-LLaMa jllllll 2023-03-26 00:46:07 -05:00
  • 247e8e5b79 Fix for issue in current GPTQ-for-LLaMa. jllllll 2023-03-26 00:24:00 -05:00
  • 49c10c5570 Add support for the latest GPTQ models with group-size (#530) oobabooga 2023-03-26 00:11:33 -03:00
  • 0bac80d9eb Potential fix for issues/571 Sean Fitzgerald 2023-03-25 13:08:45 -07:00
  • f1ba2196b1 make 'model' variables less ambiguous Alex "mcmonkey" Goodwin 2023-03-25 12:57:36 -07:00
  • 8da237223e document options better Alex "mcmonkey" Goodwin 2023-03-25 12:48:35 -07:00
  • 8134c4b334 add training/datsets to gitignore for #570 Alex "mcmonkey" Goodwin 2023-03-25 12:41:18 -07:00
  • 5c49a0dcd0 fix error from prepare call running twice in a row Alex "mcmonkey" Goodwin 2023-03-25 12:37:32 -07:00
  • 7bf601107c automatically strip empty data entries (for better alpaca dataset compat) Alex "mcmonkey" Goodwin 2023-03-25 12:28:46 -07:00
  • 566898a79a initial lora training tab Alex "mcmonkey" Goodwin 2023-03-25 12:08:26 -07:00
  • 1a1e420e65 Silero_tts streaming fix Φφ 2023-03-25 21:31:13 +03:00
  • 9ccf505ccd improve/simplify gitignore Alex "mcmonkey" Goodwin 2023-03-25 10:04:00 -07:00
  • 8c8e8b4450 Fix the early stopping callback #559 oobabooga 2023-03-25 12:35:52 -03:00
  • a1f12d607f Merge pull request #538 from Ph0rk0z/display-input-context oobabooga 2023-03-25 11:56:18 -03:00