Commit graph

  • 57be2eecdf Update README.md oobabooga 2023-06-16 15:04:16 -03:00
  • 772d4080b2 Update llama.cpp-models.md for macOS (#2711) Meng-Yuan Huang 2023-06-16 11:00:24 +08:00
  • 646b0c889f AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP (#2648) Tom Jobbins 2023-06-16 03:59:54 +01:00
  • 909d8c6ae3 Bump transformers from 4.30.0 to 4.30.2 (#2695) dependabot[bot] 2023-06-14 19:56:28 -03:00
  • 2b9a6b9259 Merge remote-tracking branch 'refs/remotes/origin/main' oobabooga 2023-06-14 18:45:24 -03:00
  • 4d508cbe58 Add some checks to AutoGPTQ loader oobabooga 2023-06-14 18:44:43 -03:00
  • 56c19e623c Add LORA name instead of "default" in PeftModel (#2689) FartyPants 2023-06-14 17:29:42 -04:00
  • 134430bbe2 Minor change oobabooga 2023-06-14 11:34:29 -03:00
  • 474dc7355a Allow API requests to use parameter presets oobabooga 2023-06-13 20:34:35 -03:00
  • 8936160e54 Add WSL installer to README (thanks jllllll) oobabooga 2023-06-13 00:07:34 -03:00
  • c42f183d3f Installer for WSL (#78) jllllll 2023-06-12 22:04:15 -05:00
  • 9f150aedc3 A small UI change in Models menu (#2640) FartyPants 2023-06-12 00:24:44 -04:00
  • da5d9a28d8 Fix tabbed extensions showing up at the bottom of the UI oobabooga 2023-06-11 21:20:51 -03:00
  • ae5e2b3470 Reorganize a bit oobabooga 2023-06-11 19:50:20 -03:00
  • e471919e6d Make llava/minigpt-4 work with AutoGPTQ oobabooga 2023-06-11 17:52:23 -03:00
  • f4defde752 Add a menu for installing extensions oobabooga 2023-06-11 17:11:06 -03:00
  • 8e73806b20 Improve "Interface mode" appearance oobabooga 2023-06-11 15:29:45 -03:00
  • a06c953692 Minor style change oobabooga 2023-06-11 15:13:26 -03:00
  • ac122832f7 Make dropdown menus more similar to automatic1111 oobabooga 2023-06-11 14:20:16 -03:00
  • 8275dbc68c Update WSL-installation-guide.md (#2626) Amine Djeghri 2023-06-11 17:30:34 +02:00
  • 6133675e0f Add menus for saving presets/characters/instruction templates/prompts (#2621) oobabooga 2023-06-11 12:19:18 -03:00
  • ea0eabd266 Bump llama-cpp-python version oobabooga 2023-06-10 21:59:29 -03:00
  • ec2b5bae39 Merge pull request #2616 from oobabooga/dev oobabooga 2023-06-10 21:55:59 -03:00
  • b04e18d10c Add Mirostat v2 sampling to transformer models (#2571) brandonj60 2023-06-09 19:26:31 -05:00
  • aff3e04df4 Remove irrelevant docs oobabooga 2023-06-09 21:15:37 -03:00
  • d7db25dac9 Fix a permission oobabooga 2023-06-09 01:44:17 -03:00
  • d033c85cf9 Fix a permission oobabooga 2023-06-09 01:43:22 -03:00
  • 741afd74f6 Update requirements-minimal.txt oobabooga 2023-06-09 00:48:41 -03:00
  • c333e4c906 Add docs for performance optimizations oobabooga 2023-06-09 00:45:49 -03:00
  • aaf240a14c Merge pull request #2587 from oobabooga/dev oobabooga 2023-06-09 00:30:59 -03:00
  • c6552785af Minor cleanup oobabooga 2023-06-09 00:30:22 -03:00
  • 92b45cb3f5 Merge branch 'main' into dev oobabooga 2023-06-09 00:27:11 -03:00
  • 8a7a8343be Detect TheBloke_WizardLM-30B-GPTQ oobabooga 2023-06-09 00:26:34 -03:00
  • 0f8140e99d Bump transformers/accelerate/peft/autogptq oobabooga 2023-06-09 00:25:13 -03:00
  • ac40c59ac3 Added Guanaco-QLoRA to Instruct character (#2574) FartyPants 2023-06-08 11:24:32 -04:00
  • db2cbe7b5a Detect WizardLM-30B-V1.0 instruction format oobabooga 2023-06-08 11:41:06 -03:00
  • e0b43102e6 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev oobabooga 2023-06-08 11:35:23 -03:00
  • 7be6fe126b extensions/api: models api for blocking_api (updated) (#2539) matatonic 2023-06-08 10:34:36 -04:00
  • 240752617d Increase download timeout to 20s oobabooga 2023-06-08 11:16:38 -03:00
  • 084b006cfe Update LLaMA-model.md (#2460) zaypen 2023-06-08 02:34:50 +08:00
  • c05edfcdfc fix: reverse-proxied URI should end with 'chat', not 'generate' (#2556) dnobs 2023-06-06 20:08:04 -07:00
  • 878250d609 Merge branch 'main' into dev oobabooga 2023-06-06 19:43:53 -03:00
  • f55e85e28a Fix multimodal with model loaded through AutoGPTQ oobabooga 2023-06-06 19:42:40 -03:00
  • eb2601a8c3 Reorganize Parameters tab oobabooga 2023-06-06 14:51:02 -03:00
  • 3cc5ce3c42 Merge pull request #2551 from oobabooga/dev oobabooga 2023-06-06 14:40:52 -03:00
  • 6015616338 Style changes oobabooga 2023-06-06 13:06:05 -03:00
  • f040073ef1 Handle the case of older autogptq install oobabooga 2023-06-06 13:05:05 -03:00
  • 5d515eeb8c Bump llama-cpp-python wheel oobabooga 2023-06-06 13:01:15 -03:00
  • bc58dc40bd Fix a minor bug oobabooga 2023-06-06 12:57:13 -03:00
  • f06a1387f0 Reorganize Models tab oobabooga 2023-06-06 07:58:07 -03:00
  • d49d299b67 Change a message oobabooga 2023-06-06 07:54:56 -03:00
  • f9b8bed953 Remove folder oobabooga 2023-06-06 07:49:12 -03:00
  • 90fdb8edc6 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev oobabooga 2023-06-06 07:46:51 -03:00
  • 7ed1e35fbf Reorganize Parameters tab in chat mode oobabooga 2023-06-06 07:46:25 -03:00
  • 00b94847da Remove softprompt support oobabooga 2023-06-06 07:42:23 -03:00
  • 643c44e975 Add ngrok shared URL ingress support (#1944) bobzilla 2023-06-06 03:34:20 -07:00
  • ccb4c9f178 Add some padding to chat box oobabooga 2023-06-06 07:21:16 -03:00
  • 0aebc838a0 Don't save the history for 'None' character oobabooga 2023-06-06 07:21:07 -03:00
  • 9f215523e2 Remove some unused imports oobabooga 2023-06-06 07:05:32 -03:00
  • b9bc9665d9 Remove some extra space oobabooga 2023-06-06 07:01:37 -03:00
  • 177ab7912a Merge remote-tracking branch 'refs/remotes/origin/dev' into dev oobabooga 2023-06-06 07:01:00 -03:00
  • 0f0108ce34 Never load the history for default character oobabooga 2023-06-06 07:00:11 -03:00
  • ae25b21d61 Improve instruct style in dark mode oobabooga 2023-06-06 07:00:00 -03:00
  • 4a17a5db67 [extensions/openai] various fixes (#2533) matatonic 2023-06-06 00:43:04 -04:00
  • 97f3fa843f Bump llama-cpp-python from 0.1.56 to 0.1.57 (#2537) dependabot[bot] 2023-06-05 23:45:58 -03:00
  • 11f38b5c2b Add AutoGPTQ LoRA support oobabooga 2023-06-05 23:29:29 -03:00
  • 3a5cfe96f0 Increase chat_prompt_size_max oobabooga 2023-06-05 17:37:37 -03:00
  • 4e9937aa99 Bump gradio oobabooga 2023-06-05 17:29:21 -03:00
  • 53496ffa80 Create stale.yml oobabooga 2023-06-05 17:15:31 -03:00
  • 0377e385e0 Update .gitignore (#2504) pandego 2023-06-05 22:11:03 +02:00
  • 60bfd0b722 Merge pull request #2535 from oobabooga/dev oobabooga 2023-06-05 17:07:54 -03:00
  • eda224c92d Update README oobabooga 2023-06-05 17:04:09 -03:00
  • bef94b9ebb Update README oobabooga 2023-06-05 17:01:13 -03:00
  • 99d701994a Update GPTQ-models-(4-bit-mode).md oobabooga 2023-06-05 15:55:00 -03:00
  • f276d88546 Use AutoGPTQ by default for GPTQ models oobabooga 2023-06-05 15:41:48 -03:00
  • 632571a009 Update README oobabooga 2023-06-05 15:16:06 -03:00
  • 6a75bda419 Assign some 4096 seq lengths oobabooga 2023-06-05 12:07:52 -03:00
  • 9b0e95abeb Fix "regenerate" when "Start reply with" is set oobabooga 2023-06-05 11:56:03 -03:00
  • e61316ce0b Detect airoboros and Nous-Hermes oobabooga 2023-06-05 11:52:13 -03:00
  • 19f78684e6 Add "Start reply with" feature to chat mode oobabooga 2023-06-02 13:58:08 -03:00
  • f7b07c4705 Fix the missing Chinese character bug (#2497) GralchemOz 2023-06-03 00:45:41 +08:00
  • 28198bc15c Change some headers oobabooga 2023-06-02 11:28:43 -03:00
  • 5177cdf634 Change AutoGPTQ info oobabooga 2023-06-02 11:19:44 -03:00
  • 8e98633efd Add a description for chat_prompt_size oobabooga 2023-06-02 11:13:22 -03:00
  • 5a8162a46d Reorganize models tab oobabooga 2023-06-02 02:24:15 -03:00
  • d183c7d29e Fix streaming japanese/chinese characters oobabooga 2023-06-02 02:09:52 -03:00
  • 5216117a63 Fix MacOS incompatibility in requirements.txt (#2485) jllllll 2023-06-01 23:46:16 -05:00
  • 2f6631195a Add desc_act checkbox to the UI oobabooga 2023-06-02 01:45:46 -03:00
  • 9c066601f5 Extend AutoGPTQ support for any GPTQ model (#1668) LaaZa 2023-06-02 07:33:55 +03:00
  • b4ad060c1f Use cuda 11.7 instead of 11.8 oobabooga 2023-06-02 01:04:44 -03:00
  • d0aca83b53 Add AutoGPTQ wheels to requirements.txt oobabooga 2023-06-02 00:47:11 -03:00
  • f344ccdddb Add a template for bluemoon oobabooga 2023-06-01 14:42:12 -03:00
  • 522b01d051 Grammar oobabooga 2023-06-01 14:05:29 -03:00
  • 5540335819 Better way to detect if a model has been downloaded oobabooga 2023-06-01 14:01:19 -03:00
  • aa83fc21d4 Update Low-VRAM-guide.md oobabooga 2023-06-01 12:14:27 -03:00
  • ee99a87330 Update README.md oobabooga 2023-06-01 12:08:44 -03:00
  • a83f9aa65b Update shared.py oobabooga 2023-06-01 12:08:39 -03:00
  • 146505a16b Update README.md oobabooga 2023-06-01 12:04:58 -03:00
  • 756e3afbcc Update llama.cpp-models.md oobabooga 2023-06-01 12:04:31 -03:00
  • 3347395944 Update README.md oobabooga 2023-06-01 12:01:20 -03:00