Commit graph

  • 689f264979 Fix permission oobabooga 2023-08-12 21:14:37 -07:00
  • f7ad634634 Remove --chat flag oobabooga 2023-08-12 21:13:50 -07:00
  • a1a9ec895d Unify the 3 interface modes (#3554) oobabooga 2023-08-13 01:12:15 -03:00
  • bf70c19603 ctransformers: move thread and seed parameters (#3543) cal066 2023-08-13 03:04:03 +00:00
  • 73421b1fed Bump ctransformers wheel version (#3558) jllllll 2023-08-12 21:02:47 -05:00
  • 0230fa4e9c Add the --disable_exllama option for AutoGPTQ Chris Lefever 2023-08-12 02:26:58 -04:00
  • 0e05818266 Style changes oobabooga 2023-08-11 16:33:15 -07:00
  • 4c450e6b70 Update README.md oobabooga 2023-08-11 15:50:16 -03:00
  • 2f918ccf7c Remove unused parameter oobabooga 2023-08-11 11:15:22 -07:00
  • 28c8df337b Add repetition_penalty_range to ctransformers oobabooga 2023-08-11 11:02:56 -07:00
  • 7a4fcee069 Add ctransformers support (#3313) cal066 2023-08-11 17:41:33 +00:00
  • 8dbaa20ca8 Don't replace last reply with an empty message oobabooga 2023-08-10 13:14:48 -07:00
  • 949c92d7df Create README.md oobabooga 2023-08-10 14:32:40 -03:00
  • 0789554f65 Allow --lora to use an absolute path oobabooga 2023-08-10 09:54:28 -07:00
  • 3929971b66 Don't show oobabooga_llama-tokenizer in the model dropdown oobabooga 2023-08-10 10:01:12 -07:00
  • e12a1852d9 Add Vicuna-v1.5 detection (#3524) Gennadij 2023-08-10 19:42:24 +03:00
  • 28e3ce4317 Simplify GPTQ-for-LLaMa installation (#122) jllllll 2023-08-10 11:19:47 -05:00
  • e3d2ddd170 Streamline GPTQ-for-LLaMa support (#3526 from jllllll/gptqllama) oobabooga 2023-08-10 12:54:59 -03:00
  • c7f52bbdc1 Revert "Remove GPTQ-for-LLaMa monkey patch support" oobabooga 2023-08-10 08:39:41 -07:00
  • 16e2b117b4 Minor doc change oobabooga 2023-08-10 08:38:10 -07:00
  • d6765bebc4 Update installation documentation jllllll 2023-08-10 00:53:48 -05:00
  • d7ee4c2386 Remove unused import jllllll 2023-08-10 00:10:14 -05:00
  • e3d3565b2a Remove GPTQ-for-LLaMa monkey patch support jllllll 2023-08-09 23:59:04 -05:00
  • bee73cedbd Streamline GPTQ-for-LLaMa support jllllll 2023-08-09 23:42:34 -05:00
  • a3295dd666 Detect n_gqa and prompt template for wizardlm-70b oobabooga 2023-08-09 10:38:35 -07:00
  • a4e48cbdb6 Bump AutoGPTQ oobabooga 2023-08-09 08:31:17 -07:00
  • 7c1300fab5 Pin aiofiles version to fix statvfs issue oobabooga 2023-08-09 08:07:55 -07:00
  • 6c6a52aaad Change the filenames for caches and histories oobabooga 2023-08-09 07:47:19 -07:00
  • 2255349f19 Update README oobabooga 2023-08-09 05:46:25 -07:00
  • 5bfcfcfc5a Added the logic for starchat model series (#3185) GiganticPrime 2023-08-09 21:26:12 +09:00
  • fa4a948b38 Allow users to write one flag per line in CMD_FLAGS.txt oobabooga 2023-08-09 01:58:23 -03:00
  • d8fb506aff Add RoPE scaling support for transformers (including dynamic NTK) oobabooga 2023-08-08 21:24:28 -07:00
  • f4caaf337a Fix superbooga when using regenerate (#3362) Hans Raaf 2023-08-09 04:26:28 +02:00
  • 901b028d55 Add option for named cloudflare tunnels (#3364) Friedemann Lipphardt 2023-08-09 03:20:27 +02:00
  • 4ba30f6765 Add OpenChat template oobabooga 2023-08-08 14:10:04 -07:00
  • bf08b16b32 Fix disappearing profile picture bug oobabooga 2023-08-08 14:09:01 -07:00
  • 0e78f3b4d4 Fixed a typo in "rms_norm_eps", incorrectly set as n_gqa (#3494) Gennadij 2023-08-08 06:31:11 +03:00
  • 37fb719452 Increase the Context/Greeting boxes sizes oobabooga 2023-08-08 00:09:00 -03:00
  • 6d354bb50b Allow the webui to do multiple tasks simultaneously oobabooga 2023-08-07 23:57:25 -03:00
  • 584dd33424 Fix missing example_dialogue when uploading characters oobabooga 2023-08-07 23:44:59 -03:00
  • bbe4a29a25 Add back dark theme code oobabooga 2023-08-07 23:03:09 -03:00
  • 2d0634cd07 Bump transformers commit for positive prompts oobabooga 2023-08-07 08:57:19 -07:00
  • 3b27404865 Make dockerfile respect specified cuda version (#3474) Sam 2023-08-07 23:19:16 +10:00
  • 412f6ff9d3 Change alpha_value maximum and step oobabooga 2023-08-07 06:08:51 -07:00
  • a373c96d59 Fix a bug in modules/shared.py oobabooga 2023-08-06 20:36:35 -07:00
  • 2cf64474f2 Use chat_instruct_command in API (#3482) jllllll 2023-08-06 21:46:25 -05:00
  • 3d48933f27 Remove ancient deprecation warnings oobabooga 2023-08-06 18:58:59 -07:00
  • c237ce607e Move characters/instruction-following to instruction-templates oobabooga 2023-08-06 17:50:07 -07:00
  • 65aa11890f Refactor everything (#3481) oobabooga 2023-08-06 21:49:27 -03:00
  • d4b851bdc8 Credit turboderp oobabooga 2023-08-06 13:42:43 -07:00
  • 0af10ab49b Add Classifier Free Guidance (CFG) for Transformers/ExLlama (#3325) oobabooga 2023-08-06 17:22:48 -03:00
  • 5134878344 Fix chat message order (#3461) missionfloyd 2023-08-05 10:53:54 -06:00
  • 44f31731af Create logs dir if missing when saving history (#3462) jllllll 2023-08-05 11:47:16 -05:00
  • 5ee95d126c Bump exllama wheels to 0.0.10 (#3467) jllllll 2023-08-05 11:46:14 -05:00
  • 9dcb37e8d4 Fix: Mirostat fails on models split across multiple GPUs Forkoz 2023-08-05 16:45:47 +00:00
  • 9e17325207 Add CMD_FLAGS.txt functionality to WSL installer (#119) jllllll 2023-08-05 08:26:24 -05:00
  • 23055b21ee [Bug fix] Remove html tags from the Prompt sent to Stable Diffusion (#3151) SodaPrettyCold 2023-08-05 07:20:28 +08:00
  • 6e30f76ba5 Bump bitsandbytes to 0.41.1 (#3457) jllllll 2023-08-04 17:28:59 -05:00
  • 8df3cdfd51 Add SSL certificate support (#3453) oobabooga 2023-08-04 13:57:31 -03:00
  • ed57a79c6e Add back silero preview by @missionfloyd (#3446) oobabooga 2023-08-04 02:29:14 -03:00
  • 2336b75d92 Remove unnecessary chat.js (#3445) missionfloyd 2023-08-03 22:58:37 -06:00
  • 4b3384e353 Handle unfinished lists during markdown streaming oobabooga 2023-08-03 17:10:57 -07:00
  • f4005164f4 Fix llama.cpp truncation (#3400) Pete 2023-08-03 19:01:15 -04:00
  • 4e6dc6d99d Add Contributing guidelines oobabooga 2023-08-03 14:36:35 -07:00
  • 8f98268252 extensions/openai: include content-length for json replies (#3416) matatonic 2023-08-03 15:10:49 -04:00
  • 32e7cbb635 More models: +StableBeluga2 (#3415) matatonic 2023-08-03 15:02:54 -04:00
  • f61573bbde Add standalone Dockerfile for NVIDIA Jetson (#3336) Paul DeCarlo 2023-08-03 21:57:33 +03:00
  • d578baeb2c Use character settings from API properties if present (#3428) rafa-9 2023-08-03 14:56:40 -04:00
  • 601fc424cd Several improvements (#117) oobabooga 2023-08-03 14:39:46 -03:00
  • d93087adc3 Merge remote-tracking branch 'refs/remotes/origin/main' oobabooga 2023-08-03 08:14:10 -07:00
  • 1839dff763 Use Esc to Stop the generation oobabooga 2023-08-03 08:13:17 -07:00
  • 87dab03dc0 Add the --cpu option for llama.cpp to prevent CUDA from being used (#3432) oobabooga 2023-08-03 11:00:36 -03:00
  • 3e70bce576 Properly format exceptions in the UI oobabooga 2023-08-03 06:57:21 -07:00
  • 3390196a14 Add some javascript alerts for confirmations oobabooga 2023-08-02 22:13:57 -07:00
  • e074538b58 Revert "Make long_replies ban the eos token as well" oobabooga 2023-08-02 21:45:10 -07:00
  • 6bf9e855f8 Minor change oobabooga 2023-08-02 21:39:56 -07:00
  • 32c564509e Fix loading session in chat mode oobabooga 2023-08-02 21:13:16 -07:00
  • 4b6c1d3f08 CSS change oobabooga 2023-08-02 20:20:23 -07:00
  • 0e8f9354b5 Add direct download for session/chat history JSONs oobabooga 2023-08-02 18:50:13 -07:00
  • aca5679968 Properly fix broken gcc_linux-64 package (#115) jllllll 2023-08-02 21:39:07 -05:00
  • 32a2bbee4a Implement auto_max_new_tokens for ExLlama oobabooga 2023-08-02 11:01:29 -07:00
  • e931844fe2 Add auto_max_new_tokens parameter (#3419) oobabooga 2023-08-02 14:52:20 -03:00
  • 0d9932815c Improve TheEncrypted777 on mobile devices oobabooga 2023-08-02 08:45:14 -07:00
  • 6afc1a193b Add a scrollbar to notebook/default, improve chat scrollbar style (#3403) Pete 2023-08-02 11:02:36 -04:00
  • 6c521ce967 Make long_replies ban the eos token as well oobabooga 2023-08-01 18:47:49 -07:00
  • 9ae0eab989 extensions/openai: +Array input (batched), +Fixes (#3309) matatonic 2023-08-01 21:26:00 -04:00
  • 40038fdb82 add chat instruction config for BaiChuan model (#3332) CrazyShipOne 2023-08-02 09:25:20 +08:00
  • c8a59d79be Add a template for NewHope oobabooga 2023-08-01 13:27:29 -07:00
  • b53ed70a70 Make llamacpp_HF 6x faster oobabooga 2023-08-01 13:15:14 -07:00
  • 385229313f Increase the interface area a bit oobabooga 2023-08-01 09:41:57 -07:00
  • 8d46a8c50a Change the default chat style and the default preset oobabooga 2023-08-01 09:35:17 -07:00
  • 9773534181 Update Chat-mode.md oobabooga 2023-08-01 07:57:47 -07:00
  • 959feba602 When saving model settings, only save the settings for the current loader oobabooga 2023-08-01 06:10:09 -07:00
  • ebb4f22028 Change a comment oobabooga 2023-07-31 20:06:10 -07:00
  • 8e2217a029 Minor changes to the Parameters tab oobabooga 2023-07-31 19:55:11 -07:00
  • b2207f123b Update docs oobabooga 2023-07-31 19:20:33 -07:00
  • f094330df0 When saving a preset, only save params that differ from the defaults oobabooga 2023-07-31 19:13:29 -07:00
  • 84297d05c4 Add a "Filter by loader" menu to the Parameters tab oobabooga 2023-07-31 18:44:00 -07:00
  • abea8d9ad3 Make settings-template.yaml more readable oobabooga 2023-07-31 12:01:50 -07:00
  • 7de7b3d495 Fix newlines in exported character yamls oobabooga 2023-07-31 10:46:02 -07:00