Commit graph

  • 18f756ada6 Bump gradio from 3.24.0 to 3.24.1 (#746) dependabot[bot] 2023-04-03 14:29:37 -03:00
  • 7aab88bcc6 Give API extension access to all generate_reply parameters (#744) Niels Mündler 2023-04-03 18:31:12 +02:00
  • 9318e16ed5 Expand .gitignore oobabooga 2023-04-03 12:51:30 -03:00
  • 3012bdb5e0 Fix a label oobabooga 2023-04-03 12:20:53 -03:00
  • dcf61a8897 "character greeting" displayed and editable on the fly (#743) OWKenobi 2023-04-03 17:16:15 +02:00
  • 8b1f20aa04 Fix some old JSON characters not loading (#740) Alex "mcmonkey" Goodwin 2023-04-03 06:49:28 -07:00
  • 8b442305ac Rename another variable oobabooga 2023-04-03 01:15:20 -03:00
  • 08448fb637 Rename a variable oobabooga 2023-04-03 01:02:11 -03:00
  • 2a267011dc Use Path.stem for simplicity oobabooga 2023-04-03 00:54:56 -03:00
  • 9b4e9a98f0 Merge pull request #9 from jllllll/oobabooga-windows oobabooga 2023-04-03 00:31:14 -03:00
  • c86d3e9c74 Add -k flag to curl command jllllll 2023-04-02 21:28:04 -05:00
  • ea97303509 Apply dialogue format in all character fields not just example dialogue (#650) Alex "mcmonkey" Goodwin 2023-04-02 17:54:29 -07:00
  • 525f729b8e Update README.md oobabooga 2023-04-02 21:12:41 -03:00
  • 53084241b4 Update README.md oobabooga 2023-04-02 20:50:06 -03:00
  • 2157bb4319 New yaml character format (#337 from TheTerrasque/feature/yaml-characters) TheTerrasque 2023-04-03 01:34:25 +02:00
  • 7ce608d101 Merge pull request #732 from StefanDanielSchwarz/fix-verbose-(beam-search)-preset oobabooga 2023-04-02 19:38:11 -03:00
  • 34c3b4af6e Fix "Verbose (Beam Search)" preset SDS 2023-04-03 00:31:58 +02:00
  • 1a823aaeb5 Clear text input for chat (#715 from bmoconno/clear-chat-input) oobabooga 2023-04-02 18:08:25 -03:00
  • 0dc6fa038b Use gr.State() to store the user input oobabooga 2023-04-02 18:05:21 -03:00
  • 5f3f3faa96 Better handle CUDA out of memory errors in chat mode oobabooga 2023-04-02 17:48:00 -03:00
  • e3c348e42b Add .git oobabooga 2023-04-02 01:11:05 -03:00
  • b704fe7878 Use my fork of GPTQ-for-LLaMa for stability oobabooga 2023-04-02 01:10:22 -03:00
  • d0f9625f0b Clear text input for chat Brian O'Connor 2023-04-01 21:48:24 -04:00
  • b0890a7925 Add shared.is_chat() function oobabooga 2023-04-01 20:14:43 -03:00
  • b38ba230f4 Update download-model.py oobabooga 2023-04-01 15:03:24 -03:00
  • b6f817be45 Update README.md oobabooga 2023-04-01 14:54:10 -03:00
  • 88fa38ac01 Update README.md oobabooga 2023-04-01 14:49:03 -03:00
  • 526d5725db Update download-model.py oobabooga 2023-04-01 14:47:47 -03:00
  • 4b57bd0d99 Update README.md oobabooga 2023-04-01 14:38:04 -03:00
  • b53bec5a1f Update README.md oobabooga 2023-04-01 14:37:35 -03:00
  • 9160586c04 Update README.md oobabooga 2023-04-01 14:31:10 -03:00
  • 7ec11ae000 Update README.md oobabooga 2023-04-01 14:15:19 -03:00
  • b857f4655b Update shared.py oobabooga 2023-04-01 13:56:47 -03:00
  • 012f4f83b8 Update README.md oobabooga 2023-04-01 13:55:15 -03:00
  • fcda3f8776 Add also_return_rows to generate_chat_prompt oobabooga 2023-04-01 01:12:13 -03:00
  • 8c51b405e4 Progress towards generalizing Interface mode tab oobabooga 2023-03-31 23:41:10 -03:00
  • 23116b88ef Add support for resuming downloads (#654 from nikita-skakun/support-partial-downloads) oobabooga 2023-03-31 22:55:55 -03:00
  • 74462ac713 Don't override the metadata when checking the sha256sum oobabooga 2023-03-31 22:52:52 -03:00
  • 2c52310642 Add --threads flag for llama.cpp oobabooga 2023-03-31 21:18:05 -03:00
  • eeafd60713 Fix streaming oobabooga 2023-03-31 19:05:38 -03:00
  • 52065ae4cd Add repetition_penalty oobabooga 2023-03-31 19:01:34 -03:00
  • 2259143fec Fix llama.cpp with --no-stream oobabooga 2023-03-31 18:43:45 -03:00
  • 875de5d983 Update ggml template oobabooga 2023-03-31 17:57:31 -03:00
  • cbfe0b944a Update README.md oobabooga 2023-03-31 17:49:11 -03:00
  • 6a44f4aec6 Add support for downloading ggml files oobabooga 2023-03-31 17:33:10 -03:00
  • 3a47a602a3 Detect ggml*.bin files automatically oobabooga 2023-03-31 17:18:21 -03:00
  • 0aee7341d8 Properly count tokens/s for llama.cpp in chat mode oobabooga 2023-03-31 17:00:55 -03:00
  • 5c4e44b452 llama.cpp documentation oobabooga 2023-03-31 15:20:39 -03:00
  • 6fd70d0032 Add llama.cpp support (#447 from thomasantony/feature/llamacpp) oobabooga 2023-03-31 15:17:32 -03:00
  • a5c9b7d977 Bump llamacpp version oobabooga 2023-03-31 15:08:01 -03:00
  • ea3ba6fc73 Merge branch 'feature/llamacpp' of github.com:thomasantony/text-generation-webui into thomasantony-feature/llamacpp oobabooga 2023-03-31 14:45:53 -03:00
  • 09b0a3aafb Add repetition_penalty oobabooga 2023-03-31 14:45:17 -03:00
  • 4d98623041 Merge branch 'main' into feature/llamacpp oobabooga 2023-03-31 14:37:04 -03:00
  • 4c27562157 Minor changes oobabooga 2023-03-31 14:33:46 -03:00
  • 9d1dcf880a General improvements oobabooga 2023-03-31 14:27:01 -03:00
  • 770ff0efa9 Merge branch 'main' of github.com:oobabooga/text-generation-webui oobabooga 2023-03-31 12:22:22 -03:00
  • 1d1d9e40cd Add seed to settings oobabooga 2023-03-31 12:22:07 -03:00
  • daeab6bac7 Merge pull request #678 from mayaeary/fix/python3.8 oobabooga 2023-03-31 12:19:06 -03:00
  • 75465fa041 Merge pull request #6 from jllllll/oobabooga-windows oobabooga 2023-03-31 11:27:23 -03:00
  • 5a6f939f05 Change the preset here too oobabooga 2023-03-31 10:43:05 -03:00
  • b246d17513 Fix type object is not subscriptable Maya 2023-03-31 14:20:31 +03:00
  • b99bea3c69 Fixed reported header affecting resuming download Nikita Skakun 2023-03-30 23:11:59 -07:00
  • 3e1267af79 Merge pull request #673 from ye7iaserag/patch-1 oobabooga 2023-03-31 02:04:52 -03:00
  • 3b90d604d7 Sort the imports oobabooga 2023-03-31 02:01:48 -03:00
  • d28a5c9569 Remove unnecessary css oobabooga 2023-03-31 02:01:13 -03:00
  • ec093a5af7 Fix div alignment for long strings ye7iaserag 2023-03-31 06:54:24 +02:00
  • 92c7068daf Don't download if --check is specified oobabooga 2023-03-31 01:31:47 -03:00
  • 3737eafeaa Remove a border and allow more characters per pagination page oobabooga 2023-03-31 00:48:50 -03:00
  • fd72afd8e7 Increase the textbox sizes oobabooga 2023-03-31 00:43:00 -03:00
  • f27a66b014 Bump gradio version (make sure to update) oobabooga 2023-03-31 00:42:26 -03:00
  • 0cc89e7755 Checksum code now activated by --check flag. Nikita Skakun 2023-03-30 20:06:12 -07:00
  • f9940b79dc Implement character gallery using Dataset ye7iaserag 2023-03-31 04:56:49 +02:00
  • e4e3c9095d Add warning for long paths jllllll 2023-03-30 20:48:40 -05:00
  • 172035d2e1 Minor Correction jllllll 2023-03-30 20:44:56 -05:00
  • 0b4ee14edc Attempt to Improve Reliability jllllll 2023-03-30 20:04:16 -05:00
  • bb69e054a7 Add dummy file oobabooga 2023-03-30 21:08:50 -03:00
  • 85e4ec6e6b Download the cuda branch directly oobabooga 2023-03-30 18:22:48 -03:00
  • 78c0da4a18 Use the cuda branch of gptq-for-llama oobabooga 2023-03-30 18:04:05 -03:00
  • d4a9b5ea97 Remove redundant preset (see the plot in #587) oobabooga 2023-03-30 17:34:44 -03:00
  • d550c12a3e Fixed the bug with additional bytes. Nikita Skakun 2023-03-30 12:52:16 -07:00
  • 7fa5d96c22 Update to use new llamacpp API Thomas Antony 2023-03-29 21:20:22 +01:00
  • 79fa2b6d7e Add support for alpaca Thomas Antony 2023-03-19 21:30:24 -07:00
  • 8953a262cb Add llamacpp to requirements.txt Thomas Antony 2023-03-19 19:59:25 -07:00
  • a5f5736e74 Add to text_generation.py Thomas Antony 2023-03-19 19:51:43 -07:00
  • 7745faa7bb Add llamacpp to models.py Thomas Antony 2023-03-18 23:42:28 -07:00
  • 7a562481fa Initial version of llamacpp_model.py Thomas Antony 2023-03-18 23:42:10 -07:00
  • 53ab1e285d Update .gitignore Thomas Antony 2023-03-19 19:52:08 -07:00
  • 297ac051d9 Added sha256 validation of model files. Nikita Skakun 2023-03-30 02:34:19 -07:00
  • 8c590c2362 Added a 'clean' flag to not resume download. Nikita Skakun 2023-03-30 00:42:19 -07:00
  • e17af59261 Add support for resuming downloads Nikita Skakun 2023-03-30 00:21:34 -07:00
  • f0fdab08d3 Increase --chat height oobabooga 2023-03-30 01:02:11 -03:00
  • bd65940a48 Increase --chat box height oobabooga 2023-03-30 00:43:49 -03:00
  • 131753fcf5 Save the sha256sum of downloaded models oobabooga 2023-03-29 23:28:16 -03:00
  • a21e580782 Move an import oobabooga 2023-03-29 22:50:58 -03:00
  • 55755e27b9 Don't hardcode prompts in the settings dict/json oobabooga 2023-03-29 22:40:04 -03:00
  • 1cb9246160 Adapt to the new model names oobabooga 2023-03-29 21:47:36 -03:00
  • 0345e04249 Fix "Unknown argument(s): {'verbose': False}" oobabooga 2023-03-29 21:17:48 -03:00
  • 9104164297 Merge pull request #618 from nikita-skakun/optimize-download-model oobabooga 2023-03-29 20:54:19 -03:00
  • 37754164eb Move argparse oobabooga 2023-03-29 20:47:36 -03:00
  • 6403e72062 Merge branch 'main' into nikita-skakun-optimize-download-model oobabooga 2023-03-29 20:45:33 -03:00