Commit graph

  • 9c77ab4fc2 Improve some warnings oobabooga 2023-05-03 22:06:46 -03:00
  • 057b1b2978 Add credits oobabooga 2023-05-03 21:49:55 -03:00
  • 95d04d6a8d Better warning messages oobabooga 2023-05-03 21:43:17 -03:00
  • 0a48b29cd8 Prevent websocket disconnection on the client side oobabooga 2023-05-03 20:44:30 -03:00
  • 4bf7253ec5 Fix typing bug in api oobabooga 2023-05-03 19:27:20 -03:00
  • d6410a1b36 Bump recommended monkey patch commit oobabooga 2023-05-03 14:49:25 -03:00
  • 60be76f0fc Revert gradio bump (gallery is broken) oobabooga 2023-05-03 11:53:30 -03:00
  • 4883e20fa7 Fix openai extension script.py - TypeError: '_Environ' object is not callable (#1753) Thireus ☠ 2023-05-03 13:51:49 +01:00
  • f54256e348 Rename no_mmap to no-mmap oobabooga 2023-05-03 09:50:31 -03:00
  • dec31af910 Create .gitignore (#43) Roberts Slisans 2023-05-03 05:47:19 +03:00
  • 24c5ba2b9c Fixed error when $OS_ARCH returns aarch64 (#45) Semih Aslan 2023-05-03 02:47:03 +00:00
  • 875da16b7b Minor CSS improvements in chat mode oobabooga 2023-05-02 23:38:51 -03:00
  • e3968f7dd0 Fix Training Pad Token (#1678) practicaldreamer 2023-05-02 21:16:08 -05:00
  • 80c2f25131 LLaVA: small fixes (#1664) Wojtab 2023-05-03 04:12:22 +02:00
  • c31b0f15a7 Remove some spaces oobabooga 2023-05-02 23:07:07 -03:00
  • 320fcfde4e Style/pep8 improvements oobabooga 2023-05-02 23:05:38 -03:00
  • ecd79caa68 Update Extensions.md oobabooga 2023-05-02 22:52:32 -03:00
  • 7ac41b87df add openai compatible api (#1475) matatonic 2023-05-02 21:49:53 -04:00
  • 4e09df4034 Only show extension in UI if it has an ui() function oobabooga 2023-05-02 19:20:02 -03:00
  • d016c38640 Bump gradio version oobabooga 2023-05-02 19:19:33 -03:00
  • 88cdf6ed3d Prevent websocket from disconnecting oobabooga 2023-05-02 19:03:19 -03:00
  • fbcd32988e added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649) Ahmed Said 2023-05-03 00:25:28 +03:00
  • 4babb22f84 Fix/Improve a bunch of things (#42) Blake Wyatt 2023-05-02 11:28:20 -04:00
  • 2f1a2846d1 Verbose should always print special tokens in input (#1707) Carl Kenner 2023-05-02 13:54:56 +09:30
  • 0df0b2d0f9 optimize stopping strings processing (#1625) Alex "mcmonkey" Goodwin 2023-05-01 21:21:54 -07:00
  • e6a78c00f2 Update Docker.md oobabooga 2023-05-02 00:51:10 -03:00
  • 3c67fc0362 Allow groupsize 1024, needed for larger models eg 30B to lower VRAM usage (#1660) Tom Jobbins 2023-05-02 04:46:26 +01:00
  • 78bd4d3a5c Update LLaMA-model.md (#1700) Lawrence M Stewart 2023-05-01 20:44:09 -07:00
  • f659415170 fixed variable name "context" to "prompt" (#1716) Dhaladom 2023-05-02 05:43:40 +02:00
  • 280c2f285f Bump safetensors from 0.3.0 to 0.3.1 (#1720) dependabot[bot] 2023-05-02 00:42:39 -03:00
  • 56b13d5d48 Bump llama-cpp-python version oobabooga 2023-05-02 00:41:54 -03:00
  • ee68ec9079 Update folder produced by download-model (#1601) Lőrinc Pap 2023-04-27 17:03:02 +02:00
  • 91745f63c3 Use Vicuna-v0 by default for Vicuna models oobabooga 2023-04-26 17:45:38 -03:00
  • 93e5c066ae Update RWKV Raven template oobabooga 2023-04-26 17:31:03 -03:00
  • c83210c460 Move the rstrips oobabooga 2023-04-26 17:17:22 -03:00
  • 1d8b8222e9 Revert #1579, apply the proper fix oobabooga 2023-04-26 16:47:50 -03:00
  • a941c19337 Fixing Vicuna text generation (#1579) TiagoGF 2023-04-26 15:20:27 -04:00
  • d87ca8f2af LLaVA fixes oobabooga 2023-04-26 03:47:34 -03:00
  • 9c2e7c0fab Fix path on models.py oobabooga 2023-04-26 03:29:09 -03:00
  • a777c058af Precise prompts for instruct mode oobabooga 2023-04-26 03:21:53 -03:00
  • a8409426d7 Fix bug in models.py oobabooga 2023-04-26 01:55:40 -03:00
  • 4c491aa142 Add Alpaca prompt with Input field oobabooga 2023-04-25 23:50:32 -03:00
  • 68ed73dd89 Make API extension print its exceptions oobabooga 2023-04-25 23:23:47 -03:00
  • f642135517 Make universal tokenizer, xformers, sdp-attention apply to monkey patch oobabooga 2023-04-25 23:18:11 -03:00
  • f39c99fa14 Load more than one LoRA with --lora, fix a bug oobabooga 2023-04-25 22:58:48 -03:00
  • 15940e762e Fix missing initial space for LlamaTokenizer oobabooga 2023-04-25 22:47:23 -03:00
  • 92cdb4f22b Seq2Seq support (including FLAN-T5) (#1535) Vincent Brouwers 2023-04-26 03:39:04 +02:00
  • 95aa43b9c2 Update LLaMA download docs USBhost 2023-04-25 19:28:15 -05:00
  • 312cb7dda6 LoRA trainer improvements part 5 (#1546) Alex "mcmonkey" Goodwin 2023-04-25 17:27:30 -07:00
  • 65beb51b0b fix returned dtypes for LLaVA (#1547) Wojtab 2023-04-26 02:25:34 +02:00
  • 9b272bc8e5 Monkey patch fixes oobabooga 2023-04-25 21:20:26 -03:00
  • da812600f4 Apply settings regardless of setup() function oobabooga 2023-04-25 01:16:23 -03:00
  • ebca3f86d5 Apply the settings for extensions after import, but before setup() (#1484) da3dsoul 2023-04-24 23:23:11 -04:00
  • b0ce750d4e Add spaces oobabooga 2023-04-25 00:10:21 -03:00
  • 1a0c12c6f2 Refactor text-generation.py a bit oobabooga 2023-04-24 19:24:12 -03:00
  • bcd5786a47 Add files via upload oobabooga 2023-04-24 16:53:04 -03:00
  • d66059d95a Update INSTRUCTIONS.TXT oobabooga 2023-04-24 16:50:03 -03:00
  • a4f6724b88 Add a comment oobabooga 2023-04-24 16:47:22 -03:00
  • 9a8487097b Remove --auto-devices oobabooga 2023-04-24 16:43:52 -03:00
  • 2f4f124132 Remove obsolete function oobabooga 2023-04-24 13:27:24 -03:00
  • b6af2e56a2 Add --character flag, add character to settings.json oobabooga 2023-04-24 13:19:42 -03:00
  • 0c32ae27cc Only load the default history if it's empty oobabooga 2023-04-24 11:50:51 -03:00
  • c86e9a3372 fix websocket batching (#1511) MajdajkD 2023-04-24 08:51:32 +02:00
  • 78d1977ebf add n_batch support for llama.cpp (#1115) eiery 2023-04-24 02:46:18 -04:00
  • 2f6e2ddeac Bump llama-cpp-python version oobabooga 2023-04-24 03:42:03 -03:00
  • caaa556159 Move extensions block definition to the bottom oobabooga 2023-04-24 03:30:35 -03:00
  • b1ee674d75 Make interface state (mostly) persistent on page reload oobabooga 2023-04-24 03:05:47 -03:00
  • 47809e28aa Minor changes oobabooga 2023-04-24 01:04:48 -03:00
  • 435f8cc0e7 Simplify some chat functions oobabooga 2023-04-24 00:47:40 -03:00
  • 04b98a8485 Fix Continue for LLaVA (#1507) Wojtab 2023-04-24 03:58:15 +02:00
  • 12212cf6be LLaVA support (#1487) Wojtab 2023-04-24 01:32:22 +02:00
  • 9197d3fec8 Update Extensions.md oobabooga 2023-04-23 16:11:17 -03:00
  • 654933c634 New universal API with streaming/blocking endpoints (#990) Andy Salerno 2023-04-23 11:52:43 -07:00
  • dfbb18610f Update INSTRUCTIONS.TXT oobabooga 2023-04-23 12:58:14 -03:00
  • 459e725af9 Lora trainer docs (#1493) Alex "mcmonkey" Goodwin 2023-04-23 08:54:41 -07:00
  • 7ff645899e Fix bug in api extension oobabooga 2023-04-22 17:33:36 -03:00
  • b992c9236a Prevent API extension responses from getting cut off with --chat enabled (#1467) AICatgirls 2023-04-22 12:06:43 -07:00
  • c0b5c09860 Minor change oobabooga 2023-04-22 15:15:31 -03:00
  • 47666c4d00 Update GPTQ-models-(4-bit-mode).md oobabooga 2023-04-22 15:12:14 -03:00
  • fcb594b90e Don't require llama.cpp models to be placed in subfolders oobabooga 2023-04-22 14:56:48 -03:00
  • 06b6ff6c2e Update GPTQ-models-(4-bit-mode).md oobabooga 2023-04-22 12:49:00 -03:00
  • 2c6d43e60f Update GPTQ-models-(4-bit-mode).md oobabooga 2023-04-22 12:48:20 -03:00
  • 7438f4f6ba Change GPTQ triton default settings oobabooga 2023-04-22 12:27:30 -03:00
  • e03b873460 Updating Using-LoRAs.md doc to clarify resuming training (#1474) InconsolableCellist 2023-04-22 00:35:36 -06:00
  • fe02281477 Update README.md oobabooga 2023-04-22 03:05:00 -03:00
  • ef40b4e862 Update README.md oobabooga 2023-04-22 03:03:39 -03:00
  • 408e172ad9 Rename docker/README.md to docs/Docker.md oobabooga 2023-04-22 03:03:05 -03:00
  • 4d9ae44efd Update Spell-book.md oobabooga 2023-04-22 02:53:52 -03:00
  • 9508f207ba Update Using-LoRAs.md oobabooga 2023-04-22 02:53:01 -03:00
  • 6d4f131d0a Update Low-VRAM-guide.md oobabooga 2023-04-22 02:50:35 -03:00
  • f5c36cca40 Update LLaMA-model.md oobabooga 2023-04-22 02:49:54 -03:00
  • 038fa3eb39 Update README.md oobabooga 2023-04-22 02:46:07 -03:00
  • b5e5b9aeae Delete Home.md oobabooga 2023-04-22 02:40:20 -03:00
  • fe6e9ea986 Update README.md oobabooga 2023-04-22 02:40:08 -03:00
  • 80ef7c7bcb Add files via upload oobabooga 2023-04-22 02:34:13 -03:00
  • 25b433990a Create README.md oobabooga 2023-04-22 02:33:32 -03:00
  • 505c2c73e8 Update README.md oobabooga 2023-04-22 00:11:27 -03:00
  • 143e88694d SD_api_pictures: Modefix, +hires options, UI layout change (#1400) Φφ 2023-04-21 23:49:18 +03:00
  • 2dca8bb25e Sort imports oobabooga 2023-04-21 17:20:59 -03:00
  • c238ba9532 Add a 'Count tokens' button oobabooga 2023-04-21 17:18:34 -03:00