Commit graph

  • 6d1e911577 Add support for logits processors in extensions (#3029) Morgan Schweers 2023-07-13 13:22:41 -07:00
  • 22341e948d Merge branch 'main' into dev oobabooga 2023-07-12 14:19:49 -07:00
  • 0e6295886d Fix lora download folder oobabooga 2023-07-12 14:19:33 -07:00
  • eb823fce96 Fix typo oobabooga 2023-07-12 13:55:19 -07:00
  • d0a626f32f Change reload screen color oobabooga 2023-07-12 13:54:43 -07:00
  • c592a9b740 Fix #3117 oobabooga 2023-07-12 13:33:44 -07:00
  • 6447b2eea6 Merge pull request #3116 from oobabooga/dev oobabooga 2023-07-12 15:55:40 -03:00
  • 2463d7c098 Spaces oobabooga 2023-07-12 11:35:43 -07:00
  • e202190c4f lint oobabooga 2023-07-12 11:33:25 -07:00
  • 9b55d3a9f9 More robust and error prone training (#3058) FartyPants 2023-07-12 14:29:43 -04:00
  • 30f37530d5 Add back .replace('\r', '') oobabooga 2023-07-12 09:52:20 -07:00
  • 987d0fe023 Fix: Fixed the tokenization process of a raw dataset and improved its efficiency (#3035) Fernando Tarin Morales 2023-07-13 00:05:37 +09:00
  • 3f19e94c93 Add Tensorboard/Weights and biases integration for training (#2624) kabachuha 2023-07-12 17:53:31 +03:00
  • 5d513eea22 Add ability to load all text files from a subdirectory for training (#1997) kizinfo 2023-07-12 17:44:30 +03:00
  • 73a0def4af Add Feature to Log Sample of Training Dataset for Inspection (#1711) practicaldreamer 2023-07-12 09:26:45 -05:00
  • b6ba68eda9 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev oobabooga 2023-07-12 07:19:34 -07:00
  • a17b78d334 Disable wandb during training oobabooga 2023-07-12 07:19:12 -07:00
  • eedb3bf023 Add low vram mode on llama cpp (#3076) Gabriel Pena 2023-07-12 11:05:13 -03:00
  • 180420d2c9 Fix send_pictures extension oobabooga 2023-07-11 20:56:01 -07:00
  • ad07839a7b Small bug, when arbitrary loading character.json that doesn't exist (#2643) original-subliminal-thought-criminal 2023-07-11 22:16:36 -05:00
  • d986c17c52 Chat history download creates more detailed file names (#3051) Axiom Wolf 2023-07-12 05:10:36 +02:00
  • d9fabdde40 Add context_instruct to API. Load default model instruction template … (#2688) atriantafy 2023-07-12 04:01:03 +01:00
  • 324e45b848 [Fixed] wbits and groupsize values from model not shown (#2977) Salvador E. Tropea 2023-07-11 23:27:38 -03:00
  • e3810dff40 Style changes oobabooga 2023-07-11 18:49:06 -07:00
  • bfafd07f44 Change a message oobabooga 2023-07-11 18:29:20 -07:00
  • a12dae51b9 Bump bitsandbytes oobabooga 2023-07-11 18:29:08 -07:00
  • 37bffb2e1a Add reference to new pipeline in multimodal readme (#2947) Keith Kjer 2023-07-11 15:04:15 -07:00
  • 1fc0b5041e substitu superboog Beatiful Soup Parser (#2996) Juliano Henriquez 2023-07-11 18:02:49 -04:00
  • ab044a5a44 Elevenlabs tts fixes (#2959) Salvador E. Tropea 2023-07-11 19:00:37 -03:00
  • 3708de2b1f respect model dir for downloads (#3077) (#3079) micsthepick 2023-07-12 07:55:46 +10:00
  • 3778816b8d models/config.yaml: +platypus/gplatty, +longchat, +vicuna-33b, +Redmond-Hermes-Coder, +wizardcoder, +more (#2928) matatonic 2023-07-11 17:53:48 -04:00
  • 3e9da5a27c Changed FormComponent to IOComponent (#3017) Ricardo Pinto 2023-07-11 22:52:16 +01:00
  • 3e7feb699c extensions/openai: Major openai extension updates & fixes (#3049) matatonic 2023-07-11 17:50:08 -04:00
  • 8db7e857b1 Add token authorization for downloading model (#3067) Ahmad Fahadh Ilyas 2023-07-12 04:48:08 +07:00
  • 61102899cd google flan T5 download fix (#3080) FartyPants 2023-07-11 17:46:59 -04:00
  • fdd596f98f Bump bitsandbytes Windows wheel (#3097) jllllll 2023-07-11 16:41:24 -05:00
  • 987d522b55 Fix API example for loading models (#3101) Vadim Peretokin 2023-07-11 23:40:55 +02:00
  • f4aa11cef6 Add default environment variable values to docker compose file (#3102) Josh XT 2023-07-11 17:38:26 -04:00
  • a81cdd1367 Bump cpp llama version (#3081) ofirkris 2023-07-11 01:36:15 +03:00
  • f8dbd7519b Bump exllama module version (#3087) jllllll 2023-07-10 17:35:59 -05:00
  • c7058afb40 Add new possible bin file name regex (#3070) tianchen zhong 2023-07-09 13:22:56 -07:00
  • 161d984e80 Bump llama-cpp-python version (#3072) ofirkris 2023-07-09 23:22:24 +03:00
  • 463aac2d65 [Added] google_translate activate param (#2961) Salvador E. Tropea 2023-07-09 01:08:20 -03:00
  • 74ea7522a0 Lora fixes for AutoGPTQ (#2818) Forkoz 2023-07-09 04:03:43 +00:00
  • 70b088843d fix for issue #2475: Streaming api deadlock (#3048) Chris Rude 2023-07-08 19:21:20 -07:00
  • 5ac4e4da8b Make --model work with argument like models/folder_name oobabooga 2023-07-08 10:22:54 -07:00
  • acf24ebb49 Whisper_stt params for model, language, and auto_submit (#3031) Brandon McClure 2023-07-07 17:54:53 -06:00
  • 79679b3cfd Pin fastapi version (for #3042) oobabooga 2023-07-07 16:40:57 -07:00
  • bb79037ebd Fix wrong pytorch version on Linux+CPU oobabooga 2023-07-07 20:40:31 -03:00
  • 564a8c507f Don't launch chat mode by default oobabooga 2023-07-07 13:32:11 -03:00
  • b6643e5039 Add decode functions to llama.cpp/exllama oobabooga 2023-07-07 09:11:30 -07:00
  • 1ba2e88551 Add truncation to exllama oobabooga 2023-07-07 09:09:23 -07:00
  • c21b73ff37 Minor change to ui.py oobabooga 2023-07-07 09:09:14 -07:00
  • de994331a4 Merge remote-tracking branch 'refs/remotes/origin/main' oobabooga 2023-07-06 22:25:43 -07:00
  • 9aee1064a3 Block a cloudfare request oobabooga 2023-07-06 22:24:52 -07:00
  • d7e14e1f78 Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits (#3036) Fernando Tarin Morales 2023-07-07 14:24:07 +09:00
  • 1f540fa4f8 Added the format to be able to finetune Vicuna1.1 models (#3037) Fernando Tarin Morales 2023-07-07 14:22:39 +09:00
  • ff45317032 Update models.py (#3020) Xiaojian "JJ" Deng 2023-07-05 20:40:43 -04:00
  • b67c362735 Bump llama-cpp-python (#3011) ofirkris 2023-07-05 17:33:28 +03:00
  • 88a747b5b9 fix: Error when downloading model from UI (#3014) jeckyhl 2023-07-05 16:27:29 +02:00
  • e0a50fb77a Merge pull request #2922 from Honkware/main oobabooga 2023-07-04 23:47:21 -03:00
  • 8705eba830 Remove universal llama tokenizer support oobabooga 2023-07-04 19:43:19 -07:00
  • 84d6c93d0d Merge branch 'main' into Honkware-main oobabooga 2023-07-04 18:50:07 -07:00
  • 31c297d7e0 Various changes oobabooga 2023-07-04 18:50:01 -07:00
  • be4582be40 Support specify retry times in download-model.py (#2908) AN Long 2023-07-05 09:26:30 +08:00
  • 70a4d5dbcf Update chat API (fixes #3006) oobabooga 2023-07-04 17:36:47 -07:00
  • 333075e726 Fix #3003 oobabooga 2023-07-04 11:38:35 -03:00
  • 40c5722499 Fix #2998 oobabooga 2023-07-04 11:35:25 -03:00
  • 463ddfffd0 Fix start_with oobabooga 2023-07-03 23:32:02 -07:00
  • 55457549cd Add information about presets to the UI oobabooga 2023-07-03 22:39:01 -07:00
  • 373555c4fb Fix loading some histories (thanks kaiokendev) oobabooga 2023-07-03 22:19:28 -07:00
  • 10c8c197bf Add Support for Static NTK RoPE scaling for exllama/exllama_hf (#2955) Panchovix 2023-07-04 00:13:16 -04:00
  • 1610d5ffb2 Bump exllama module to 0.0.5 (#2993) jllllll 2023-07-03 22:15:55 -05:00
  • eb6112d5a2 Update server.py - clear LORA after reload (#2952) FartyPants 2023-07-03 23:13:38 -04:00
  • 7e8340b14d Make greetings appear in --multi-user mode oobabooga 2023-07-03 20:08:14 -07:00
  • 4b1804a438 Implement sessions + add basic multi-user support (#2991) oobabooga 2023-07-04 00:03:30 -03:00
  • 1f8cae14f9 Update training.py - correct use of lora_names (#2988) FartyPants 2023-07-03 16:41:18 -04:00
  • c23c88ee4c Update LoRA.py - avoid potential error (#2953) FartyPants 2023-07-03 16:40:22 -04:00
  • 33f56fd41d Update models.py to clear LORA names after unload (#2951) FartyPants 2023-07-03 16:39:06 -04:00
  • 48b11f9c5b Training: added trainable parameters info (#2944) FartyPants 2023-07-03 16:38:36 -04:00
  • 847f70b694 Update html_generator.py (#2954) Turamarth14 2023-07-02 06:43:58 +02:00
  • 3c076c3c80 Disable half2 for ExLlama when using HIP (#2912) ardfork 2023-06-29 18:03:16 +00:00
  • ac0f96e785 Some more character import tweaks. (#2921) missionfloyd 2023-06-29 11:56:25 -06:00
  • 5d2a8b31be Improve Parameters tab UI oobabooga 2023-06-29 14:33:47 -03:00
  • 79db629665 Minor bug fix oobabooga 2023-06-29 13:53:06 -03:00
  • 3443219cbc Add repetition penalty range parameter to transformers (#2916) oobabooga 2023-06-29 13:40:13 -03:00
  • b9a3d28177 Merge branch 'main' of https://github.com/Honkware/text-generation-webui Honkware 2023-06-29 01:33:00 -05:00
  • 3147f0b8f8 xgen config Honkware 2023-06-29 01:32:53 -05:00
  • 0a6a498383 Load xgen tokenizer Honkware 2023-06-29 01:32:44 -05:00
  • 1d03387f74 Xgen instruction template Honkware 2023-06-29 01:31:33 -05:00
  • c6cae106e7 Bump llama-cpp-python oobabooga 2023-06-28 18:14:45 -03:00
  • 20740ab16e Revert "Fix exllama_hf gibbersh above 2048 context, and works >5000 context. (#2913)" oobabooga 2023-06-28 18:10:34 -03:00
  • 7b048dcf67 Bump exllama module version to 0.0.4 (#2915) jllllll 2023-06-28 16:09:58 -05:00
  • 37a16d23a7 Fix exllama_hf gibbersh above 2048 context, and works >5000 context. (#2913) Panchovix 2023-06-28 11:36:07 -04:00
  • 63770c0643 Update docs/Extensions.md oobabooga 2023-06-27 22:25:05 -03:00
  • da0ea9e0f3 set +landmark, +superhot-8k to 8k length (#2903) matatonic 2023-06-27 21:05:52 -04:00
  • 5008daa0ff Add exception handler to load_checkpoint() (#2904) missionfloyd 2023-06-27 19:00:29 -06:00
  • c95009d2bd Merge remote-tracking branch 'refs/remotes/origin/main' oobabooga 2023-06-27 18:48:17 -03:00
  • 67a83f3ad9 Use DPM++ 2M Karras for Stable Diffusion oobabooga 2023-06-27 18:47:35 -03:00
  • ab1998146b Training update - backup the existing adapter before training on top of it (#2902) FartyPants 2023-06-27 17:24:04 -04:00