Add mirostat parameters for llama.cpp (#2287)

oobabooga 2023-05-22 19:37:24 -03:00 committed by GitHub
parent ec7437f00a
commit c0fd7f3257
13 changed files with 80 additions and 15 deletions


@@ -0,0 +1,23 @@
# Generation parameters
For a description of the generation parameters provided by the transformers library, see: https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig
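As a rough illustration, several of these parameters can be bundled into a `GenerationConfig` object and passed to `model.generate()`. This is a minimal sketch only; the model name and parameter values below are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Placeholder model; substitute whatever model you have loaded.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A handful of common sampling parameters from GenerationConfig.
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    top_k=40,
    repetition_penalty=1.18,
    max_new_tokens=200,
)

inputs = tokenizer("Common sense questions and answers", return_tensors="pt")
output_ids = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```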
### llama.cpp
llama.cpp only uses the following parameters (see the usage sketch after this list):
* temperature
* top_p
* top_k
* repetition_penalty
* mirostat_mode
* mirostat_tau
* mirostat_eta
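
Assuming the llama-cpp-python bindings are used to load the model, these parameters map onto keyword arguments of the completion call roughly as sketched below. The model path and values are placeholders, and exact argument names may differ between versions (for example, the binding exposes the penalty as `repeat_penalty`):

```python
from llama_cpp import Llama

# Placeholder path to a quantized model file.
llm = Llama(model_path="models/your-model.ggml.q4_0.bin")

output = llm(
    "Q: What is Mirostat sampling? A:",
    max_tokens=200,
    temperature=0.7,
    top_p=0.9,
    top_k=40,
    repeat_penalty=1.18,   # shown as "repetition_penalty" in the UI
    mirostat_mode=2,       # 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0
    mirostat_tau=5.0,      # target entropy
    mirostat_eta=0.1,      # learning rate
)
print(output["choices"][0]["text"])
```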
### RWKV
RWKV only uses the following parameters (a general sampling sketch follows the list):
* temperature
* top_p
* top_k
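
For reference, here is a minimal, library-agnostic sketch of how temperature, top_k, and top_p are commonly applied to raw logits before sampling. It is illustrative only and not RWKV-specific code:

```python
import numpy as np

def sample_token(logits, temperature=0.7, top_k=40, top_p=0.9):
    """Sample one token id from raw logits using temperature, top-k, and top-p."""
    logits = np.asarray(logits, dtype=np.float64)

    # Temperature: higher values flatten the distribution, lower values sharpen it.
    logits = logits / max(temperature, 1e-8)

    # Top-k: keep only the k highest-scoring tokens.
    if 0 < top_k < logits.size:
        kth_best = np.sort(logits)[-top_k]
        logits[logits < kth_best] = -np.inf

    # Softmax over the remaining logits.
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()

    # Top-p (nucleus): keep the smallest set of tokens whose cumulative mass reaches top_p.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1
    keep = order[:cutoff]

    # Renormalize over the kept tokens and draw one token id.
    final_probs = np.zeros_like(probs)
    final_probs[keep] = probs[keep]
    final_probs /= final_probs.sum()
    return int(np.random.choice(len(probs), p=final_probs))
```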


@@ -7,6 +7,7 @@
* [Using LoRAs](Using-LoRAs.md)
* [llama.cpp models](llama.cpp-models.md)
* [RWKV model](RWKV-model.md)
* [Generation parameters](Generation-parameters.md)
* [Extensions](Extensions.md)
* [Chat mode](Chat-mode.md)
* [DeepSpeed](DeepSpeed.md)