From 146505a16b8b8628615470bcd426b24b3c23c13e Mon Sep 17 00:00:00 2001
From: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date: Thu, 1 Jun 2023 12:04:58 -0300
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 12adca6..4dcfa1d 100644
--- a/README.md
+++ b/README.md
@@ -105,7 +105,7 @@ To use GPTQ models, the additional installation steps below are necessary:
 
 #### llama.cpp with GPU acceleration
 
-Requires the additional compilation step described here: [GPU offloading](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md#gpu-offloading).
+Requires the additional compilation step described here: [GPU acceleration](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md#gpu-acceleration).
 
 #### bitsandbytes