From 3961f49524ac9b88f7572bf8790186a18bba4def Mon Sep 17 00:00:00 2001
From: practicaldreamer <78515588+practicaldreamer@users.noreply.github.com>
Date: Mon, 17 Apr 2023 08:46:37 -0500
Subject: [PATCH] Add note about --no-fused_mlp ignoring --gpu-memory (#1301)

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index abda3fa..7d5c469 100644
--- a/README.md
+++ b/README.md
@@ -238,7 +238,7 @@ Optionally, you can use the following command-line flags:
 | `--pre_layer PRE_LAYER` | GPTQ: The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. |
 | `--no-quant_attn` | GPTQ: Disable quant attention for triton. If you encounter incoherent results try disabling this. |
 | `--no-warmup_autotune` | GPTQ: Disable warmup autotune for triton. |
-| `--no-fused_mlp` | GPTQ: Disable fused mlp for triton. If you encounter "Unexpected mma -> mma layout conversion" try disabling this. |
+| `--no-fused_mlp` | GPTQ: Disable fused mlp for triton. If you encounter "Unexpected mma -> mma layout conversion" try disabling this. Disabling may also help model splitting for multi-GPU setups. |
 | `--monkey-patch` | GPTQ: Apply the monkey patch for using LoRAs with quantized models. |
 
 #### FlexGen