diff --git a/docs/import.md b/docs/import.md
index 7abe39b2..f34f09ac 100644
--- a/docs/import.md
+++ b/docs/import.md
@@ -47,19 +47,13 @@ success
### Supported Quantizations
-
-Legacy Quantization
-
- `Q4_0`
- `Q4_1`
- `Q5_0`
- `Q5_1`
- `Q8_0`
-
-
-
-K-means Quantization
+#### K-means Quantizations
- `Q3_K_S`
- `Q3_K_M`
@@ -70,11 +64,6 @@ success
- `Q5_K_M`
- `Q6_K`
-
-
-> [!NOTE]
-> Activation-aware Weight Quantization (i.e. IQ) are not currently supported for automatic quantization however you can still import the quantized model into Ollama, see [Import GGUF](#import-gguf).
-
## Template Detection
> [!NOTE]