From f1ef3f9947ecf0a63cd7544d3c2d26c2ff5e0915 Mon Sep 17 00:00:00 2001
From: Matt Williams
Date: Mon, 4 Dec 2023 20:58:10 -0800
Subject: [PATCH] remove mention of gpt-neox in import (#1381)

Signed-off-by: Matt Williams
---
 docs/import.md | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/docs/import.md b/docs/import.md
index db0a53cb..6c924892 100644
--- a/docs/import.md
+++ b/docs/import.md
@@ -43,7 +43,6 @@ Ollama supports a set of model architectures, with support for more coming soon:
 
 - Llama & Mistral
 - Falcon & RW
-- GPT-NeoX
 - BigCode
 
 To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).
@@ -184,9 +183,6 @@ python convert.py
 
 # FalconForCausalLM
 python convert-falcon-hf-to-gguf.py
 
-# GPTNeoXForCausalLM
-python convert-gptneox-hf-to-gguf.py
-
 # GPTBigCodeForCausalLM
 python convert-starcoder-hf-to-gguf.py
 ```
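The doc text kept by this patch tells users to open a model's `config.json` in its HuggingFace repo and look for the `architectures` entry. A minimal sketch of that check, assuming an inline `config.json` snippet in place of a downloaded file (the `architectures` field name follows HuggingFace's convention; the values here are illustrative):

```python
import json

# Hypothetical config.json contents, like those in a HuggingFace model repo.
# The "architectures" field names the model class the weights were saved from.
config_text = '''
{
  "architectures": ["LlamaForCausalLM"],
  "hidden_size": 4096
}
'''

config = json.loads(config_text)
architectures = config.get("architectures", [])
print(architectures)  # ['LlamaForCausalLM']
```

An architecture such as `LlamaForCausalLM` falls under the supported "Llama & Mistral" entry above, while `GPTNeoXForCausalLM` would not, which is what this patch reflects.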