remove mention of gpt-neox in import (#1381)
Signed-off-by: Matt Williams <m@technovangelist.com>
parent bf704423c5
commit f1ef3f9947
1 changed file with 0 additions and 4 deletions
@@ -43,7 +43,6 @@ Ollama supports a set of model architectures, with support for more coming soon:
 
 - Llama & Mistral
 - Falcon & RW
-- GPT-NeoX
 - BigCode
 
 To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).
@@ -184,9 +183,6 @@ python convert.py <path to model directory>
 # FalconForCausalLM
 python convert-falcon-hf-to-gguf.py <path to model directory>
 
-# GPTNeoXForCausalLM
-python convert-gptneox-hf-to-gguf.py <path to model directory>
-
 # GPTBigCodeForCausalLM
 python convert-starcoder-hf-to-gguf.py <path to model directory>
 ```
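For context, the docs touched by this diff describe a per-architecture dispatch: read the `architectures` entry from a model's HuggingFace `config.json`, then run the matching converter script. A minimal sketch of that lookup, assuming the script names listed in the README after this change (the sample config string is illustrative, and pairing `LlamaForCausalLM` with `convert.py` is inferred from the hunk's context line):

```python
import json

# Converter scripts per architecture, as listed in the docs after this
# commit (the GPTNeoXForCausalLM entry has been removed).
CONVERTERS = {
    "LlamaForCausalLM": "convert.py",
    "FalconForCausalLM": "convert-falcon-hf-to-gguf.py",
    "GPTBigCodeForCausalLM": "convert-starcoder-hf-to-gguf.py",
}

def pick_converter(config_json: str) -> str:
    """Read `architectures` from a config.json string and return the
    matching converter script, per the docs' per-architecture list."""
    arch = json.loads(config_json)["architectures"][0]
    return CONVERTERS[arch]

# Hypothetical config.json contents for a Llama-family model.
sample = '{"architectures": ["LlamaForCausalLM"], "model_type": "llama"}'
print(pick_converter(sample))  # convert.py
```

A `GPTNeoXForCausalLM` entry would now raise a `KeyError`, which mirrors what this commit does: GPT-NeoX is no longer an advertised architecture.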