diff --git a/docs/import.md b/docs/import.md
index b8f087c7..6885830a 100644
--- a/docs/import.md
+++ b/docs/import.md
@@ -1,17 +1,6 @@
 # Import a model
 
-This guide walks through importing a PyTorch, Safetensors or GGUF model.
-
-## Supported models
-
-Ollama supports a set of model architectures, with support for more coming soon:
-
-- Llama & Mistral
-- Falcon & RW
-- GPT-NeoX
-- BigCode
-
-To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).
+This guide walks through importing a GGUF, PyTorch or Safetensors model.
 
 ## Importing (GGUF)
 
@@ -48,6 +37,17 @@ ollama run example "What is your favourite condiment?"
 
 ## Importing (PyTorch & Safetensors)
 
+### Supported models
+
+Ollama supports a set of model architectures, with support for more coming soon:
+
+- Llama & Mistral
+- Falcon & RW
+- GPT-NeoX
+- BigCode
+
+To view a model's architecture, check the `config.json` file in its HuggingFace repo. You should see an entry under `architectures` (e.g. `LlamaForCausalLM`).
+
 ### Step 1: Clone the HuggingFace repository (optional)
 
 If the model is currently hosted in a HuggingFace repository, first clone that repository to download the raw model.