# Ollama

Create, run, and share self-contained large language models (LLMs). Ollama bundles a model's weights, configuration, prompts, and more into self-contained packages that run anywhere.

> Note: Ollama is in early preview. Please report any issues you find.

## Download

- [Download](https://ollama.ai/download) for macOS on Apple Silicon (Intel coming soon)
- Download for Windows and Linux (coming soon)
- Build [from source](#building)

## Examples

### Quickstart

```
ollama run llama2
>>> hi
Hello! How can I help you today?
```

### Creating a custom model

Create a `Modelfile`:

```
FROM llama2
PROMPT """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.

User: {{ .Prompt }}
Mario:
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

## Model library

Ollama includes a library of open-source, pre-trained models. More models are coming soon. You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

| Model                    | Parameters | Size  | Download                    |
| ------------------------ | ---------- | ----- | --------------------------- |
| Llama2                   | 7B         | 3.8GB | `ollama pull llama2`        |
| Llama2 13B               | 13B        | 7.3GB | `ollama pull llama2:13b`    |
| Orca Mini                | 3B         | 1.9GB | `ollama pull orca`          |
| Vicuna                   | 7B         | 3.8GB | `ollama pull vicuna`        |
| Nous-Hermes              | 13B        | 7.3GB | `ollama pull nous-hermes`   |
| Wizard Vicuna Uncensored | 13B        | 7.3GB | `ollama pull wizard-vicuna` |

## Building

```
go build .
```

To run it, start the server:

```
./ollama serve &
```

Finally, run a model!

```
./ollama run llama2
```
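Putting the pieces together, a full session with a locally built binary might look something like the sketch below. It only combines the build, custom model, and run steps shown above, and assumes the `Modelfile` from the earlier example is in the current directory:

```
# build the ollama binary
go build .

# start the server in the background
./ollama serve &

# create the custom "mario" model from the Modelfile above
./ollama create mario -f ./Modelfile

# chat with the custom model
./ollama run mario
```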