# Ollama
[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)
> Note: Ollama is in early preview. Please report any issues you find.
Run, create, and share large language models (LLMs).
## Download
- [Download](https://ollama.ai/download) for macOS on Apple Silicon (Intel coming soon)
- Download for Windows and Linux (coming soon)
- Build [from source](#building)
## Quickstart
To run and chat with [Llama 2](https://ai.meta.com/llama), the new model from Meta:
```
ollama run llama2
```
## Model library
`ollama` includes a library of open-source models:
| Model | Parameters | Size | Download |
| ------------------------ | ---------- | ----- | --------------------------- |
| Llama 2                  | 7B         | 3.8GB | `ollama pull llama2`        |
| Llama 2 13B              | 13B        | 7.3GB | `ollama pull llama2:13b`    |
| Orca Mini | 3B | 1.9GB | `ollama pull orca` |
| Vicuna | 7B | 3.8GB | `ollama pull vicuna` |
| Nous-Hermes | 13B | 7.3GB | `ollama pull nous-hermes` |
| Wizard Vicuna Uncensored | 13B | 7.3GB | `ollama pull wizard-vicuna` |
> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
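Models that come in multiple sizes are addressed by tag, as in `llama2:13b` above. For example, to download and chat with the 13B variant of Llama 2:
```
ollama pull llama2:13b
ollama run llama2:13b
```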
## Examples
### Run a model
```
ollama run llama2
>>> hi
Hello! How can I help you today?
```
### Create a custom model
Pull a base model:
```
ollama pull llama2
```
Create a `Modelfile`:
```
FROM llama2
# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1
# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```
Next, create and run the model:
```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```
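A `Modelfile` can set runtime parameters beyond `temperature`. As a sketch (the parameter names here are taken from the [Modelfile documentation](./docs/modelfile.md); see that document for the full supported set and defaults):
```
FROM llama2
# size of the context window, in tokens
PARAMETER num_ctx 4096
# stop generating when this sequence is emitted
PARAMETER stop "User:"
```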
For more examples, see the [examples](./examples) directory.
### Pull a model from the registry
```
ollama pull orca
```
### List local models
```
ollama list
```
## Model packages
### Overview
Ollama bundles model weights, configuration, and data into a single package, defined by a [Modelfile](./docs/modelfile.md).
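In other words, the same `Modelfile` syntax shown above declares everything in the package: the base weights (`FROM`), configuration (`PARAMETER`), and prompt data (`SYSTEM`, plus a `TEMPLATE` if the model needs one). A minimal sketch, with the template syntax assumed to follow the [Modelfile documentation](./docs/modelfile.md):
```
FROM llama2
# configuration: sampling parameters
PARAMETER temperature 0.7
# data: the prompt template and system prompt bundled with the model
TEMPLATE """
{{ .System }}
User: {{ .Prompt }}
"""
SYSTEM """You are a concise assistant."""
```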
## Building
You will need a recent Go toolchain installed:
```
go build .
```
To run it, first start the server:
```
./ollama serve &
```
Finally, run a model!
```
./ollama run llama2
```
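While the server is running, you can also talk to it over its local HTTP API. A minimal sketch, assuming the default address `127.0.0.1:11434` and a `/api/generate` endpoint (check the server source for the exact routes in your build):
```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```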