

Ollama

Create, run, and share self-contained large language models (LLMs). Ollama bundles a model's weights, configuration, prompts, and more into a self-contained package that runs anywhere.

Note: Ollama is in early preview. Please report any issues you find.

Examples

Quickstart

ollama run llama2
>>> hi
Hello! How can I help you today?

Creating a model

Create a Modelfile:

FROM llama2
PROMPT """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.

User: {{ .Prompt }}
Mario:
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

Install

  • Download for macOS on Apple Silicon (Intel coming soon)
  • Download for Windows and Linux (coming soon)
  • Build from source

Model library

Ollama includes a library of open-source, pre-trained models. More models are coming soon.

Model        Parameters  Size    Download
Llama2       7B          3.8GB   ollama pull llama2
Orca Mini    3B          1.9GB   ollama pull orca
Vicuna       7B          3.8GB   ollama pull vicuna
Nous-Hermes  13B         7.3GB   ollama pull nous-hermes

Building

go build .

To run it, start the server:

./ollama server &

Finally, run a model!

./ollama run llama2