
Ollama


Note: Ollama is in early preview. Please report any issues you find.

Create, run, and share portable large language models (LLMs). Ollama bundles a model's weights, configuration, prompts, and more into self-contained packages that can run on any machine.

Portable Large Language Models (LLMs)

Package models as a series of layers in a portable, easy-to-manage format.

The idea behind Ollama

  • Universal model format that can run anywhere: desktop, cloud servers & other devices.
  • Encapsulate everything a model needs to operate, including weights, configuration, and data, into a single package.
  • Build custom models from base models like Meta's Llama 2.
  • Share large models without having to transmit large amounts of data.

This format is inspired by the image spec originally introduced by Docker for Linux containers. Ollama extends this format to package large language models.
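
As a purely illustrative sketch (the layer names and manifest layout below are hypothetical, not Ollama's actual on-disk format), a packaged model can be pictured as a small manifest that references content-addressed layers, much like a container image:

{
  "note": "hypothetical illustration, not Ollama's real manifest format",
  "layers": [
    { "name": "weights",    "digest": "sha256:..." },
    { "name": "parameters", "digest": "sha256:..." },
    { "name": "system",     "digest": "sha256:..." }
  ]
}

Because layers are addressed by digest, sharing a customized model only requires sending the small layers (prompts and parameters) when the recipient already has the base weights.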

Download

  • Download for macOS on Apple Silicon (Intel coming soon)
  • Download for Windows and Linux (coming soon)
  • Build from source

Quickstart

To run and chat with Llama 2, the new model by Meta:

ollama run llama2

Model library

Ollama includes a library of open-source, pre-trained models. More models are coming soon. You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

Model                     Parameters  Size    Download
Llama2                    7B          3.8GB   ollama pull llama2
Llama2 13B                13B         7.3GB   ollama pull llama2:13b
Orca Mini                 3B          1.9GB   ollama pull orca
Vicuna                    7B          3.8GB   ollama pull vicuna
Nous-Hermes               13B         7.3GB   ollama pull nous-hermes
Wizard Vicuna Uncensored  13B         7.3GB   ollama pull wizard-vicuna

Examples

Run a model

ollama run llama2
>>> hi
Hello! How can I help you today?

Create a custom character model

Pull a base model:

ollama pull llama2

Create a Modelfile:

FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
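
Models you have created or pulled can be shown with the list command:

ollama list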

For more examples, see the examples directory.

Pull a model from the registry

ollama pull orca

Building

go build .

To run it, start the server:

./ollama serve &

Finally, run a model!

./ollama run llama2
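
The server also exposes a REST API. As a sketch only (the default port of 11434, the route, and the request fields shown here are assumptions; see the api and server packages for the authoritative routes), a generate request might look like:

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'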