

Ollama

Create, run, and share self-contained large language models (LLMs). Ollama bundles a model's weights, configuration, prompts, and more into self-contained packages that run anywhere.

Note: Ollama is in early preview. Please report any issues you find.

Download

  • Download for macOS on Apple Silicon (Intel coming soon)
  • Download for Windows and Linux (coming soon)
  • Build from source

Examples

Quickstart

ollama run llama2
>>> hi
Hello! How can I help you today?

Creating a custom model

Create a Modelfile:

FROM llama2
PROMPT """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.

User: {{ .Prompt }}
Mario:
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

Model library

Ollama includes a library of open-source, pre-trained models. More models are coming soon. You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

Model                     Parameters  Size   Download
Llama2                    7B          3.8GB  ollama pull llama2
Llama2 13B                13B         7.3GB  ollama pull llama2:13b
Orca Mini                 3B          1.9GB  ollama pull orca
Vicuna                    7B          3.8GB  ollama pull vicuna
Nous-Hermes               13B         7.3GB  ollama pull nous-hermes
Wizard Vicuna Uncensored  13B         7.3GB  ollama pull wizard-vicuna
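The download sizes above are roughly what you would expect for quantized weights at about 4 bits (half a byte) per parameter, plus some overhead. A rough back-of-the-envelope sketch (an approximation for intuition, not Ollama's actual packaging format):

```go
package main

import "fmt"

// approxSizeGB estimates the on-disk size of a model's weights, assuming
// ~4-bit quantization (0.5 bytes per parameter). Real downloads are
// somewhat larger due to metadata and format overhead.
func approxSizeGB(params float64) float64 {
	const bytesPerParam = 0.5 // ~4-bit quantization (assumption)
	return params * bytesPerParam / 1e9
}

func main() {
	for _, m := range []struct {
		name   string
		params float64
	}{
		{"3B", 3e9},
		{"7B", 7e9},
		{"13B", 13e9},
	} {
		fmt.Printf("%s: ~%.1f GB of weights\n", m.name, approxSizeGB(m.params))
	}
}
```

This is also why the RAM guidance scales with parameter count: the quantized weights must fit in memory alongside the runtime's working buffers.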

Building

go build .

To run it, start the server:

./ollama serve &

Finally, run a model!

./ollama run llama2