Ollama

Note: Ollama is in early preview. Please report any issues you find.

Run, create, and share large language models (LLMs).

Download

  • Download for macOS on Apple Silicon (Intel coming soon)
  • Download for Windows and Linux (coming soon)
  • Build from source

Quickstart

To run and chat with Llama 2, the new model by Meta:

ollama run llama2

Model library

ollama includes a library of open-source models:

Model                     Parameters  Size   Download
Llama2                    7B          3.8GB  ollama pull llama2
Llama2 13B                13B         7.3GB  ollama pull llama2:13b
Orca Mini                 3B          1.9GB  ollama pull orca
Vicuna                    7B          3.8GB  ollama pull vicuna
Nous-Hermes               13B         7.3GB  ollama pull nous-hermes
Wizard Vicuna Uncensored  13B         7.3GB  ollama pull wizard-vicuna

Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

Examples

Run a model

ollama run llama2
>>> hi
Hello! How can I help you today?

Create a custom model

Pull a base model:

ollama pull llama2

Create a Modelfile:

FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""

Next, create and run the model:

ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.

For more examples, see the examples directory.

For more information on creating a Modelfile, see the Modelfile documentation.

Pull a model from the registry

ollama pull orca

Listing local models

ollama list

Model packages

Overview

Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.

Building

go build .

To run it, start the server:

./ollama serve &

Finally, run a model!

./ollama run llama2

REST API

POST /api/generate

Generate text from a model.

curl -X POST http://localhost:11434/api/generate -d '{"model": "llama2", "prompt":"Why is the sky blue?"}'
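
The response is streamed back as a series of JSON objects, one per line, ending with an object whose done field is true. The exact fields can vary by version; the chunks look roughly like the following (values are illustrative):

{"model":"llama2","response":"The","done":false}
{"model":"llama2","response":" sky","done":false}
...
{"model":"llama2","response":"","done":true}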

POST /api/create

Create a model from a Modelfile.

curl -X POST http://localhost:11434/api/create -d '{"name": "my-model", "path": "/path/to/modelfile"}'
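
The endpoints can also be called from code. Below is a minimal sketch in Go (the language Ollama itself is written in) that posts a prompt to /api/generate and prints each streamed chunk; the request fields mirror the curl example above, and each chunk is printed as raw JSON rather than assuming specific response field names:

package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Request body mirrors the curl example above.
	body, err := json.Marshal(map[string]string{
		"model":  "llama2",
		"prompt": "Why is the sky blue?",
	})
	if err != nil {
		log.Fatal(err)
	}

	// The server listens on port 11434 by default.
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The response arrives as a stream of JSON objects; print each line as it arrives.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}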

Awesome projects built with Ollama

Continue

Use Continue to embed Ollama inside of VS Code. The extension lets you highlight code to add to the prompt, ask questions in the sidebar, and generate code inline.