# Ollama

Run, create, and share large language models (LLMs).

> Note: Ollama is in early preview. Please report any issues you find.
## Download
- Download for macOS
- Download for Windows and Linux (coming soon)
- Build from source
## Quickstart

To run and chat with Llama 2, the new model by Meta:

```
ollama run llama2
```
## Model library

Ollama supports a list of open-source models available on ollama.ai/library.
Here are some example open-source models that can be downloaded:
Model | Parameters | Size | Download
---|---|---|---
Llama2 | 7B | 3.8GB | `ollama pull llama2`
Llama2 13B | 13B | 7.3GB | `ollama pull llama2:13b`
Llama2 70B | 70B | 39GB | `ollama pull llama2:70b`
Llama2 Uncensored | 7B | 3.8GB | `ollama pull llama2-uncensored`
Code Llama | 7B | 3.8GB | `ollama pull codellama`
Orca Mini | 3B | 1.9GB | `ollama pull orca-mini`
Vicuna | 7B | 3.8GB | `ollama pull vicuna`
Nous-Hermes | 7B | 3.8GB | `ollama pull nous-hermes`
Nous-Hermes 13B | 13B | 7.3GB | `ollama pull nous-hermes:13b`
Wizard Vicuna Uncensored | 13B | 7.3GB | `ollama pull wizard-vicuna`
> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.
## Examples

### Run a model

```
ollama run llama2
>>> hi
Hello! How can I help you today?
```
For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```
### Create a custom model

Pull a base model:

```
ollama pull llama2
```

To update a model to the latest version, run `ollama pull llama2` again. The model will be updated if necessary.
Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```
Next, create and run the model:
```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```
For more examples, see the examples directory. For more information on creating a Modelfile, see the Modelfile documentation.
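Model creation is also exposed over the REST API described below. As a rough sketch (not part of the original README), the same `mario` model could be created programmatically from Go. The `/api/create` request fields used here (`name` and `path`) are assumptions, so verify them against the API documentation before relying on them:

```go
// Hedged sketch: create the "mario" model over the REST API instead of
// the CLI. The request fields ("name" and "path") are assumptions; check
// the API documentation for the authoritative request shape.
package main

import (
	"bytes"
	"io"
	"net/http"
	"os"
)

func main() {
	body := []byte(`{"name": "mario", "path": "./Modelfile"}`)
	resp, err := http.Post("http://localhost:11434/api/create", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The server streams JSON status updates while it builds the model
	// package; copy them to stdout as they arrive.
	io.Copy(os.Stdout, resp.Body)
}
```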
### Pull a model from the registry

```
ollama pull orca-mini
```
### Listing local models

```
ollama list
```
## Model packages

### Overview
Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.
## Building

You will need Go and a C/C++ compiler such as GCC for macOS and Linux, or Mingw-w64 GCC for Windows. Then build the binary:

```
go build .
```

To run it, start the server:

```
./ollama serve &
```

Finally, run a model!

```
./ollama run llama2
```
## REST API

See the API documentation for all endpoints.

Ollama has an API for running and managing models. For example, to generate text from a model:

```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```
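The generate endpoint streams its output as newline-delimited JSON objects. As a minimal sketch (not from the original README), a Go client might consume the stream like this, assuming each object carries a `response` text chunk and a final `done` flag as described in the API documentation:

```go
// Minimal sketch of consuming the streaming /api/generate response.
// Assumes the documented response format: one JSON object per line, each
// with a "response" chunk, the last with "done": true.
package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type generateResponse struct {
	Response string `json:"response"`
	Done     bool   `json:"done"`
}

func main() {
	body := []byte(`{"model": "llama2", "prompt": "Why is the sky blue?"}`)
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Read one JSON object per line and print each text chunk as it arrives.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		var chunk generateResponse
		if err := json.Unmarshal(scanner.Bytes(), &chunk); err != nil {
			panic(err)
		}
		fmt.Print(chunk.Response)
		if chunk.Done {
			break
		}
	}
	fmt.Println()
}
```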
## Tools using Ollama
- LangChain and LangChain.js with a question-answering example.
- Continue - embeds Ollama inside Visual Studio Code. The extension lets you highlight code to add to the prompt, ask questions in the sidebar, and generate code inline.
- LiteLLM - a lightweight Python package to simplify LLM API calls.
- Discord AI Bot - interact with Ollama as a chatbot on Discord.
- Raycast Ollama - Raycast extension to use Ollama for local llama inference on Raycast.
- Simple HTML UI for Ollama
- Emacs client for Ollama