<div align="center">
  <img alt="ollama" height="200px" src="https://github.com/jmorganca/ollama/assets/3325447/0d0b44e2-8f4a-4e99-9b52-a5c1c741c8f7">
</div>

# Ollama

[![Discord](https://dcbadge.vercel.app/api/server/ollama?style=flat&compact=true)](https://discord.gg/ollama)

Get up and running with large language models locally.

### macOS

[Download](https://ollama.com/download/Ollama-darwin.zip)

### Windows preview

[Download](https://ollama.com/download/OllamaSetup.exe)

### Linux

```
curl -fsSL https://ollama.com/install.sh | sh
```

[Manual install instructions](https://github.com/jmorganca/ollama/blob/main/docs/linux.md)

### Docker

The official [Ollama Docker image](https://hub.docker.com/r/ollama/ollama) `ollama/ollama` is available on Docker Hub.

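For example, to start the server in a container and then run a model inside it (a minimal sketch; the `ollama` volume name and the default port `11434` are one common setup, not the only one):

```
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
docker exec -it ollama ollama run llama2
```
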
### Libraries

- [ollama-python](https://github.com/ollama/ollama-python)
- [ollama-js](https://github.com/ollama/ollama-js)

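Both libraries can be installed from the usual package registries (assuming the published package name `ollama` on PyPI and npm, as each repository documents):

```
pip install ollama
npm install ollama
```
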
## Quickstart

To run and chat with [Llama 2](https://ollama.com/library/llama2):

```
ollama run llama2
```

## Model library

Ollama supports a list of models available on [ollama.com/library](https://ollama.com/library 'ollama model library')

Here are some example models that can be downloaded:

| Model              | Parameters | Size  | Download                       |
| ------------------ | ---------- | ----- | ------------------------------ |
| Llama 2            | 7B         | 3.8GB | `ollama run llama2`            |
| Mistral            | 7B         | 4.1GB | `ollama run mistral`           |
| Dolphin Phi        | 2.7B       | 1.6GB | `ollama run dolphin-phi`       |
| Phi-2              | 2.7B       | 1.7GB | `ollama run phi`               |
| Neural Chat        | 7B         | 4.1GB | `ollama run neural-chat`       |
| Starling           | 7B         | 4.1GB | `ollama run starling-lm`       |
| Code Llama         | 7B         | 3.8GB | `ollama run codellama`         |
| Llama 2 Uncensored | 7B         | 3.8GB | `ollama run llama2-uncensored` |
| Llama 2 13B        | 13B        | 7.3GB | `ollama run llama2:13b`        |
| Llama 2 70B        | 70B        | 39GB  | `ollama run llama2:70b`        |
| Orca Mini          | 3B         | 1.9GB | `ollama run orca-mini`         |
| Vicuna             | 7B         | 3.8GB | `ollama run vicuna`            |
| LLaVA              | 7B         | 4.5GB | `ollama run llava`             |
| Gemma              | 2B         | 1.4GB | `ollama run gemma:2b`          |
| Gemma              | 7B         | 4.8GB | `ollama run gemma:7b`          |

> Note: You should have at least 8 GB of RAM available to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.

## Customize a model

### Import from GGUF

Ollama supports importing GGUF models in the Modelfile:

1. Create a file named `Modelfile`, with a `FROM` instruction pointing to the local filepath of the model you want to import.

   ```
   FROM ./vicuna-33b.Q4_0.gguf
   ```

2. Create the model in Ollama:

   ```
   ollama create example -f Modelfile
   ```

3. Run the model:

   ```
   ollama run example
   ```

### Import from PyTorch or Safetensors

See the [guide](docs/import.md) on importing models for more information.

### Customize a prompt

Models from the Ollama library can be customized with a prompt. For example, to customize the `llama2` model:

```
ollama pull llama2
```

Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system message
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the [examples](examples) directory. For more information on working with a Modelfile, see the [Modelfile](docs/modelfile.md) documentation.

## CLI Reference

### Create a model

`ollama create` is used to create a model from a Modelfile.

```
ollama create mymodel -f ./Modelfile
```

### Pull a model

```
ollama pull llama2
```

> This command can also be used to update a local model. Only the diff will be pulled.

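A tag can be appended to pull a specific variant, for example one of the sizes listed in the model table above:

```
ollama pull llama2:13b
```
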
### Remove a model

```
ollama rm llama2
```

### Copy a model

```
ollama cp llama2 my-llama2
```

### Multiline input

For multiline input, you can wrap text with `"""`:

```
>>> """Hello,
... world!
... """
I'm a basic program that prints the famous "Hello, world!" message to the console.
```

### Multimodal models

To use a multimodal model such as `llava`, include an image path in the prompt:

```
>>> What's in this image? /Users/jmorgan/Desktop/smile.png
The image features a yellow smiley face, which is likely the central focus of the picture.
```

### Pass the prompt as an argument

```
$ ollama run llama2 "Summarize this file: $(cat README.md)"
Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
```

### List models on your computer

```
ollama list
```

### Start Ollama

`ollama serve` is used when you want to start Ollama without running the desktop application.

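For example, start the server in one terminal and verify it is responding from another (a minimal sketch; `11434` is the default port, and `/api/tags` lists the models on your machine):

```
ollama serve
```

```
curl http://localhost:11434/api/tags
```
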
## Building

Install `cmake` and `go`:

```
brew install cmake go
```

Then generate dependencies:

```
go generate ./...
```

Then build the binary:

```
go build .
```

More detailed instructions can be found in the [developer guide](https://github.com/jmorganca/ollama/blob/main/docs/development.md).

### Running local builds

Next, start the server:

```
./ollama serve
```

Finally, in a separate shell, run a model:

```
./ollama run llama2
```

## REST API

Ollama has a REST API for running and managing models.

### Generate a response

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```

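By default the response is streamed as a series of JSON objects. To receive a single response object instead, the endpoint accepts a `stream` flag (see the API documentation for the full set of options):

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```
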
### Chat with a model

```
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ]
}'
```

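The `messages` array carries the conversation history, so multi-turn chat works by sending previous turns back with each request (a sketch with hypothetical message content):

```
curl http://localhost:11434/api/chat -d '{
  "model": "mistral",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" },
    { "role": "assistant", "content": "Because of Rayleigh scattering." },
    { "role": "user", "content": "How is that different from Mie scattering?" }
  ]
}'
```
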
See the [API documentation](./docs/api.md) for all endpoints.
## Community Integrations

### Web & Desktop

- [Bionic GPT](https://github.com/bionic-gpt/bionic-gpt)
- [Enchanted (macOS native)](https://github.com/AugustDev/enchanted)
- [HTML UI](https://github.com/rtcfirefly/ollama-ui)
- [Chatbot UI](https://github.com/ivanfioravanti/chatbot-ollama)
- [Typescript UI](https://github.com/ollama-interface/Ollama-Gui?tab=readme-ov-file)
- [Minimalistic React UI for Ollama Models](https://github.com/richawo/minimal-llm-ui)
- [Open WebUI](https://github.com/open-webui/open-webui)
- [Ollamac](https://github.com/kevinhermawan/Ollamac)
- [big-AGI](https://github.com/enricoros/big-AGI/blob/main/docs/config-local-ollama.md)
- [Cheshire Cat assistant framework](https://github.com/cheshire-cat-ai/core)
- [Amica](https://github.com/semperai/amica)
- [chatd](https://github.com/BruceMacD/chatd)
- [Ollama-SwiftUI](https://github.com/kghandour/Ollama-SwiftUI)
- [MindMac](https://mindmac.app)
- [NextJS Web Interface for Ollama](https://github.com/jakobhoeg/nextjs-ollama-llm-ui)
- [Msty](https://msty.app)
- [Chatbox](https://github.com/Bin-Huang/Chatbox)
- [WinForm Ollama Copilot](https://github.com/tgraupmann/WinForm_Ollama_Copilot)
- [NextChat](https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web) with [Get Started Doc](https://docs.nextchat.dev/models/ollama)
- [Odin Runes](https://github.com/leonid20000/OdinRunes)
- [LLM-X: Progressive Web App](https://github.com/mrdjohnson/llm-x)

### Terminal

- [oterm](https://github.com/ggozad/oterm)
- [Ellama Emacs client](https://github.com/s-kostyaev/ellama)
- [Emacs client](https://github.com/zweifisch/ollama)
- [gen.nvim](https://github.com/David-Kunz/gen.nvim)
- [ollama.nvim](https://github.com/nomnivore/ollama.nvim)
- [ollama-chat.nvim](https://github.com/gerazov/ollama-chat.nvim)
- [ogpt.nvim](https://github.com/huynle/ogpt.nvim)
- [gptel Emacs client](https://github.com/karthink/gptel)
- [Oatmeal](https://github.com/dustinblackman/oatmeal)
- [cmdh](https://github.com/pgibler/cmdh)
- [tenere](https://github.com/pythops/tenere)
- [llm-ollama](https://github.com/taketwo/llm-ollama) for [Datasette's LLM CLI](https://llm.datasette.io/en/stable/)
- [ShellOracle](https://github.com/djcopley/ShellOracle)

### Database

- [MindsDB](https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/handlers/ollama_handler/README.md)

### Package managers

- [Pacman](https://archlinux.org/packages/extra/x86_64/ollama/)
- [Helm Chart](https://artifacthub.io/packages/helm/ollama-helm/ollama)

### Libraries

- [LangChain](https://python.langchain.com/docs/integrations/llms/ollama) and [LangChain.js](https://js.langchain.com/docs/modules/model_io/models/llms/integrations/ollama) with [example](https://js.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
- [LangChainGo](https://github.com/tmc/langchaingo/) with [example](https://github.com/tmc/langchaingo/tree/main/examples/ollama-completion-example)
- [LangChain4j](https://github.com/langchain4j/langchain4j/tree/main/langchain4j-ollama) with [example](https://github.com/langchain4j/langchain4j-examples/tree/main/ollama-examples/src/main/java)
- [LlamaIndex](https://gpt-index.readthedocs.io/en/stable/examples/llm/ollama.html)
- [LiteLLM](https://github.com/BerriAI/litellm)
- [OllamaSharp for .NET](https://github.com/awaescher/OllamaSharp)
- [Ollama for Ruby](https://github.com/gbaptista/ollama-ai)
- [Ollama-rs for Rust](https://github.com/pepperoni21/ollama-rs)
- [Ollama4j for Java](https://github.com/amithkoujalgi/ollama4j)
- [ModelFusion Typescript Library](https://modelfusion.dev/integration/model-provider/ollama)
- [OllamaKit for Swift](https://github.com/kevinhermawan/OllamaKit)
- [Ollama for Dart](https://github.com/breitburg/dart-ollama)
- [Ollama for Laravel](https://github.com/cloudstudio/ollama-laravel)
- [LangChainDart](https://github.com/davidmigloz/langchain_dart)
- [Semantic Kernel - Python](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai/ollama)
- [Haystack](https://github.com/deepset-ai/haystack-integrations/blob/main/integrations/ollama.md)
- [Elixir LangChain](https://github.com/brainlid/langchain)
- [Ollama for R - rollama](https://github.com/JBGruber/rollama)
- [Ollama-ex for Elixir](https://github.com/lebrunel/ollama-ex)
- [Ollama Connector for SAP ABAP](https://github.com/b-tocs/abap_btocs_ollama)

### Mobile

- [Enchanted](https://github.com/AugustDev/enchanted)
- [Maid](https://github.com/Mobile-Artificial-Intelligence/maid)

### Extensions & Plugins

- [Raycast extension](https://github.com/MassimilianoPasquini97/raycast_ollama)
- [Discollama](https://github.com/mxyng/discollama) (Discord bot inside the Ollama discord channel)
- [Continue](https://github.com/continuedev/continue)
- [Obsidian Ollama plugin](https://github.com/hinterdupfinger/obsidian-ollama)
- [Logseq Ollama plugin](https://github.com/omagdy7/ollama-logseq)
- [NotesOllama](https://github.com/andersrex/notesollama) (Apple Notes Ollama plugin)
- [Dagger Chatbot](https://github.com/samalba/dagger-chatbot)
- [Discord AI Bot](https://github.com/mekb-turtle/discord-ai-bot)
- [Ollama Telegram Bot](https://github.com/ruecat/ollama-telegram)
- [Hass Ollama Conversation](https://github.com/ej52/hass-ollama-conversation)
- [Rivet plugin](https://github.com/abrenneke/rivet-plugin-ollama)
- [Llama Coder](https://github.com/ex3ndr/llama-coder) (Copilot alternative using Ollama)
- [Obsidian BMO Chatbot plugin](https://github.com/longy2k/obsidian-bmo-chatbot)
- [Copilot for Obsidian plugin](https://github.com/logancyang/obsidian-copilot)
- [Obsidian Local GPT plugin](https://github.com/pfrankov/obsidian-local-gpt)
- [Open Interpreter](https://docs.openinterpreter.com/language-model-setup/local-models/ollama)
- [twinny](https://github.com/rjmacarthy/twinny) (Copilot and Copilot chat alternative using Ollama)
- [Wingman-AI](https://github.com/RussellCanfield/wingman-ai) (Copilot code and chat alternative using Ollama and HuggingFace)
- [Page Assist](https://github.com/n4ze3m/page-assist) (Chrome Extension)