# Ollama
Ollama is a tool for running large language models on any machine. It's designed to be easy to use and fast, supporting as many models as possible by using the fastest loader available for your platform and model.
> _Note: this project is a work in progress. Certain models that can be run with `ollama` are intended for research and/or non-commercial use only._
## Install
Using `pip`:
```
pip install ollama
```
Using `docker`:
```
docker run ollama/ollama
```
## Quickstart
To run a model, use `ollama run`:
```
ollama run orca-mini-3b
```
You can also run models from Hugging Face:
```
ollama run huggingface.co/TheBloke/orca_mini_3B-GGML
```
Or run a model directly from a downloaded model file:
```
ollama run ~/Downloads/orca-mini-13b.ggmlv3.q4_0.bin
```
## Python SDK
### Example
```python
import ollama
ollama.generate("orca-mini-3b", "hi")
```
### `ollama.generate(model, message)`
Generate a completion
```python
ollama.generate("./llama-7b-ggml.bin", "hi")
```
### `ollama.models()`
List available local models
```python
models = ollama.models()
```
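For example, to print each locally available model (a minimal sketch, assuming `models()` returns an iterable of model names; the exact return type is not specified here):

```python
import ollama

# List models that have already been downloaded locally.
# Assumes models() returns an iterable of model names.
for model in ollama.models():
    print(model)
```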
### `ollama.load(model)`
Manually load a model for generation
```python
ollama.load("model")
```
### `ollama.unload(model)`
Unload a model
```python
ollama.unload("model")
```
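Loading a model ahead of time can avoid the delay on the first generation, and unloading frees its memory when you are done. A sketch pairing the two calls documented above (the caching behavior is assumed, not stated in this README):

```python
import ollama

ollama.load("orca-mini-3b")              # keep the model resident in memory
ollama.generate("orca-mini-3b", "hi")    # subsequent calls skip the load step
ollama.unload("orca-mini-3b")            # release the memory when finished
```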
### `ollama.pull(model)`
Download a model
```python
ollama.pull("huggingface.co/thebloke/llama-7b-ggml")
```
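A typical workflow is to pull a model once and then generate against it. A hedged sketch using only the calls documented above (whether `generate` accepts the same identifier passed to `pull` is an assumption):

```python
import ollama

# Download the model; assumed to be a no-op if it is already cached locally.
ollama.pull("huggingface.co/TheBloke/orca_mini_3B-GGML")

# Generate a completion from the pulled model.
ollama.generate("huggingface.co/TheBloke/orca_mini_3B-GGML", "hi")
```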
## Coming Soon
### `ollama.search("query")`
Search for compatible models that Ollama can run
```python
ollama.search("llama-7b")
```
## Documentation
- [Development](docs/development.md)