# Ollama

Ollama is a tool for running large language models. It's designed to be easy to use and fast.

Note: this project is a work in progress. Certain models that can be run with ollama are intended for research and/or non-commercial use only.

## Install

Using pip:

```
pip install ollama
```

Using Docker:

```
docker run ollama/ollama
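```

The bare `docker run ollama/ollama` command only starts the container. A minimal sketch of running a specific model inside it is shown below; it assumes the image's entrypoint is the `ollama` CLI and that models are cached under `/root/.ollama`, neither of which is confirmed by this README, so adjust as needed:

```
# Assumptions: the image entrypoint is the ollama CLI (so trailing arguments
# are passed to it) and downloaded models are cached under /root/.ollama.
docker run -it -v ~/.ollama:/root/.ollama ollama/ollama run orca-mini-3b
```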

## Quickstart

To run a model, use `ollama run`:

```
ollama run orca-mini-3b
```

You can also run models from Hugging Face:

```
ollama run huggingface.co/TheBloke/orca_mini_3B-GGML
```

Or run directly from a downloaded model file:

```
ollama run ~/Downloads/orca-mini-13b.ggmlv3.q4_0.bin
```
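
If you prefer to fetch the weights yourself, a minimal sketch using curl against the Hugging Face repository above is shown below; the exact quantized file name in TheBloke/orca_mini_3B-GGML is an assumption, so check the repository for the file you actually want:

```
# Assumption: the file name below may differ from what the repository hosts;
# substitute the quantization you want before downloading.
curl -L -o ~/Downloads/orca-mini-3b.ggmlv3.q4_0.bin \
  https://huggingface.co/TheBloke/orca_mini_3B-GGML/resolve/main/orca-mini-3b.ggmlv3.q4_0.bin

ollama run ~/Downloads/orca-mini-3b.ggmlv3.q4_0.bin
```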

## Documentation