Ollama

The easiest way to run AI models.

Download

  • macOS (Apple Silicon)
  • macOS (Intel, coming soon)
  • Windows (coming soon)
  • Linux (coming soon)

Python SDK

pip install ollama

Python SDK quickstart

import ollama
ollama.generate("./llama-7b-ggml.bin", "hi")

ollama.generate(model, message)

Generate a completion

ollama.generate("./llama-7b-ggml.bin", "hi")

ollama.load(model)

Load a model for generation

ollama.load("model")
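
Loading a model up front presumably avoids paying startup cost on the first call to generate; a short sketch under that assumption:

import ollama

model = "./llama-7b-ggml.bin"
ollama.load(model)            # warm the model into memory up front
ollama.generate(model, "hi")  # assumed to reuse the already-loaded model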

ollama.models()

List available local models

models = ollama.models()
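
Assuming models() returns a flat list of model names, the result can be printed directly:

import ollama

# Print each locally available model; the list-of-names shape
# is an assumption, not documented here.
for model in ollama.models():
    print(model)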

ollama.serve()

Start the Ollama HTTP server
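
If serve() blocks the calling thread (an assumption; its behavior isn't documented here), it can be run in the background:

import threading
import ollama

# Run the HTTP server in a daemon thread so the rest of the script
# keeps executing. Host/port defaults are not documented here.
threading.Thread(target=ollama.serve, daemon=True).start()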

ollama.add(filepath)

Add a model by importing it from a file

ollama.add("./path/to/model")

Coming Soon

ollama.pull(model)

Download a model

ollama.pull("huggingface.co/thebloke/llama-7b-ggml")

ollama.search("query")

Search for models that Ollama can run

ollama.search("llama-7b")

Future CLI

In the future, there will be an ollama CLI for running models on servers, in containers, or in local development environments.

ollama generate huggingface.co/thebloke/llama-7b-ggml "hi"
> Downloading [================>          ] 66.67% (2/3) 30.2MB/s

Documentation