Ollama

Run large language models with llama.cpp.

Note: certain models that can be run with this project are intended for research and/or non-commercial use only.

Features

  • Download and run popular large language models
  • Switch between multiple models on the fly
  • Hardware acceleration where available (Metal, CUDA)
  • Fast inference server written in Go, powered by llama.cpp
  • REST API to use with your application (Python and TypeScript SDKs coming soon)

Install

  • Download for macOS
  • Download for Windows (coming soon)
  • Docker: docker run -p 11434:11434 ollama/ollama

You can also build the binary from source.
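
Whichever way you install it, the server listens on port 11434 by default (the Docker command above maps that port). As a quick check that everything is up, you can pull a model and send it a prompt over the REST API described below. A sketch, assuming the server is already running on the default port:

# assumes the server is reachable at localhost:11434
curl -X POST http://localhost:11434/api/pull -d '{"model": "orca"}'
curl -X POST http://localhost:11434/api/generate -d '{"model": "orca", "prompt": "hello!", "stream": true}'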

Quickstart

Run a fast and simple model.

ollama run orca

Example models

💬 Chat

Have a conversation.

ollama run vicuna "Why is the sky blue?"

🗺️ Instructions

Ask questions. Get answers.

ollama run orca "Write an email to my boss."

🔎 Ask questions about documents

Send the contents of a document and ask questions about it.

ollama run nous-hermes "$(cat input.txt)", please summarize this story
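
All arguments after the model name are combined into a single prompt, so the file contents substituted by the shell and the trailing instruction are sent together. Any file and question work the same way; a sketch with an illustrative file name and question:

# README.md and the question are illustrative; substitute your own file and prompt
ollama run nous-hermes "$(cat README.md)", what is this project about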

📖 Storytelling

Venture into the unknown.

ollama run nous-hermes "Once upon a time"

Advanced usage

Run a local model

ollama run ~/Downloads/vicuna-7b-v1.3.ggmlv3.q4_1.bin

Building

make

To run it, start the server:

./ollama server &

Finally, run a model!

./ollama run ~/Downloads/vicuna-7b-v1.3.ggmlv3.q4_1.bin
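
Since the server was started in the background, you can stop it with the shell's job control when you are done (a plain shell command, not an ollama subcommand):

kill %1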

API Reference

POST /api/pull

Download a model

curl -X POST http://localhost:11434/api/pull -d '{"model": "orca"}'
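
The same endpoint should work for the other models used in the examples above; for instance, to fetch nous-hermes before prompting it:

curl -X POST http://localhost:11434/api/pull -d '{"model": "nous-hermes"}'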

POST /api/generate

Complete a prompt

curl -X POST http://localhost:11434/api/generate -d '{"model": "orca", "prompt": "hello!", "stream": true}'