ollama/llama/runner
Jesse Gross a103dae01e runner.go: Only allocate 1 element embedding batches for mllama
Mllama has large embeddings (100 MB per image), and each embedding is represented as one token when passed to llama.cpp. Batches are pre-allocated at the token size times the batch size, so at the default batch size this results in allocations of over 50 GB. On some systems, these mallocs will fail.

Since an image is represented as a single token and mllama doesn't support more than one image per request, we only need to allocate a batch size of 1, which is much more reasonable. In addition, for non-multimodal models, we don't need to allocate the embedding batches at all.
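
The allocation rule translates roughly to the sketch below. The type and function names are hypothetical placeholders, not the actual runner.go code; it only illustrates skipping the embedding buffer for text-only models and capping it at one entry otherwise:

```go
package main

import "fmt"

// batch is a stand-in for the runner's batch structure (hypothetical).
type batch struct {
	embeds [][]float32 // nil for text-only models
}

// newBatch reserves embedding storage only when the model is multimodal,
// and then only for a single entry, since mllama sends one image (one
// embedding token) per request. Reserving batchSize entries of ~100 MB
// each is what produced the >50 GB allocations.
func newBatch(batchSize, embedSize int) batch {
	if embedSize == 0 {
		return batch{} // text-only: no embedding buffer at all
	}
	b := batch{embeds: make([][]float32, 1)}
	b.embeds[0] = make([]float32, embedSize)
	return b
}

func main() {
	fmt.Println(len(newBatch(512, 0).embeds))    // 0 entries for a text-only model
	fmt.Println(len(newBatch(512, 7680).embeds)) // 1 entry for a multimodal model
}
```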

Fixes #7464
2024-11-02 13:37:55 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| cache.go | runner.go: Better abstract vision model integration | 2024-10-30 14:53:43 -07:00 |
| cache_test.go | runner.go: Better abstract vision model integration | 2024-10-30 14:53:43 -07:00 |
| image.go | runner.go: Only allocate 1 element embedding batches for mllama | 2024-11-02 13:37:55 -07:00 |
| image_test.go | runner.go: Better abstract vision model integration | 2024-10-30 14:53:43 -07:00 |
| README.md | Re-introduce the llama package (#5034) | 2024-10-08 08:53:54 -07:00 |
| requirements.go | Re-introduce the llama package (#5034) | 2024-10-08 08:53:54 -07:00 |
| runner.go | runner.go: Only allocate 1 element embedding batches for mllama | 2024-11-02 13:37:55 -07:00 |
| stop.go | runner.go: Handle truncation of tokens for stop sequences | 2024-10-09 20:39:04 -07:00 |
| stop_test.go | runner.go: Handle truncation of tokens for stop sequences | 2024-10-09 20:39:04 -07:00 |

runner

Note: this is a work in progress

A minimal runner for loading a model and running inference via an HTTP web server.

```
./runner -model <model binary>
```

Completion

```
curl -X POST -H "Content-Type: application/json" -d '{"prompt": "hi"}' http://localhost:8080/completion
```

Embeddings

```
curl -X POST -H "Content-Type: application/json" -d '{"prompt": "turn me into an embedding"}' http://localhost:8080/embeddings
```
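
The same requests can also be issued programmatically. Below is a small Go client sketch; the endpoints and the `prompt` field come from the curl examples above, but since the response shape isn't documented here, the body is printed as-is:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// post sends a JSON {"prompt": ...} body to the given runner endpoint and
// returns the raw response body.
func post(url, prompt string) (string, error) {
	body, err := json.Marshal(map[string]string{"prompt": prompt})
	if err != nil {
		return "", err
	}
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := io.ReadAll(resp.Body)
	return string(out), err
}

func main() {
	// Assumes the runner is listening on localhost:8080 as in the examples above.
	for _, req := range []struct{ url, prompt string }{
		{"http://localhost:8080/completion", "hi"},
		{"http://localhost:8080/embeddings", "turn me into an embedding"},
	} {
		out, err := post(req.url, req.prompt)
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		fmt.Println(out)
	}
}
```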