34b9db5afc
This change adds support for multiple concurrent requests per model, as well as loading multiple models at once by spawning multiple runners. The defaults are 1 concurrent request per model and 1 loaded model at a time; both can be adjusted via the OLLAMA_NUM_PARALLEL and OLLAMA_MAX_LOADED_MODELS environment variables.
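As a sketch of how these settings would be applied, the environment variables are set before starting the server; the specific values below are illustrative, not recommendations:

```shell
# Allow up to 4 concurrent requests per model and up to 2 models
# loaded simultaneously (defaults are 1 and 1).
OLLAMA_NUM_PARALLEL=4 OLLAMA_MAX_LOADED_MODELS=2 ollama serve
```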
Files in this directory:

- ext_server
- generate
- llama.cpp@7593639ce3
- patches
- ggla.go
- ggml.go
- gguf.go
- llm.go
- llm_darwin_amd64.go
- llm_darwin_arm64.go
- llm_linux.go
- llm_windows.go
- memory.go
- payload.go
- server.go
- status.go