ollama/.github/workflows
Latest commit: 58d95cc9bd by Daniel Hiltgen, 2024-04-01 16:48:18 -07:00
Switch back to subprocessing for llama.cpp

This should resolve a number of memory-leak and stability defects by allowing
us to isolate llama.cpp in a separate process, shut it down when idle, and
gracefully restart it if it has problems. It also serves as a first step
toward running multiple copies to support multiple models concurrently.
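The commit describes a standard process-supervision pattern: run llama.cpp as a child process, restart it after a crash, and reap it when idle. Below is a minimal Go sketch of that pattern, not ollama's actual implementation; the binary name, flags, and timeout are hypothetical placeholders.

```go
// Sketch of subprocess supervision: spawn a llama.cpp server as a child
// process, restart it on the next request if it dies, kill it when idle.
package main

import (
	"log"
	"os/exec"
	"sync"
	"time"
)

type llamaRunner struct {
	mu       sync.Mutex
	cmd      *exec.Cmd
	lastUsed time.Time
}

// ensureRunning starts the subprocess if needed and notes the time of use.
func (r *llamaRunner) ensureRunning() error {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.lastUsed = time.Now()
	if r.cmd != nil {
		return nil // already running
	}
	// "./llama-server" and its flags are assumptions for illustration.
	cmd := exec.Command("./llama-server", "--port", "8081")
	if err := cmd.Start(); err != nil {
		return err
	}
	r.cmd = cmd
	go func() {
		// Reap the child; a crash here is isolated from the parent process.
		err := cmd.Wait()
		r.mu.Lock()
		r.cmd = nil
		r.mu.Unlock()
		if err != nil {
			log.Printf("llama.cpp subprocess exited: %v (restarts on next request)", err)
		}
	}()
	return nil
}

// reapIdle kills the subprocess once it has gone unused longer than idle.
func (r *llamaRunner) reapIdle(idle time.Duration) {
	for range time.Tick(idle / 2) {
		r.mu.Lock()
		if r.cmd != nil && time.Since(r.lastUsed) > idle {
			_ = r.cmd.Process.Kill() // the Wait goroutine clears r.cmd
		}
		r.mu.Unlock()
	}
}

func main() {
	r := &llamaRunner{}
	go r.reapIdle(5 * time.Minute)
	if err := r.ensureRunning(); err != nil {
		log.Fatal(err)
	}
	select {} // stand-in for the real request-serving loop
}
```

Keeping llama.cpp out of process means a leak or crash in the native code takes down only the child, which the parent can relaunch; spawning one child per model is then a natural extension for concurrent models, as the commit message notes.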
File            Last commit message                           Last commit date
latest.yaml     CI automation for tagging latest images      2024-03-28 16:07:37 -07:00
release.yaml    CI windows gpu builds                         2024-03-28 14:39:10 -07:00
test.yaml       Switch back to subprocessing for llama.cpp   2024-04-01 16:48:18 -07:00