ollama/.github
Daniel Hiltgen 58d95cc9bd Switch back to subprocessing for llama.cpp
This should resolve a number of memory-leak and stability defects by allowing
us to isolate llama.cpp in a separate process, shut it down when idle, and
gracefully restart it if it has problems. This also serves as a first step
toward running multiple copies to support multiple models concurrently.
2024-04-01 16:48:18 -07:00
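The commit describes a subprocess supervisor pattern: spawn llama.cpp as a child process, restart it on crash, and reap it after an idle timeout. Below is a minimal Go sketch of that pattern under stated assumptions, not Ollama's actual implementation; the binary name "llama-server", its flags, and the timeout values are illustrative placeholders.

```go
package main

import (
	"log"
	"os/exec"
	"sync"
	"time"
)

// runner supervises a llama.cpp subprocess: it restarts the process
// on the next request if it exits, and stops it after an idle period.
type runner struct {
	mu       sync.Mutex
	cmd      *exec.Cmd
	lastUsed time.Time
}

// ensure starts the subprocess if it is not already running and
// records the access time for idle accounting.
func (r *runner) ensure() error {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.lastUsed = time.Now()
	if r.cmd != nil {
		return nil // already running
	}
	// Binary name and flags are hypothetical, for illustration only.
	cmd := exec.Command("llama-server", "--model", "model.gguf", "--port", "8080")
	if err := cmd.Start(); err != nil {
		return err
	}
	r.cmd = cmd
	go func() {
		err := cmd.Wait() // reap the child when it exits or crashes
		r.mu.Lock()
		r.cmd = nil // cleared state means ensure() will relaunch
		r.mu.Unlock()
		if err != nil {
			log.Printf("llama.cpp exited: %v (will restart on next request)", err)
		}
	}()
	return nil
}

// reapIdle periodically kills the subprocess once it has been
// unused for longer than maxIdle; ensure() restarts it on demand.
func (r *runner) reapIdle(maxIdle time.Duration) {
	for range time.Tick(30 * time.Second) {
		r.mu.Lock()
		if r.cmd != nil && time.Since(r.lastUsed) > maxIdle {
			r.cmd.Process.Kill() // the Wait goroutine above cleans up
		}
		r.mu.Unlock()
	}
}

func main() {
	r := &runner{}
	go r.reapIdle(5 * time.Minute)
	if err := r.ensure(); err != nil {
		log.Fatal(err)
	}
	select {} // a real server would handle model requests here
}
```

Keeping llama.cpp behind a process boundary means a leak or crash in native code takes down only the child, and running one supervised child per model is the natural extension toward concurrent models.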
ISSUE_TEMPLATE Update 90_bug_report.yml 2024-03-29 10:11:17 -04:00
workflows Switch back to subprocessing for llama.cpp 2024-04-01 16:48:18 -07:00