58d95cc9bd
This should resolve a number of memory leak and stability defects by isolating llama.cpp in a separate process that we can shut down when idle and gracefully restart if it has problems. It also serves as a first step toward running multiple copies to support multiple models concurrently.
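The sketch below illustrates the supervision pattern the commit message describes: start the llama.cpp runner as a child process on demand, detect when it exits so it can be restarted cleanly, and stop it after an idle timeout. It is a minimal illustration only; the type, field, and flag names (runnerSupervisor, binary, args) are assumptions and not the actual code introduced by this commit.

```go
package main

import (
	"context"
	"log"
	"os/exec"
	"sync"
	"time"
)

// runnerSupervisor manages a single llama.cpp server subprocess: it starts
// the process on demand, reaps it if it crashes so the next request can
// restart it, and kills it after an idle timeout.
// All names and flags here are illustrative assumptions.
type runnerSupervisor struct {
	mu       sync.Mutex
	cmd      *exec.Cmd
	lastUsed time.Time
	binary   string   // path to the llama.cpp server binary (assumed)
	args     []string // e.g. model path and port flags (assumed)
}

// ensureRunning starts the subprocess if it is not already running and
// records the time of use for the idle-shutdown loop.
func (s *runnerSupervisor) ensureRunning(ctx context.Context) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.lastUsed = time.Now()
	if s.cmd != nil {
		return nil // already running
	}
	cmd := exec.CommandContext(ctx, s.binary, s.args...)
	if err := cmd.Start(); err != nil {
		return err
	}
	s.cmd = cmd
	// Reap the process when it exits so a crash is noticed and the next
	// request triggers a clean restart instead of talking to a dead runner.
	go func() {
		err := cmd.Wait()
		s.mu.Lock()
		defer s.mu.Unlock()
		if s.cmd == cmd {
			s.cmd = nil
		}
		if err != nil {
			log.Printf("llama.cpp runner exited: %v", err)
		}
	}()
	return nil
}

// idleLoop periodically checks for inactivity and stops the subprocess
// after it has been unused for idleTimeout.
func (s *runnerSupervisor) idleLoop(idleTimeout time.Duration) {
	for range time.Tick(30 * time.Second) {
		s.mu.Lock()
		if s.cmd != nil && time.Since(s.lastUsed) > idleTimeout {
			log.Print("stopping idle llama.cpp runner")
			s.cmd.Process.Kill()
			s.cmd = nil
		}
		s.mu.Unlock()
	}
}
```

Keeping the runner in its own process means any leak or crash inside llama.cpp is confined to the child and reclaimed by the OS when it exits, and the same supervisor structure can later be instantiated once per model to serve several models concurrently.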
auth.go
download.go
fixblobs.go
fixblobs_test.go
images.go
layers.go
manifests.go
modelpath.go
modelpath_test.go
prompt.go
prompt_test.go
routes.go
routes_test.go
upload.go