58d95cc9bd
This should resolve a number of memory-leak and stability defects by isolating llama.cpp in a separate process, shutting it down when idle, and gracefully restarting it if it has problems. This also serves as a first step toward running multiple copies to support multiple models concurrently.
getstarted_nonwindows.go
getstarted_windows.go
lifecycle.go
logging.go
logging_nonwindows.go
logging_windows.go
paths.go
server.go
server_unix.go
server_windows.go
updater.go
updater_nonwindows.go
updater_windows.go
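The supervision pattern the commit describes — spawn the runner as a child process, treat a clean exit (e.g. idle shutdown) as final, and restart it with a small backoff if it dies abnormally — can be sketched as below. This is a minimal illustration, not the actual server code; the `supervise` function, its parameters, and the restart limit are all hypothetical.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// supervise runs a child command and restarts it (with a short
// backoff) whenever it exits with an error, up to maxRestarts
// attempts. A clean exit is treated as a deliberate shutdown
// (e.g. the runner going idle) and ends supervision.
// Hypothetical sketch, not the real lifecycle implementation.
func supervise(name string, args []string, maxRestarts int) int {
	restarts := 0
	for {
		cmd := exec.Command(name, args...)
		if err := cmd.Run(); err == nil {
			return restarts // clean exit: stop supervising
		}
		restarts++
		if restarts > maxRestarts {
			return restarts // give up after too many crashes
		}
		time.Sleep(10 * time.Millisecond) // backoff before restart
	}
}

func main() {
	// A child that always fails: the supervisor retries, then gives up.
	fmt.Println(supervise("sh", []string{"-c", "exit 1"}, 3))
}
```

Because the runner lives in its own process, a crash or leak in llama.cpp is contained there: the parent simply reaps the child and starts a fresh one, rather than the whole application going down.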