baalajimaestro/ollama: llm/llm_linux.go at commit a0ea067b63 (8 lines, 88 B, Go)
Commit 2024-03-14: Switch back to subprocessing for llama.cpp. This should resolve a number of memory-leak and stability defects by letting us isolate llama.cpp in a separate process, shut it down when idle, and restart it gracefully if it runs into problems. It is also a first step toward running multiple copies to support multiple models concurrently.
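A minimal sketch of what such a launch might look like, assuming a hypothetical server binary path and helper name; the only identifier taken from this file is LlamaServerSysProcAttr, everything else is illustrative rather than ollama's actual runner code:

	package llm

	import (
		"os/exec"
	)

	// startLlamaServer is a hypothetical helper illustrating the idea:
	// llama.cpp runs in its own process, so a crash or leak is contained
	// there, and the parent can simply kill and respawn it.
	func startLlamaServer(serverBin string, args ...string) (*exec.Cmd, error) {
		cmd := exec.Command(serverBin, args...)
		// Apply the platform-specific attributes defined in this file
		// (empty on Linux; other platforms may set real flags).
		cmd.SysProcAttr = LlamaServerSysProcAttr
		if err := cmd.Start(); err != nil {
			return nil, err
		}
		return cmd, nil
	}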
Commit 2024-07-15: Enable windows error dialog for subprocess startup. Make sure that if something goes wrong spawning the process, the user gets enough information to try to self-correct, or at least to file a bug with details so we can fix it. Once the process starts, we immediately switch back to the recommended setting to prevent the blocking dialog. This ensures that if the model fails to load (OOM, unsupported model type, etc.), the process exits quickly and we can scan the subprocess's stdout/stderr for the reason to report via the API (a sketch of such scanning follows the file listing below).
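This page shows only the Linux side; the Windows counterpart referenced by that commit is not visible here. A sketch of what a sibling llm_windows.go could look like, assuming the documented Windows process-creation flag CREATE_DEFAULT_ERROR_MODE (0x04000000, which makes the child use the default error mode instead of inheriting the parent's); the exact flags ollama sets are an assumption:

	//go:build windows

	package llm

	import "syscall"

	// CREATE_DEFAULT_ERROR_MODE gives the child the default error mode,
	// so a startup failure can surface the system error dialog described
	// in the commit message above.
	const CREATE_DEFAULT_ERROR_MODE = 0x04000000

	var LlamaServerSysProcAttr = &syscall.SysProcAttr{
		CreationFlags: syscall.CREATE_NEW_PROCESS_GROUP | CREATE_DEFAULT_ERROR_MODE,
	}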
The file itself:

	package llm

	import (
		"syscall"
	)

	// LlamaServerSysProcAttr carries the platform-specific attributes
	// applied to the llama.cpp server subprocess. On Linux no special
	// attributes are needed, so it is an empty SysProcAttr.
	var LlamaServerSysProcAttr = &syscall.SysProcAttr{}
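The output scanning mentioned in the 2024-07-15 commit message could be done along these lines; a sketch only, with a hypothetical scanServerOutput helper standing in for however the real runner records the message it reports via the API:

	package llm

	import (
		"bufio"
		"io"
	)

	// scanServerOutput reads the subprocess's output line by line and
	// remembers the most recent line, so that if the process exits early
	// (OOM, unsupported model type, etc.) there is a concrete error
	// message to surface through the API.
	func scanServerOutput(r io.Reader) (lastLine string) {
		scanner := bufio.NewScanner(r)
		for scanner.Scan() {
			lastLine = scanner.Text()
		}
		return lastLine
	}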