baalajimaestro/ollama
llm/llm_darwin.go at commit 35096a7eff (8 lines, 88 B, Go)
History

2024-01-10 04:29:58 +00:00 · Always dynamically load the llm server library. This switches darwin to dynamic loading and refactors the code now that the library is no longer statically linked on any platform.

2024-07-15 16:25:56 +00:00 · Enable windows error dialog for subprocess startup. If something goes wrong while spawning the process, the user should get enough information to self-correct, or at least to file a bug with useful details. Once the process starts, the setting is immediately switched back to the recommended value to prevent the blocking dialog, so if the model fails to load (OOM, unsupported model type, etc.) the process exits quickly and the subprocess's stdout/stderr can be scanned for the reason to report via the API.
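The second commit above implies that on Windows this same variable is populated so a failed server launch can surface a native error dialog. Purely as an illustration, here is a minimal sketch of what such a Windows-only counterpart (e.g. an llm_windows.go) might contain; the constant name, its value, and the use of CreationFlags are assumptions based on the commit message, not code taken from this repository:

package llm

import "syscall"

// CREATE_DEFAULT_ERROR_MODE is a standard Windows process-creation flag; with
// it set, the child gets the default error mode instead of inheriting the
// parent's, so startup failures can show the usual error dialog.
const CREATE_DEFAULT_ERROR_MODE = 0x04000000

// Assumed Windows variant: attach the flag so the spawned server can report
// early failures visibly (the flag and field are assumptions, see above).
var LlamaServerSysProcAttr = &syscall.SysProcAttr{
	CreationFlags: CREATE_DEFAULT_ERROR_MODE,
}

The darwin file below, by contrast, leaves the struct empty, since no special process attributes are needed there.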
llm/llm_darwin.go:

package llm

import (
	"syscall"
)

var LlamaServerSysProcAttr = &syscall.SysProcAttr{}
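For context, a minimal sketch of how a per-platform attribute like this is typically consumed when spawning the llm server subprocess; startServer, the binary path, and the arguments are hypothetical and not ollama's actual invocation:

package llm

import "os/exec"

// startServer is a hypothetical helper: it attaches the platform-specific
// SysProcAttr to the command before launching the server subprocess.
func startServer(binPath string, args ...string) (*exec.Cmd, error) {
	cmd := exec.Command(binPath, args...)
	// Empty struct on darwin; on windows it would carry the creation flags
	// described in the commit history above.
	cmd.SysProcAttr = LlamaServerSysProcAttr
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}

Keeping the platform differences in a single exported variable lets the launch path stay identical across operating systems.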