baalajimaestro/ollama
llm/llm_darwin_amd64.go @ 37096790a7 (9 lines, 95 B, Go)
Blame (2024-01-10 04:29:58 +00:00): "Always dynamically load the llm server library. This switches darwin to dynamic loading, and refactors the code now that no static linking of the library is used on any platform."
package llm

import (
	"embed"
)
Blame (2024-03-14 17:24:13 +00:00): "Switch back to subprocessing for llama.cpp. This should resolve a number of memory leak and stability defects by allowing us to isolate llama.cpp in a separate process and shutdown when idle, and gracefully restart if it has problems. This also serves as a first step to be able to run multiple copies to support multiple models concurrently." A sketch of this subprocess pattern follows the file below.
//go:embed build/darwin/x86_64/*/bin/*
var libEmbed embed.FS
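
The //go:embed directive compiles the matching prebuilt server binaries into the Go executable as a read-only embed.FS. Before they can be dynamically loaded or executed, an embedded payload like this has to be written back out to disk. Below is a minimal sketch of that unpacking step, assuming a hypothetical extractPayload helper; it is not ollama's actual extraction code.

package llm

import (
	"io/fs"
	"os"
	"path/filepath"
)

// extractPayload copies every file under root in the embedded
// filesystem to destDir, preserving the relative layout, so the
// bundled server binaries exist on disk where they can be run.
func extractPayload(src fs.FS, root, destDir string) error {
	return fs.WalkDir(src, root, func(p string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		target := filepath.Join(destDir, rel)
		if d.IsDir() {
			return os.MkdirAll(target, 0o755)
		}
		data, err := fs.ReadFile(src, p)
		if err != nil {
			return err
		}
		// 0o755 keeps the extracted binaries executable.
		return os.WriteFile(target, data, 0o755)
	})
}

At startup this could be invoked as extractPayload(libEmbed, "build/darwin/x86_64", workDir), where workDir is a per-run temporary directory.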
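
The 2024-03-14 blame entry describes isolating llama.cpp in a child process: crashes and memory leaks stay contained, and the child can be stopped when idle and relaunched if it misbehaves. Here is a minimal sketch of that lifecycle using os/exec; serverProc, startServer, and the kill-and-respawn restart policy are illustrative assumptions, not ollama's implementation.

package llm

import (
	"os/exec"
)

// serverProc wraps an external llama.cpp server child process.
type serverProc struct {
	cmd *exec.Cmd
}

// startServer launches the server binary as a child process, so a
// crash in the native code no longer takes down the parent.
func startServer(bin string, args ...string) (*serverProc, error) {
	cmd := exec.Command(bin, args...)
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return &serverProc{cmd: cmd}, nil
}

// stop kills the child (e.g. after an idle timeout) and reaps it.
func (p *serverProc) stop() error {
	if err := p.cmd.Process.Kill(); err != nil {
		return err
	}
	_, err := p.cmd.Process.Wait()
	return err
}

// restart replaces a failed child with a fresh one.
func (p *serverProc) restart(bin string, args ...string) error {
	_ = p.stop()
	fresh, err := startServer(bin, args...)
	if err != nil {
		return err
	}
	p.cmd = fresh.cmd
	return nil
}

Isolating the native code this way also makes running several children, one per model, a natural extension, which matches the commit message's note about supporting multiple models concurrently.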