Repository: baalajimaestro/ollama (fork of ollama)
File: llm/llm_windows.go at commit 26ab67732b (Go, 7 lines, 72 B)
Blame history:

2024-03-14 17:24:13 +00:00 — "Switch back to subprocessing for llama.cpp": This should resolve a number of memory-leak and stability defects by allowing us to isolate llama.cpp in a separate process, shut it down when idle, and gracefully restart it if it has problems. This also serves as a first step toward being able to run multiple copies to support multiple models concurrently. (This commit introduced the package, import, and var lines below.)

2024-04-23 19:19:17 +00:00 — "Move nested payloads to installer and zip file on windows": Now that the llm runner is an executable and not just a DLL, more users are facing problems with security policy configurations on Windows that prevent users from writing to a directory and then executing binaries from that same location. This change removes payloads from the main executable on Windows and shifts them over to be packaged in the installer and discovered based on the executable's location. It also adds a new zip file for people who want to "roll their own" installation model. (This commit introduced the "// unused on windows" comment.)

File contents:

```go
package llm

import "embed"

// unused on windows
var libEmbed embed.FS
```