baalajimaestro/ollama · llm/generate/generate_darwin.go · commit cb03fc9571
4 lines · 53 B · Go
History (blame) view: each file line appears beneath the commit that last changed it.
Code shuffle to clean up the llm dir (2024-01-04 17:40:15 +00:00)

package generate
Add cgo implementation for llama.cpp: run server.cpp directly inside the Go runtime via cgo while retaining the LLM Go abstractions. (2023-11-14 01:20:34 +00:00)
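For context, here is a minimal sketch of the cgo pattern this commit describes. The C function llama_server_start and its signature are stand-ins for illustration, not ollama's actual binding to server.cpp:

```go
package generate

/*
#include <stdio.h>
#include <stdlib.h>

// Stand-in for the C entry point a real cgo binding would expose from the
// compiled llama.cpp server; the actual symbol and signature are different.
static int llama_server_start(const char *model_path) {
	printf("loading %s\n", model_path);
	return 0;
}
*/
import "C"

import (
	"fmt"
	"unsafe"
)

// startInProcess illustrates the in-process (cgo) approach: Go calls directly
// into C, so the model server runs inside the Go program's address space.
func startInProcess(modelPath string) error {
	cPath := C.CString(modelPath)
	defer C.free(unsafe.Pointer(cPath))
	if rc := C.llama_server_start(cPath); rc != 0 {
		return fmt.Errorf("llama_server_start returned %d", rc)
	}
	return nil
}
```

The drawback, noted in the later commit below, is that everything shares one address space, so any leak or crash inside llama.cpp affects the whole Go process.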
Switch back to subprocessing for llama.cpp: this should resolve a number of memory leak and stability defects by allowing us to isolate llama.cpp in a separate process, shut it down when idle, and gracefully restart it if it has problems. This also serves as a first step toward running multiple copies to support multiple models concurrently. (2024-03-14 17:24:13 +00:00)
//go:generate bash ./gen_darwin.sh
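The //go:generate directive above is the file's only functional content: running the standard go generate command in this package executes gen_darwin.sh, which presumably produces the macOS-specific llama.cpp build artifacts (the script itself is not part of this view).

To illustrate the subprocessing approach described in the 2024-03-14 commit, here is a rough Go sketch of launching the llama.cpp server as a child process with an idle shutdown; the binary name, flags, and port are placeholders rather than ollama's real invocation:

```go
package generate

import (
	"context"
	"os/exec"
	"time"
)

// runIsolatedServer sketches the subprocess approach: the llama.cpp server
// runs as a child process, so memory leaks or crashes stay isolated from the
// Go runtime, and the server can simply be stopped after an idle period.
// The binary name, flags, and port below are placeholders, not ollama's CLI.
func runIsolatedServer(parent context.Context, idle time.Duration) error {
	ctx, cancel := context.WithCancel(parent)
	defer cancel()

	cmd := exec.CommandContext(ctx, "./llama-server", "--port", "11434")
	if err := cmd.Start(); err != nil {
		return err
	}

	// Stop the child once the idle timeout expires; a real implementation
	// would reset this timer on each request and restart the child on failure.
	timer := time.AfterFunc(idle, cancel)
	defer timer.Stop()

	return cmd.Wait()
}
```

Because the server is a separate OS process, restarting after a failure is just a matter of spawning a new child, and running several children is the natural path to serving multiple models concurrently, as the commit message suggests.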