58d95cc9bd
This should resolve a number of memory-leak and stability defects by letting us isolate llama.cpp in a separate process, shut it down when idle, and restart it gracefully if it runs into problems. It also serves as a first step toward running multiple copies concurrently to support multiple models.
8 lines
95 B
Go
package llm

import (
	"embed"
)

//go:embed build/darwin/x86_64/*/bin/*
var libEmbed embed.FS