e9ce91e9a6
On Linux, we link the CPU library into the Go app and fall back to it when no GPU match is found. On Windows, we do not link in the CPU library so that we can better control our dependencies for the CLI. This fixes the logic so that we correctly fall back to the dynamic CPU library on Windows.
12 lines
466 B
Go
package llm

import (
	"github.com/jmorganca/ollama/api"
)

func newDefaultExtServer(model string, adapters, projectors []string, numLayers int64, opts api.Options) (extServer, error) {
	// On windows we always load the llama.cpp libraries dynamically to avoid startup DLL dependencies
	// This ensures we can update the PATH at runtime to get everything loaded

	return newDynamicShimExtServer(AvailableShims["cpu"], model, adapters, projectors, numLayers, opts)
}
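The fallback described in the commit message could be sketched as a registry lookup: variant names map to dynamic libraries, a GPU variant is tried first, and the loader falls back to the `"cpu"` entry when no GPU match exists. This is a minimal, hypothetical illustration — `availableShims` and `loadShim` are illustrative stand-ins, not the real ollama API.

```go
package main

import (
	"errors"
	"fmt"
)

// availableShims is a hypothetical registry mapping backend variant names
// (e.g. "cpu", "cuda") to the dynamic library that implements them. Here only
// the CPU shim is registered, mirroring the case where no GPU match is found.
var availableShims = map[string]string{
	"cpu": "ext_server_cpu.dll",
}

// loadShim looks up the library for a variant, returning an error when the
// variant is not registered.
func loadShim(variant string) (string, error) {
	lib, ok := availableShims[variant]
	if !ok {
		return "", errors.New("no shim for variant " + variant)
	}
	return lib, nil
}

func main() {
	// Try a GPU variant first; fall back to the dynamic CPU shim on a miss.
	lib, err := loadShim("cuda")
	if err != nil {
		lib, _ = loadShim("cpu")
	}
	fmt.Println(lib)
}
```

The real code takes the same shape: `newDefaultExtServer` on Windows goes straight to `newDynamicShimExtServer(AvailableShims["cpu"], ...)` once the GPU paths have been exhausted.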