8da7bef05f
In some cases we may want multiple variants for a given GPU or CPU type. This adds logic for an optional Variant which we can use to select an optimal library, but which also allows us to try multiple variants in case some fail to load. This can be useful for scenarios such as ROCm v5 vs. v6 incompatibility, or potentially CPU feature levels.
15 lines
540 B
Go
package llm

import (
	"fmt"

	"github.com/jmorganca/ollama/api"
)

func newDefaultExtServer(model string, adapters, projectors []string, opts api.Options) (extServer, error) {
	// On windows we always load the llama.cpp libraries dynamically to avoid startup DLL dependencies
	// This ensures we can update the PATH at runtime to get everything loaded

	// This should never happen as we'll always try to load one or more cpu dynamic libraries before hitting default
	return nil, fmt.Errorf("no available default llm library on windows")
}