7555ea44f8
This switches the default llama.cpp build to be CPU based, and builds the GPU variants as dynamically loaded libraries that we can select at runtime. This also bumps the ROCm library to version 6, since 5.7 builds don't work with the latest ROCm release that just shipped.
11 lines
358 B
Go
package gpu

// Beginning of an `ollama info` command
type GpuInfo struct {
	Driver      string `json:"driver,omitempty"`
	Library     string `json:"library,omitempty"`
	TotalMemory uint64 `json:"total_memory,omitempty"`
	FreeMemory  uint64 `json:"free_memory,omitempty"`

	// TODO add other useful attributes about the card here for discovery information
}