ollama/gpu
Daniel Hiltgen 7555ea44f8 Revamp the dynamic library shim
This switches the default llama.cpp build to be CPU-based, and builds the GPU variants
as dynamically loaded libraries that we can select at runtime.

This also bumps the ROCm library to version 6, since 5.7 builds don't work
on the latest ROCm library that just shipped.
2023-12-20 14:45:57 -08:00
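Since the commit describes building GPU variants as dynamically loaded libraries selected at runtime, here is a minimal Go sketch of what that selection logic might look like. Everything here is hypothetical: GpuInfo, shimFor, the library file names, and the OLLAMA_TEST_LIB override are illustrative stand-ins, not the repo's actual API.

```go
package main

import (
	"fmt"
	"os"
)

// GpuInfo is a stand-in for the detection result; the real struct lives
// in gpu.go / types.go and likely carries more fields.
type GpuInfo struct {
	Library string // e.g. "cuda", "rocm", or "cpu" (assumed values)
}

// shimFor maps the detected hardware to the dynamic library variant to
// load. The file names are illustrative, not the repo's actual artifacts.
func shimFor(info GpuInfo) string {
	switch info.Library {
	case "cuda":
		return "libllama_cuda.so"
	case "rocm":
		return "libllama_rocm.so"
	default:
		// CPU-based llama.cpp is the default per the commit message.
		return "libllama_cpu.so"
	}
}

func main() {
	// Hypothetical env override, handy for forcing a variant in tests.
	info := GpuInfo{Library: os.Getenv("OLLAMA_TEST_LIB")}
	fmt.Println("would load:", shimFor(info))
}
```

The design point the commit is making: keep one always-safe CPU build as the fallthrough case, and treat every GPU backend as an optional, separately built artifact chosen only after hardware detection succeeds.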
gpu.go Revamp the dynamic library shim 2023-12-20 14:45:57 -08:00
gpu_darwin.go Revamp the dynamic library shim 2023-12-20 14:45:57 -08:00
gpu_info.h Adapted rocm support to cgo based llama.cpp 2023-12-19 09:05:46 -08:00
gpu_info_cpu.c Adapted rocm support to cgo based llama.cpp 2023-12-19 09:05:46 -08:00
gpu_info_cuda.c Additional nvidia-ml path to check (see the path-probing sketch after this list) 2023-12-19 15:52:34 -08:00
gpu_info_cuda.h Adapted rocm support to cgo based llama.cpp 2023-12-19 09:05:46 -08:00
gpu_info_rocm.c Refine build to support CPU only 2023-12-19 09:05:46 -08:00
gpu_info_rocm.h Adapted rocm support to cgo based llama.cpp 2023-12-19 09:05:46 -08:00
gpu_test.go Adapted rocm support to cgo based llama.cpp 2023-12-19 09:05:46 -08:00
types.go Revamp the dynamic library shim 2023-12-20 14:45:57 -08:00
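The gpu_info_cuda.c entry above ("Additional nvidia-ml path to check") suggests CUDA detection works by probing a list of candidate locations for the NVIDIA management library. Below is a minimal Go sketch of that idea under stated assumptions: the candidate paths and the findNvml helper are invented for illustration, and the real code is C that resolves the library via dlopen rather than a simple file-existence check.

```go
package main

import (
	"fmt"
	"os"
)

// Candidate locations for libnvidia-ml.so; the actual list in
// gpu_info_cuda.c differs and may include distro-specific paths.
var nvmlCandidates = []string{
	"/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1",
	"/usr/lib64/libnvidia-ml.so.1",
	"/opt/cuda/lib64/libnvidia-ml.so.1",
}

// findNvml returns the first candidate path that exists on disk.
func findNvml() (string, bool) {
	for _, p := range nvmlCandidates {
		if _, err := os.Stat(p); err == nil {
			return p, true
		}
	}
	return "", false
}

func main() {
	if p, ok := findNvml(); ok {
		fmt.Println("found nvidia-ml at:", p)
	} else {
		fmt.Println("no nvidia-ml library found; falling back to CPU")
	}
}
```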