ollama/gpu (all entries last committed 2023-12-19 09:05:46 -08:00)

File              Last commit message
gpu.go            Refine build to support CPU only
gpu_darwin.go     Adapted rocm support to cgo based llama.cpp
gpu_info.h        Adapted rocm support to cgo based llama.cpp
gpu_info_cpu.c    Adapted rocm support to cgo based llama.cpp
gpu_info_cuda.c   Add WSL2 path to nvidia-ml.so library
gpu_info_cuda.h   Adapted rocm support to cgo based llama.cpp
gpu_info_rocm.c   Refine build to support CPU only
gpu_info_rocm.h   Adapted rocm support to cgo based llama.cpp
gpu_test.go       Adapted rocm support to cgo based llama.cpp
types.go          Adapted rocm support to cgo based llama.cpp