ollama/gpu
Daniel Hiltgen 283948c83b Adjust windows ROCm discovery
The v5 HIP library returns unsupported GPUs which won't enumerate at
inference time in the runner, so this makes sure we align discovery. The
gfx906 cards are no longer supported, so we shouldn't compile with that
GPU type as it won't enumerate at runtime.
2024-07-20 15:17:50 -07:00
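For context, the idea described in the commit is to filter the devices the HIP runtime reports against the set of gfx targets the runner was actually built for, so that discovery and inference agree on which GPUs are usable. Below is a minimal, hypothetical Go sketch of that filtering step; the names supportedGFX, gpuInfo, and filterSupported are illustrative only and are not ollama's actual code.

```go
// Hypothetical sketch: filter GPUs reported by the HIP runtime against a
// build-time list of supported gfx targets, so discovery only returns
// devices the runner can actually load a model on.
package main

import (
	"fmt"
	"slices"
)

// supportedGFX is an assumed compile-target list; gfx906 is absent,
// matching the commit's note that it is no longer supported.
var supportedGFX = []string{"gfx1030", "gfx1100", "gfx1101", "gfx1102"}

type gpuInfo struct {
	ID         int
	GFXVersion string // e.g. "gfx906", as reported by the HIP runtime
}

// filterSupported drops GPUs whose gfx version the runner was not built
// for, keeping discovery consistent with what enumerates at inference time.
func filterSupported(devices []gpuInfo) []gpuInfo {
	var usable []gpuInfo
	for _, d := range devices {
		if slices.Contains(supportedGFX, d.GFXVersion) {
			usable = append(usable, d)
		} else {
			fmt.Printf("skipping GPU %d (%s): not a supported target\n", d.ID, d.GFXVersion)
		}
	}
	return usable
}

func main() {
	discovered := []gpuInfo{
		{ID: 0, GFXVersion: "gfx906"},  // reported by the v5 HIP library, but unusable
		{ID: 1, GFXVersion: "gfx1100"}, // supported target
	}
	fmt.Println(filterSupported(discovered))
}
```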
amd_common.go Bump ROCm on windows to 6.1.2 2024-07-10 11:01:22 -07:00
amd_hip_windows.go Adjust windows ROCm discovery 2024-07-20 15:17:50 -07:00
amd_linux.go Merge pull request #4875 from dhiltgen/rocm_gfx900_workaround 2024-06-15 07:38:58 -07:00
amd_windows.go Adjust windows ROCm discovery 2024-07-20 15:17:50 -07:00
assets.go err!=nil check 2024-06-20 09:30:59 -07:00
cpu_common.go review comments and coverage 2024-06-14 14:55:50 -07:00
cuda_common.go lint linux 2024-06-04 11:13:30 -07:00
gpu.go llm: avoid loading model if system memory is too small (#5637) 2024-07-11 16:42:57 -07:00
gpu_darwin.go llm: avoid loading model if system memory is too small (#5637) 2024-07-11 16:42:57 -07:00
gpu_info.h Reintroduce nvidia nvml library for windows 2024-06-14 14:51:40 -07:00
gpu_info_cudart.c Fix bad symbol load detection 2024-06-19 08:39:07 -07:00
gpu_info_cudart.h Refine GPU discovery to bootstrap once 2024-06-14 14:51:40 -07:00
gpu_info_darwin.h gpu: report system free memory instead of 0 (#5521) 2024-07-06 19:35:04 -04:00
gpu_info_darwin.m gpu: report system free memory instead of 0 (#5521) 2024-07-06 19:35:04 -04:00
gpu_info_nvcuda.c Better nvidia GPU discovery logging 2024-07-03 10:50:40 -07:00
gpu_info_nvcuda.h Better nvidia GPU discovery logging 2024-07-03 10:50:40 -07:00
gpu_info_nvml.c Fix bad symbol load detection 2024-06-19 08:39:07 -07:00
gpu_info_nvml.h Reintroduce nvidia nvml library for windows 2024-06-14 14:51:40 -07:00
gpu_info_oneapi.c get real func ptr. 2024-06-19 09:00:51 +08:00
gpu_info_oneapi.h review comments and coverage 2024-06-14 14:55:50 -07:00
gpu_linux.go llm: avoid loading model if system memory is too small (#5637) 2024-07-11 16:42:57 -07:00
gpu_oneapi.go support ollama run on Intel GPUs 2024-05-24 11:18:27 +08:00
gpu_test.go lint 2024-06-04 11:13:30 -07:00
gpu_windows.go llm: avoid loading model if system memory is too small (#5637) 2024-07-11 16:42:57 -07:00
types.go llm: avoid loading model if system memory is too small (#5637) 2024-07-11 16:42:57 -07:00