f457d63400
If the system has multiple NUMA nodes, enable NUMA support in llama.cpp. If numactl is detected in the PATH, use that; otherwise, fall back to the basic "distribute" mode.
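A minimal sketch of the detection logic described above, assuming the Linux sysfs layout (`/sys/devices/system/node/nodeN`) as the source of the node count; the names `numaNodeCount` and `numaMode` are illustrative, not the repository's actual identifiers:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// numaNodeCount counts NUMA nodes by listing the nodeN entries under
// /sys/devices/system/node. Returns 0 if the directory is unreadable
// (e.g. on non-Linux systems), which disables NUMA handling.
func numaNodeCount() int {
	entries, err := os.ReadDir("/sys/devices/system/node")
	if err != nil {
		return 0
	}
	count := 0
	for _, e := range entries {
		if strings.HasPrefix(e.Name(), "node") {
			count++
		}
	}
	return count
}

// numaMode picks a llama.cpp NUMA strategy: "numactl" when the numactl
// binary is found on PATH, otherwise the basic "distribute" mode. An
// empty string means the system has a single node and NUMA support
// should stay off.
func numaMode() string {
	if numaNodeCount() < 2 {
		return ""
	}
	if _, err := exec.LookPath("numactl"); err == nil {
		return "numactl"
	}
	return "distribute"
}

func main() {
	fmt.Println("selected NUMA mode:", numaMode())
}
```

Using `exec.LookPath` keeps the check identical to what a shell would resolve, so the numactl-managed mode is only selected when the binary would actually be invocable.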
amd_common.go
amd_hip_windows.go
amd_linux.go
amd_windows.go
assets.go
cpu_common.go
cuda_common.go
gpu.go
gpu_darwin.go
gpu_info.h
gpu_info_cudart.c
gpu_info_cudart.h
gpu_info_darwin.h
gpu_info_darwin.m
gpu_info_nvcuda.c
gpu_info_nvcuda.h
gpu_info_nvml.c
gpu_info_nvml.h
gpu_info_oneapi.c
gpu_info_oneapi.h
gpu_linux.go
gpu_oneapi.go
gpu_test.go
gpu_windows.go
types.go