ollama/llama/make

Latest commit: Daniel Hiltgen, 2024-11-07 14:26:31 -08:00
Workaround buggy P2P ROCm copy on windows (#7466)
This enables the workaround code only for Windows, which should help Windows users with multiple AMD GPUs.
common-defs.make   Soften windows clang requirement (#7428)                  2024-10-30 12:28:36 -07:00
cuda.make          Improve dependency gathering logic (#7345)                2024-10-24 09:51:53 -07:00
gpu.make           Be explicit for gpu library link dir (#7560)              2024-11-07 09:20:40 -08:00
Makefile.cuda_v11  Re-introduce the llama package (#5034)                    2024-10-08 08:53:54 -07:00
Makefile.cuda_v12  Re-introduce the llama package (#5034)                    2024-10-08 08:53:54 -07:00
Makefile.default   Improve dependency gathering logic (#7345)                2024-10-24 09:51:53 -07:00
Makefile.rocm      Workaround buggy P2P ROCm copy on windows (#7466)         2024-11-07 14:26:31 -08:00
Makefile.sync      Remove submodule and shift to Go server - 0.4.0 (#7157)   2024-10-30 10:34:28 -07:00