ollama/llm/generate

Latest commit: c3d321d405 by Daniel Hiltgen, 2024-10-12 09:56:49 -07:00
llm: Remove GGML_CUDA_NO_PEER_COPY for ROCm (#7174)
This workaround logic in llama.cpp is causing crashes for users with less system memory than VRAM.
File                  Last commit                                                                    Date
gen_common.sh         Fix build leakages (#7141)                                                     2024-10-08 13:04:59 -07:00
gen_darwin.sh         Re-introduce the llama package (#5034)                                         2024-10-08 08:53:54 -07:00
gen_linux.sh          llm: Remove GGML_CUDA_NO_PEER_COPY for ROCm (#7174)                            2024-10-12 09:56:49 -07:00
gen_windows.ps1       llm: Remove GGML_CUDA_NO_PEER_COPY for ROCm (#7174)                            2024-10-12 09:56:49 -07:00
generate_darwin.go    Revert "build.go: introduce a friendlier way to build Ollama (#3548)" (#3564)  2024-04-09 15:57:45 -07:00
generate_linux.go     Revert "build.go: introduce a friendlier way to build Ollama (#3548)" (#3564)  2024-04-09 15:57:45 -07:00
generate_windows.go   Revert "build.go: introduce a friendlier way to build Ollama (#3548)" (#3564)  2024-04-09 15:57:45 -07:00