ollama/llm/llama.cpp
Latest commit: 1b991d0ba9 "Refine build to support CPU only" by Daniel Hiltgen (2023-12-19 09:05:46 -08:00)

If someone checks out the ollama repo and doesn't install the CUDA
library, this change ensures they can still build a CPU-only version.
All entries last updated 2023-12-19 09:05:46 -08:00:

gguf@328b83de23      Bump llama.cpp to b1662 and set n_parallel=1
patches              Bump llama.cpp to b1662 and set n_parallel=1
gen_common.sh        Build linux using ubuntu 20.04
gen_darwin.sh        Adapted rocm support to cgo based llama.cpp
gen_linux.sh         Refine build to support CPU only
gen_windows.ps1      Adapted rocm support to cgo based llama.cpp
generate_darwin.go   Add cgo implementation for llama.cpp
generate_linux.go    Adapted rocm support to cgo based llama.cpp
generate_windows.go  Add cgo implementation for llama.cpp