ollama/llm/llama.cpp
Daniel Hiltgen ddbfa6fe31 Fix CPU only builds
Go embed doesn't like it when there are no matching files, so put
a dummy placeholder in to allow building without any GPU support.
If no "server" library is found, it's safely ignored at runtime.
2024-01-03 16:08:34 -08:00
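
The commit message describes a standard workaround for Go's //go:embed directive, which refuses to compile when an embed pattern matches no files. Below is a minimal sketch of the idea; the embed path, the "placeholder" filename, and the availableServers helper are illustrative assumptions, not ollama's actual code.

    package llm

    import (
    	"embed"
    	"io/fs"
    	"log"
    )

    // go:embed fails the build when a pattern matches no files, so a
    // CPU-only build ships a dummy "placeholder" file under the embedded
    // directory to keep the pattern valid even when no GPU server
    // libraries were generated. Paths and names here are illustrative.
    //
    //go:embed build/lib/*
    var libEmbed embed.FS

    // availableServers lists any embedded server libraries. When only the
    // placeholder is present it returns nothing, and callers fall back to
    // the built-in CPU runtime instead of failing.
    func availableServers() []string {
    	var servers []string
    	fs.WalkDir(libEmbed, ".", func(path string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() || d.Name() == "placeholder" {
    			return nil // skip unreadable entries, directories, and the dummy file
    		}
    		servers = append(servers, path)
    		return nil
    	})
    	if len(servers) == 0 {
    		log.Println("no GPU server library embedded; continuing with CPU-only inference")
    	}
    	return servers
    }

With this shape, a CPU-only build compiles because the placeholder satisfies the embed pattern, while GPU builds that do generate libraries are picked up by the same walk, matching the "safely ignored at runtime" behavior the commit describes.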
File                  Last commit                                    Date
gguf@328b83de23       Bump llama.cpp to b1662 and set n_parallel=1   2023-12-19 09:05:46 -08:00
CMakeLists.txt        Rename the ollama cmakefile                    2024-01-02 15:36:16 -08:00
ext_server.cpp        Get rid of one-line llama.log                  2024-01-02 15:36:16 -08:00
ext_server.h          Refactor how we augment llama.cpp              2024-01-02 15:35:55 -08:00
gen_common.sh         Rename the ollama cmakefile                    2024-01-02 15:36:16 -08:00
gen_darwin.sh         Switch windows build to fully dynamic          2024-01-02 15:36:16 -08:00
gen_linux.sh          Fix CPU only builds                            2024-01-03 16:08:34 -08:00
gen_windows.ps1       Rename the ollama cmakefile                    2024-01-02 15:36:16 -08:00
generate_darwin.go    Add cgo implementation for llama.cpp           2023-12-19 09:05:46 -08:00
generate_linux.go     Adapted rocm support to cgo based llama.cpp    2023-12-19 09:05:46 -08:00
generate_windows.go   Add cgo implementation for llama.cpp           2023-12-19 09:05:46 -08:00