ollama/llm/llama.cpp
Daniel Hiltgen e9ce91e9a6 Load dynamic cpu lib on windows
On Linux, we link the CPU library into the Go app and fall back to it
when no GPU match is found. On Windows we do not link in the CPU library
so that we can better control our dependencies for the CLI. This fixes
the logic so we correctly fall back to the dynamic CPU library
on Windows.
2024-01-04 08:41:41 -08:00
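The fallback described above can be sketched as a small Go helper. This is an illustrative sketch only, not the actual ollama code: the function name `pickLibrary` and the library file names are hypothetical, chosen to show the platform split the commit message describes (CPU backend linked in on Linux, loaded as a dynamic library on Windows).

```go
package main

import "fmt"

// pickLibrary sketches the fallback logic from the commit message.
// On Linux the CPU backend is compiled into the Go binary, so when no
// GPU matches there is nothing to load. On Windows the CPU backend
// ships as a separate dynamic library that must be loaded explicitly.
// All names here are hypothetical, for illustration only.
func pickLibrary(goos string, gpuMatched bool, gpuLib string) string {
	if gpuMatched {
		// A GPU backend was found; use its dynamic library.
		return gpuLib
	}
	if goos == "windows" {
		// Windows: fall back to the dynamically loaded CPU library.
		return "ext_server_cpu.dll"
	}
	// Linux/macOS: CPU backend is linked in; no library to load.
	return ""
}

func main() {
	fmt.Println(pickLibrary("windows", false, ""))
	fmt.Println(pickLibrary("linux", false, ""))
}
```

Keeping the CPU backend out of the Windows binary keeps the CLI's link-time dependencies minimal, at the cost of needing this explicit runtime fallback.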
gguf@328b83de23 Bump llama.cpp to b1662 and set n_parallel=1 2023-12-19 09:05:46 -08:00
CMakeLists.txt Rename the ollama cmakefile 2024-01-02 15:36:16 -08:00
ext_server.cpp Get rid of one-line llama.log 2024-01-02 15:36:16 -08:00
ext_server.h Refactor how we augment llama.cpp 2024-01-02 15:35:55 -08:00
gen_common.sh update cmake flags for amd64 macOS (#1780) 2024-01-03 19:22:15 -05:00
gen_darwin.sh update cmake flags for amd64 macOS (#1780) 2024-01-03 19:22:15 -05:00
gen_linux.sh update cmake flags for amd64 macOS (#1780) 2024-01-03 19:22:15 -05:00
gen_windows.ps1 Load dynamic cpu lib on windows 2024-01-04 08:41:41 -08:00
generate_darwin.go Add cgo implementation for llama.cpp 2023-12-19 09:05:46 -08:00
generate_linux.go Adapted rocm support to cgo based llama.cpp 2023-12-19 09:05:46 -08:00
generate_windows.go Add cgo implementation for llama.cpp 2023-12-19 09:05:46 -08:00