ollama/llm/llama.cpp
Daniel Hiltgen 325d74985b Fix CPU performance on hyperthreaded systems
The default thread count logic was broken and produced twice as many threads as it should on a hyperthreaded CPU, resulting in thrashing and poor performance.
2023-12-21 16:23:36 -08:00
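For context, here is a minimal Go sketch of the idea described in the commit above: on a 2-way SMT (hyperthreaded) CPU, defaulting to one compute thread per logical CPU doubles the intended count, so the default should instead be derived from physical cores. The defaultThreads helper and the fixed SMT factor of 2 are illustrative assumptions, not the actual change, which is carried in the llama.cpp patches under patches/.

```go
// Hypothetical sketch, not the actual patch: illustrates defaulting to one
// compute thread per physical core instead of one per logical CPU, since SMT
// siblings share execution units and oversubscribing them causes thrashing.
package main

import (
	"fmt"
	"runtime"
)

// defaultThreads assumes a 2-way SMT (hyperthreading) layout when deriving
// a thread count from the logical CPU count. This factor is an assumption
// for illustration; real hardware may expose a different topology.
func defaultThreads() int {
	logical := runtime.NumCPU() // logical CPUs, i.e. hardware threads
	physical := logical / 2     // assumed 2 hardware threads per core
	if physical < 1 {
		physical = 1
	}
	return physical
}

func main() {
	fmt.Printf("logical CPUs: %d, default llama.cpp threads: %d\n",
		runtime.NumCPU(), defaultThreads())
}
```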
gguf@328b83de23 Bump llama.cpp to b1662 and set n_parallel=1 2023-12-19 09:05:46 -08:00
patches Fix CPU performance on hyperthreaded systems 2023-12-21 16:23:36 -08:00
gen_common.sh Revamp the dynamic library shim 2023-12-20 14:45:57 -08:00
gen_darwin.sh Fix darwin intel build 2023-12-19 13:32:24 -08:00
gen_linux.sh Revamp the dynamic library shim 2023-12-20 14:45:57 -08:00
gen_windows.ps1 Revive windows build 2023-12-20 17:21:54 -08:00
generate_darwin.go Add cgo implementation for llama.cpp 2023-12-19 09:05:46 -08:00
generate_linux.go Adapted rocm support to cgo based llama.cpp 2023-12-19 09:05:46 -08:00
generate_windows.go Add cgo implementation for llama.cpp 2023-12-19 09:05:46 -08:00