ollama/llm/llama.cpp
Daniel Hiltgen d9cd3d9667 Revive windows build
The native Windows setup still needs more work, but this gets it building again;
if you set PATH properly, you can run the resulting exe on a CUDA system.
2023-12-20 17:21:54 -08:00
gguf@328b83de23      Bump llama.cpp to b1662 and set n_parallel=1   2023-12-19 09:05:46 -08:00
patches              Bump llama.cpp to b1662 and set n_parallel=1   2023-12-19 09:05:46 -08:00
gen_common.sh        Revamp the dynamic library shim                2023-12-20 14:45:57 -08:00
gen_darwin.sh        Fix darwin intel build                         2023-12-19 13:32:24 -08:00
gen_linux.sh         Revamp the dynamic library shim                2023-12-20 14:45:57 -08:00
gen_windows.ps1      Revive windows build                           2023-12-20 17:21:54 -08:00
generate_darwin.go   Add cgo implementation for llama.cpp           2023-12-19 09:05:46 -08:00
generate_linux.go    Adapted rocm support to cgo based llama.cpp    2023-12-19 09:05:46 -08:00
generate_windows.go  Add cgo implementation for llama.cpp           2023-12-19 09:05:46 -08:00