ollama/llm/llama.cpp
Bruce MacDonald 66003e1d05
subprocess improvements (#524)
* subprocess improvements

- increase start-up timeout
- when a runner fails to start, fail immediately rather than timing out
- try runners in order rather than choosing a single runner (see the sketch below)
- embed the metal runner in the metal dir rather than the gpu dir
- refactor logging and error messages

* Update llama.go

* Update llama.go

* simplify by using glob
2023-09-18 15:16:32 -04:00
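The pattern this commit describes can be illustrated with a short Go sketch. The code below is not ollama's actual llama.go; the helper names (startRunner, newRunner), the glob pattern, and the paths are assumptions made for illustration. It shows the three ideas from the commit message: glob for runner binaries, try them in order, and fail fast when a runner process exits during start-up instead of waiting out the timeout.

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"path/filepath"
	"time"
)

// startRunner is a hypothetical helper: it launches a runner binary and fails
// immediately if the process exits during the start-up window, rather than
// letting the full timeout expire.
func startRunner(binary string, timeout time.Duration) (*exec.Cmd, error) {
	cmd := exec.Command(binary)
	if err := cmd.Start(); err != nil {
		return nil, err
	}

	exited := make(chan error, 1)
	go func() { exited <- cmd.Wait() }()

	select {
	case err := <-exited:
		// The runner died while starting up: surface the error right away.
		return nil, fmt.Errorf("runner %s failed to start: %w", binary, err)
	case <-time.After(timeout):
		// Placeholder for readiness: a real implementation would poll the
		// runner's HTTP endpoint instead of just waiting out a grace period.
		return cmd, nil
	}
}

// newRunner globs for runner binaries (glob pattern is an assumption) and
// tries each one in order, returning the first that starts successfully.
func newRunner(runnerDir string) (*exec.Cmd, error) {
	runners, err := filepath.Glob(filepath.Join(runnerDir, "*", "server"))
	if err != nil {
		return nil, err
	}

	var errs []error
	for _, runner := range runners {
		cmd, err := startRunner(runner, 3*time.Second)
		if err != nil {
			errs = append(errs, err)
			continue
		}
		return cmd, nil
	}
	return nil, errors.Join(append(errs, errors.New("no runner could be started"))...)
}

func main() {
	cmd, err := newRunner("llm/llama.cpp/ggml/build")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("started runner, pid:", cmd.Process.Pid)
	cmd.Process.Kill()
}
```

Failing fast on process exit keeps a broken runner (for example, one missing a GPU library) from stalling start-up for the full timeout before the next candidate in the glob order is tried.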
ggml@9e232f0234 subprocess llama.cpp server (#401) 2023-08-30 16:35:03 -04:00
ggml_patch fix ggml arm64 cuda build (#520) 2023-09-12 17:06:48 -04:00
gguf@53885d7256 GGUF support (#441) 2023-09-07 13:55:37 -04:00
generate.go first pass at linux gpu support (#454) 2023-09-12 11:04:35 -04:00
generate_darwin_amd64.go first pass at linux gpu support (#454) 2023-09-12 11:04:35 -04:00
generate_darwin_arm64.go subprocess improvements (#524) 2023-09-18 15:16:32 -04:00
generate_linux.go support for packaging in multiple cuda runners (#509) 2023-09-14 15:08:13 -04:00