ollama/llm/llama.cpp (last updated 2023-09-20 17:58:16 +01:00)
Name                      Last commit message                     Last commit date
ggml@9e232f0234           subprocess llama.cpp server (#401)      2023-08-30 16:35:03 -04:00
ggml_patch                fix ggml arm64 cuda build (#520)        2023-09-12 17:06:48 -04:00
gguf@53885d7256           GGUF support (#441)                     2023-09-07 13:55:37 -04:00
generate.go               first pass at linux gpu support (#454)  2023-09-12 11:04:35 -04:00
generate_darwin_amd64.go  first pass at linux gpu support (#454)  2023-09-12 11:04:35 -04:00
generate_darwin_arm64.go  subprocess improvements (#524)          2023-09-18 15:16:32 -04:00
generate_linux.go         use cuda_version                        2023-09-20 17:58:16 +01:00
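The ggml@… and gguf@… entries are git submodules pinned at the listed commits, and the generate_*.go files are per-platform build drivers. As a rough sketch only: in Go projects this layout usually means each file carries //go:generate directives that check out the vendored sources and compile them when `go generate` is run. The specific commands, cmake flags, and build targets below are illustrative assumptions, not taken from this listing.

```go
package llm

// Hypothetical sketch of a per-platform generate file; the real
// directives, flags, and targets in this repository may differ.
//
// Running `go generate ./...` executes each directive below in order.

//go:generate git submodule update --init --recursive
//go:generate cmake -S ggml -B ggml/build/cpu -DCMAKE_BUILD_TYPE=Release
//go:generate cmake --build ggml/build/cpu --target server
```

Splitting the directives across generate_darwin_amd64.go, generate_darwin_arm64.go, and generate_linux.go lets Go's build constraints pick the right set of build commands for each OS/architecture pair.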