ollama/llm/llama.cpp

Latest commit: f221637053 by Bruce MacDonald (2023-09-12 11:04:35 -04:00)
first pass at linux gpu support (#454)

* linux gpu support
* handle multiple gpus
* add cuda docker image (#488)

Co-authored-by: Michael Yang <mxyng@pm.me>
File                       Last commit                                      Date
ggml@9e232f0234            subprocess llama.cpp server (#401)               2023-08-30 16:35:03 -04:00
ggml_patch                 metal: add missing barriers for mul-mat (#469)   2023-09-05 19:37:13 -04:00
gguf@53885d7256            GGUF support (#441)                              2023-09-07 13:55:37 -04:00
generate.go                first pass at linux gpu support (#454)           2023-09-12 11:04:35 -04:00
generate_darwin_amd64.go   first pass at linux gpu support (#454)           2023-09-12 11:04:35 -04:00
generate_darwin_arm64.go   first pass at linux gpu support (#454)           2023-09-12 11:04:35 -04:00
generate_linux.go          first pass at linux gpu support (#454)           2023-09-12 11:04:35 -04:00