ollama/llm

Latest commit: f221637053 by Bruce MacDonald (2023-09-12 11:04:35 -04:00)
first pass at linux gpu support (#454)

* linux gpu support
* handle multiple gpus
* add cuda docker image (#488)

Co-authored-by: Michael Yang <mxyng@pm.me>
llama.cpp    first pass at linux gpu support (#454)     2023-09-12 11:04:35 -04:00
ggml.go      GGUF support (#441)                        2023-09-07 13:55:37 -04:00
gguf.go      GGUF support (#441)                        2023-09-07 13:55:37 -04:00
llama.go     first pass at linux gpu support (#454)     2023-09-12 11:04:35 -04:00
llm.go       GGUF support (#441)                        2023-09-07 13:55:37 -04:00
utils.go     partial decode ggml bin for more info      2023-08-10 09:23:10 -07:00