ollama/llm
Name        Last commit message                      Last commit date
llama.cpp   use cuda_version                         2023-09-20 17:58:16 +01:00
falcon.go   fix: add falcon.go                       2023-09-13 14:47:37 -07:00
ggml.go     subprocess improvements (#524)           2023-09-18 15:16:32 -04:00
gguf.go     subprocess improvements (#524)           2023-09-18 15:16:32 -04:00
llama.go    pack in cuda libs                        2023-09-20 17:40:42 +01:00
llm.go      fix falcon decode                        2023-09-12 12:34:53 -07:00
utils.go    partial decode ggml bin for more info    2023-08-10 09:23:10 -07:00