ollama/llm
Latest commit: 2023-10-06 14:39:54 -07:00
Name          Last commit message                               Last commit date
llama.cpp     llm: fix build on amd64                           2023-10-06 14:39:54 -07:00
falcon.go     starcoder                                         2023-10-02 19:56:51 -07:00
ggml.go       starcoder                                         2023-10-02 19:56:51 -07:00
gguf.go       starcoder                                         2023-10-02 19:56:51 -07:00
llama.go      rename server subprocess (#700)                   2023-10-06 10:15:42 -04:00
llm.go        enable q8, q5, 5_1, and f32 for linux gpu (#699)  2023-10-05 12:53:47 -04:00
starcoder.go  starcoder                                         2023-10-02 19:56:51 -07:00
utils.go      partial decode ggml bin for more info             2023-08-10 09:23:10 -07:00