ollama/llm (last commit: 2023-11-26 15:59:49 -05:00)
File          Last commit message                                             Date
llama.cpp     add back f16c instructions on intel mac                         2023-11-26 15:59:49 -05:00
falcon.go     starcoder                                                       2023-10-02 19:56:51 -07:00
ggml.go       ggufv3                                                          2023-10-23 09:35:49 -07:00
gguf.go       fix: gguf int type                                              2023-11-22 11:40:30 -08:00
llama.go      windows CUDA support (#1262)                                    2023-11-24 17:16:36 -05:00
llm.go        recent llama.cpp update added kernels for fp32, q5_0, and q5_1  2023-11-20 13:44:31 -08:00
starcoder.go  starcoder                                                       2023-10-02 19:56:51 -07:00
utils.go      partial decode ggml bin for more info                           2023-08-10 09:23:10 -07:00