ollama/llm
File        Last commit message                                                      Last commit date
llama.cpp   Update runner to support mixtral and mixture of experts (MoE) (#1475)   2023-12-13 17:15:10 -05:00
ggml.go     seek to end of file when decoding older model formats                    2023-12-09 21:14:35 -05:00
gguf.go     remove per-model types                                                   2023-12-11 09:40:21 -08:00
llama.go    exponential back-off (#1484)                                             2023-12-12 12:33:02 -05:00
llm.go      load projectors                                                          2023-12-05 14:36:12 -08:00
utils.go    partial decode ggml bin for more info                                    2023-08-10 09:23:10 -07:00
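The ggml.go and utils.go rows above refer to partially decoding a GGML bin file and seeking to the end of the file when handling older model formats. The sketch below is not the ollama code, only a minimal Go illustration of that general idea under assumed names (the file path and what counts as "useful metadata" are placeholders): read just the leading 4-byte magic, then seek straight to the end instead of parsing the full tensor payload.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	// Illustrative path; substitute any local GGML/GGUF model file.
	f, err := os.Open("model.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Partial decode: read only the 4-byte magic at the start of the file
	// rather than walking every tensor.
	var magic uint32
	if err := binary.Read(f, binary.LittleEndian, &magic); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("file magic: %#x\n", magic)

	// Skip the rest of the payload by seeking to the end; the returned
	// offset also gives the total file size.
	size, err := f.Seek(0, io.SeekEnd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("file size: %d bytes\n", size)
}
```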
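The llama.go row mentions exponential back-off (#1484). As a generic illustration only (the helper name, retry count, and base delay are assumptions, not the actual llama.go code), a retry loop whose wait doubles after each failed attempt can be sketched in Go as:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff is a hypothetical helper: it retries fn up to maxRetries
// times, doubling the wait between attempts (exponential back-off).
func retryWithBackoff(maxRetries int, baseDelay time.Duration, fn func() error) error {
	var err error
	for attempt := 0; attempt < maxRetries; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		// Wait 1x, 2x, 4x, ... the base delay before the next attempt.
		time.Sleep(baseDelay << attempt)
	}
	return fmt.Errorf("all %d attempts failed: %w", maxRetries, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 100*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("server not ready")
		}
		return nil
	})
	fmt.Println("calls:", calls, "err:", err)
}
```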