ollama/llm
Path | Last commit message | Last commit date
ext_server | Apply 01-cache.diff | 2024-04-01 16:48:18 -07:00
generate | Revert "build.go: introduce a friendlier way to build Ollama (#3548)" (#3564) | 2024-04-09 15:57:45 -07:00
llama.cpp@4bd0f93e4a | update llama.cpp submodule to 4bd0f93 (#3627) | 2024-04-13 10:43:02 -07:00
patches | Bump to b2581 | 2024-04-02 11:53:07 -07:00
ggla.go | refactor tensor query | 2024-04-10 11:37:20 -07:00
ggml.go | mixtral mem | 2024-04-11 11:10:41 -07:00
gguf.go | refactor tensor query | 2024-04-10 11:37:20 -07:00
llm.go | cgo quantize | 2024-04-08 15:31:08 -07:00
llm_darwin_amd64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00
llm_darwin_arm64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00
llm_linux.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00
llm_windows.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00
payload.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00
server.go | partial offloading | 2024-04-10 11:37:20 -07:00
status.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00
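Files such as ggla.go, ggml.go, and gguf.go handle model file formats. As a hedged illustration only, and not ollama's actual implementation, the sketch below reads the fixed GGUF header (magic, version, tensor count, metadata key/value count) that a gguf.go-style parser would inspect first; the field layout follows the public GGUF specification and assumes a GGUF v2/v3 file, where the counts are 64-bit.

package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

// ggufMagic is the ASCII bytes "GGUF" read as a little-endian uint32.
const ggufMagic = 0x46554747

func main() {
	f, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// All GGUF header fields are little-endian and fixed-size,
	// so they can be read directly into a struct.
	var header struct {
		Magic   uint32 // must equal ggufMagic
		Version uint32 // GGUF spec version, e.g. 3
		Tensors uint64 // number of tensor entries after the metadata
		KVs     uint64 // number of metadata key/value pairs
	}
	if err := binary.Read(f, binary.LittleEndian, &header); err != nil {
		panic(err)
	}
	if header.Magic != ggufMagic {
		panic("not a GGUF file")
	}
	fmt.Printf("GGUF v%d: %d tensors, %d metadata keys\n",
		header.Version, header.Tensors, header.KVs)
}

A real parser would continue by walking the metadata key/value pairs (architecture, context length, tensor layout) and the tensor descriptors that follow this header; the snippet stops at the header to keep the example self-contained.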