ollama/llm

Latest commit: c5ff443b9f by Daniel Hiltgen, "Handle very slow model loads" (2024-04-09 16:35:10 -07:00)
Commit description: During testing, we're seeing some models take over 3 minutes to load.
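
The latest commit concerns giving the server more patience while a model loads, since some models were observed taking over three minutes during testing. As a rough illustration only (not the actual change in server.go), a Go sketch of waiting on a subprocess health endpoint with a generous deadline might look like the following; the function name, health URL, 5-minute budget, and poll interval are all assumptions for the example:

```go
// Hypothetical sketch of tolerating very slow model loads: poll a health
// endpoint until it reports ready or a generous deadline passes. This is
// illustrative and does not reproduce the code in ollama/llm/server.go.
package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// waitForModelLoad polls healthURL until it returns 200 OK or the deadline
// expires. The 5-minute budget is an assumed value chosen to exceed the
// ~3-minute loads mentioned in the commit, not a constant from the codebase.
func waitForModelLoad(ctx context.Context, healthURL string) error {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Minute)
	defer cancel()

	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for model to load")
		case <-ticker.C:
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, healthURL, nil)
			if err != nil {
				return err
			}
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				// The runner subprocess may not be listening yet; keep polling.
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	// Example usage against a placeholder local URL.
	if err := waitForModelLoad(context.Background(), "http://127.0.0.1:8080/health"); err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Println("model ready")
}
```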
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ext_server | Apply 01-cache.diff | 2024-04-01 16:48:18 -07:00 |
| generate | Revert "build.go: introduce a friendlier way to build Ollama (#3548)" (#3564) | 2024-04-09 15:57:45 -07:00 |
| llama.cpp @ 1b67731e18 | update llama.cpp submodule to 1b67731 (#3561) | 2024-04-09 15:10:17 -04:00 |
| patches | Bump to b2581 | 2024-04-02 11:53:07 -07:00 |
| ggla.go | refactor model parsing | 2024-04-01 13:16:15 -07:00 |
| ggml.go | add command-r graph estimate | 2024-04-04 14:07:24 -07:00 |
| gguf.go | refactor model parsing | 2024-04-01 13:16:15 -07:00 |
| llm.go | cgo quantize | 2024-04-08 15:31:08 -07:00 |
| llm_darwin_amd64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_darwin_arm64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_linux.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_windows.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| payload.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| server.go | Handle very slow model loads | 2024-04-09 16:35:10 -07:00 |
| status.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |