ollama/llm (latest commit 2024-04-04 09:51:26 -07:00)
| Name | Last commit | Date |
| --- | --- | --- |
| ext_server/ | Apply 01-cache.diff | 2024-04-01 16:48:18 -07:00 |
| generate/ | Fail fast if mingw missing on windows | 2024-04-04 09:51:26 -07:00 |
| llama.cpp @ 37e7854c10 (submodule) | Bump to b2581 | 2024-04-02 11:53:07 -07:00 |
| patches/ | Bump to b2581 | 2024-04-02 11:53:07 -07:00 |
| ggla.go | refactor model parsing | 2024-04-01 13:16:15 -07:00 |
| ggml.go | update graph size estimate | 2024-04-03 13:34:12 -07:00 |
| gguf.go | refactor model parsing | 2024-04-01 13:16:15 -07:00 |
| llm.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_darwin_amd64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_darwin_arm64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_linux.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_windows.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| payload.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| server.go | update graph size estimate | 2024-04-03 13:34:12 -07:00 |
| status.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |