ollama/llm
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ext_server | Apply 01-cache.diff | 2024-04-01 16:48:18 -07:00 |
| generate | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llama.cpp@ad3a0505e3 | Bump llama.cpp to b2527 | 2024-03-25 13:47:44 -07:00 |
| patches | Bump llama.cpp to b2474 | 2024-03-23 09:54:56 +01:00 |
| ggla.go | refactor model parsing | 2024-04-01 13:16:15 -07:00 |
| ggml.go | update memory calcualtions | 2024-04-01 13:16:32 -07:00 |
| gguf.go | refactor model parsing | 2024-04-01 13:16:15 -07:00 |
| llm.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_darwin_amd64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_darwin_arm64.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_linux.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| llm_windows.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| payload.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| server.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
| status.go | Switch back to subprocessing for llama.cpp | 2024-04-01 16:48:18 -07:00 |
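
Most of these files share the commit "Switch back to subprocessing for llama.cpp": instead of driving llama.cpp in-process, the runner is launched as a child process that the Go code supervises over its standard streams. Below is a minimal, hypothetical sketch of that pattern using `os/exec`; the binary name (`./llama-server`), flags, and readiness log line are illustrative assumptions, not ollama's actual interface.

```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// runServer starts a llama.cpp-style server binary as a subprocess and
// blocks until it logs that its HTTP listener is up, or a timeout fires.
// Flags and the readiness marker are assumptions for illustration.
func runServer(ctx context.Context, bin, model string, port int) (*exec.Cmd, error) {
	cmd := exec.CommandContext(ctx, bin,
		"--model", model,
		"--port", fmt.Sprintf("%d", port),
	)

	stderr, err := cmd.StderrPipe()
	if err != nil {
		return nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}

	ready := make(chan struct{})
	go func() {
		// Scan the child's log output for a readiness marker
		// (hypothetical string; real servers log differently).
		scanner := bufio.NewScanner(stderr)
		for scanner.Scan() {
			if strings.Contains(scanner.Text(), "HTTP server listening") {
				close(ready)
				return
			}
		}
	}()

	select {
	case <-ready:
		return cmd, nil
	case <-time.After(30 * time.Second):
		cmd.Process.Kill()
		cmd.Wait() // reap the killed child
		return nil, fmt.Errorf("server did not become ready")
	}
}

func main() {
	cmd, err := runServer(context.Background(), "./llama-server", "model.gguf", 8080)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("server running, pid:", cmd.Process.Pid)
	// ... issue HTTP requests against localhost:8080 here ...
	cmd.Process.Kill()
	cmd.Wait()
}
```

One practical upside of this design, and a plausible reason for the "switch back", is fault isolation: a crash in native llama.cpp code takes down only the child process, which the parent can detect and restart, rather than the whole server.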