ollama/llm (last commit: 2024-04-30 17:38:44 -04:00)
Name                  Last commit message                                                    Last commit date
ext_server            llm: add back check for empty token cache                              2024-04-30 17:38:44 -04:00
generate              Do not build AVX runners on ARM64                                      2024-04-26 23:55:32 -06:00
llama.cpp@952d03dbea  update llama.cpp commit to 952d03d                                     2024-04-30 17:31:20 -04:00
patches               Fix clip log import                                                    2024-04-26 09:43:46 -07:00
ggla.go               refactor tensor query                                                  2024-04-10 11:37:20 -07:00
ggml.go               fix: mixtral graph                                                     2024-04-22 17:19:44 -07:00
gguf.go               fixes for gguf (#3863)                                                 2024-04-23 20:57:20 -07:00
llm.go                Add import declaration for windows,arm64 to llm.go                     2024-04-26 23:23:53 -06:00
llm_darwin_amd64.go   Switch back to subprocessing for llama.cpp                             2024-04-01 16:48:18 -07:00
llm_darwin_arm64.go   Switch back to subprocessing for llama.cpp                             2024-04-01 16:48:18 -07:00
llm_linux.go          Switch back to subprocessing for llama.cpp                             2024-04-01 16:48:18 -07:00
llm_windows.go        Move nested payloads to installer and zip file on windows              2024-04-23 16:14:47 -07:00
memory.go             fix gemma, command-r layer weights                                     2024-04-26 15:00:55 -07:00
payload.go            Move nested payloads to installer and zip file on windows              2024-04-23 16:14:47 -07:00
server.go             llm: dont cap context window limit to training context window (#3988)  2024-04-29 10:07:30 -04:00
status.go             Switch back to subprocessing for llama.cpp                             2024-04-01 16:48:18 -07:00