ollama/llm
Latest commit fccf3eecaa by Blake Mizerany:
build.go: introduce a friendlier way to build Ollama (#3548)
This commit introduces a friendlier way to build the Ollama dependencies
and binary without abusing `go generate`, and removes the unnecessary
extra steps that approach brings with it.

The new build script also gives the user clearer feedback about what is
happening during the build.

At the end, it prints a helpful message to the user about what to do
next (e.g. run the new local Ollama).
2024-04-09 14:18:47 -07:00
Name                  Last commit message                                            Last commit date
ext_server            Apply 01-cache.diff                                            2024-04-01 16:48:18 -07:00
generate              build.go: introduce a friendlier way to build Ollama (#3548)   2024-04-09 14:18:47 -07:00
llama.cpp@1b67731e18  update llama.cpp submodule to 1b67731 (#3561)                  2024-04-09 15:10:17 -04:00
patches               Bump to b2581                                                  2024-04-02 11:53:07 -07:00
ggla.go               refactor model parsing                                         2024-04-01 13:16:15 -07:00
ggml.go               add command-r graph estimate                                   2024-04-04 14:07:24 -07:00
gguf.go               refactor model parsing                                         2024-04-01 13:16:15 -07:00
llm.go                cgo quantize                                                   2024-04-08 15:31:08 -07:00
llm_darwin_amd64.go   Switch back to subprocessing for llama.cpp                     2024-04-01 16:48:18 -07:00
llm_darwin_arm64.go   Switch back to subprocessing for llama.cpp                     2024-04-01 16:48:18 -07:00
llm_linux.go          Switch back to subprocessing for llama.cpp                     2024-04-01 16:48:18 -07:00
llm_windows.go        Switch back to subprocessing for llama.cpp                     2024-04-01 16:48:18 -07:00
payload.go            Switch back to subprocessing for llama.cpp                     2024-04-01 16:48:18 -07:00
server.go             no rope parameters                                             2024-04-05 18:05:27 -07:00
status.go             Switch back to subprocessing for llama.cpp                     2024-04-01 16:48:18 -07:00