fccf3eecaa
This commit introduces a friendlier way to build the Ollama dependencies and the binary without abusing `go generate`, removing the unnecessary extra steps that approach brings with it. The script also gives the user clearer feedback about what is happening during the build, and at the end it prints a helpful message about what to do next (e.g. run the newly built local Ollama).
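The commit itself does not show the script's contents here, but a minimal sketch of the kind of script it describes might look like the following. This assumes the script drives `cmake` for the llama.cpp sources and `go build` for the binary; the script name, paths, and output messages are illustrative only.

```sh
#!/bin/sh
# Hypothetical build script sketch -- the real script's name and steps may differ.
set -e

echo "Building llama.cpp dependencies..."
cmake -S llm/llama.cpp -B llm/build
cmake --build llm/build --config Release

echo "Building the ollama binary..."
go build -o ollama .

echo "Done. Try it out: ./ollama serve"
```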
ext_server
generate
llama.cpp@1b67731e18
patches
ggla.go
ggml.go
gguf.go
llm.go
llm_darwin_amd64.go
llm_darwin_arm64.go
llm_linux.go
llm_windows.go
payload.go
server.go
status.go