90ca84172c
* Fix embeddings memory corruption

  The patch was causing a buffer-overrun corruption. Once it was removed, however, parallelism in server.cpp led to hitting an assert because slot/seq IDs could be >= the token count. To work around this, only slot 0 is used for embeddings.

* Fix embed integration test assumption

  The token eval count has changed with recent llama.cpp bumps (0.3.5+).
ext_server
generate
llama.cpp@1e6f6554aa
patches
filetype.go
ggla.go
ggml.go
ggml_test.go
gguf.go
llm.go
llm_darwin_amd64.go
llm_darwin_arm64.go
llm_linux.go
llm_windows.go
memory.go
memory_test.go
payload.go
server.go
status.go