| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ext_server | Bump llama.cpp to b1999 | 2024-01-30 16:52:12 -08:00 |
| generate | Merge pull request #1849 from mraiser/main | 2024-02-05 16:01:16 -08:00 |
| llama.cpp@f57fadc009 | Bump llama.cpp to b2081 | 2024-02-06 12:06:43 -08:00 |
| patches | patch: always add token to cache_tokens (#2459) | 2024-02-12 08:10:16 -08:00 |
| dyn_ext_server.c | Switch to local dlopen symbols | 2024-01-19 11:37:02 -08:00 |
| dyn_ext_server.go | use llm.ImageData | 2024-01-31 19:13:48 -08:00 |
| ggml.go | add max context length check | 2024-01-12 14:54:07 -08:00 |
| gguf.go | refactor tensor read | 2024-01-24 10:48:31 -08:00 |
| llama.go | use llm.ImageData | 2024-01-31 19:13:48 -08:00 |
| llm.go | Ensure the libraries are present | 2024-02-07 17:27:49 -08:00 |
| payload_common.go | use gzip for runner embedding (#2067) | 2024-01-19 13:23:03 -05:00 |
| payload_linux.go | Add multiple CPU variants for Intel Mac | 2024-01-17 15:08:54 -08:00 |
| payload_test.go | Fix up the CPU fallback selection | 2024-01-11 15:27:06 -08:00 |
| utils.go | partial decode ggml bin for more info | 2023-08-10 09:23:10 -07:00 |