| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ext_server | Support multiple variants for a given llm lib type | 2024-01-10 17:27:51 -08:00 |
| generate | Support multiple variants for a given llm lib type | 2024-01-10 17:27:51 -08:00 |
| llama.cpp @ 328b83de23 | revert submodule back to 328b83de23b33240e28f4e74900d1d06726f5eb1 | 2024-01-10 18:42:39 -05:00 |
| dynamic_shim.c | Support multiple variants for a given llm lib type | 2024-01-10 17:27:51 -08:00 |
| dynamic_shim.h | Refactor how we augment llama.cpp | 2024-01-02 15:35:55 -08:00 |
| ext_server_common.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| ext_server_default.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| ext_server_windows.go | Support multiple variants for a given llm lib type | 2024-01-10 17:27:51 -08:00 |
| ggml.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| gguf.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| llama.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| llm.go | Support multiple variants for a given llm lib type | 2024-01-10 17:27:51 -08:00 |
| shim.go | Support multiple variants for a given llm lib type | 2024-01-10 17:27:51 -08:00 |
| shim_darwin.go | Support multiple variants for a given llm lib type | 2024-01-10 17:27:51 -08:00 |
| shim_ext_server.go | Support multiple variants for a given llm lib type | 2024-01-10 17:27:51 -08:00 |
| shim_ext_server_linux.go | Support multiple variants for a given llm lib type | 2024-01-10 17:27:51 -08:00 |
| shim_ext_server_windows.go | Support multiple variants for a given llm lib type | 2024-01-10 17:27:51 -08:00 |
| shim_test.go | Support multiple variants for a given llm lib type | 2024-01-10 17:27:51 -08:00 |
| utils.go | partial decode ggml bin for more info | 2023-08-10 09:23:10 -07:00 |
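Most of the shim and dynamic_shim entries above share the commit message "Support multiple variants for a given llm lib type". As a rough illustration of that idea, the Go sketch below picks one build flavor of a shared llm library from a priority-ordered list of candidates. It is a minimal sketch only: the variant names, the `ext_server_*` file naming, the `pickVariant` helper, and the availability probes are assumptions made for this example, not the repository's actual API or implementation.

```go
package main

import (
	"fmt"
	"path/filepath"
	"runtime"
)

// libVariant is a hypothetical descriptor for one build flavor of a shared
// llm library (names here are illustrative, not taken from the repository).
type libVariant struct {
	Name      string      // e.g. "cuda", "cpu_avx2", "cpu"
	Available func() bool // probe that reports whether this variant can run here
}

// sharedLibName builds a platform-specific file name for a variant.
func sharedLibName(variant string) string {
	switch runtime.GOOS {
	case "windows":
		return fmt.Sprintf("ext_server_%s.dll", variant)
	case "darwin":
		return fmt.Sprintf("libext_server_%s.dylib", variant)
	default:
		return fmt.Sprintf("libext_server_%s.so", variant)
	}
}

// pickVariant walks the candidates in priority order and returns the path of
// the first variant whose availability probe passes.
func pickVariant(libDir string, candidates []libVariant) (string, error) {
	for _, v := range candidates {
		if v.Available() {
			return filepath.Join(libDir, sharedLibName(v.Name)), nil
		}
	}
	return "", fmt.Errorf("no usable llm library variant found in %s", libDir)
}

func main() {
	// Priority order: prefer GPU, then wider CPU instruction sets, then plain CPU.
	// The probes here are stand-ins; a real build would query the GPU and CPUID.
	candidates := []libVariant{
		{Name: "cuda", Available: func() bool { return false }},
		{Name: "cpu_avx2", Available: func() bool { return true }},
		{Name: "cpu", Available: func() bool { return true }},
	}
	path, err := pickVariant("/tmp/llm/libs", candidates)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("would load:", path)
}
```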