| Name | Last commit message | Last commit date |
| --- | --- | --- |
| ext_server | add -DCMAKE_SYSTEM_NAME=Darwin cmake flag (#1832) | 2024-01-07 00:46:17 -05:00 |
| generate | only build for metal on arm64 | 2024-01-09 13:51:08 -05:00 |
| llama.cpp@328b83de23 | Init submodule with new path | 2024-01-04 13:00:13 -08:00 |
| dynamic_shim.c | Switch windows build to fully dynamic | 2024-01-02 15:36:16 -08:00 |
| dynamic_shim.h | Refactor how we augment llama.cpp | 2024-01-02 15:35:55 -08:00 |
| ext_server_common.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| ext_server_default.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| ext_server_windows.go | fix windows build | 2024-01-08 20:04:01 -05:00 |
| ggml.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| gguf.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| llama.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| llm.go | use runner if cuda alloc won't fit | 2024-01-09 00:44:34 -05:00 |
| shim_darwin.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| shim_ext_server.go | Offload layers to GPU based on new model size estimates (#1850) | 2024-01-08 16:42:00 -05:00 |
| shim_ext_server_linux.go | Code shuffle to clean up the llm dir | 2024-01-04 12:12:05 -08:00 |
| shim_ext_server_windows.go | Code shuffle to clean up the llm dir | 2024-01-04 12:12:05 -08:00 |
| utils.go | partial decode ggml bin for more info | 2023-08-10 09:23:10 -07:00 |