ollama/llama — last commit 2023-07-21 23:05:15 -07:00
File                 Last commit message                                           Date
ggml-cuda.cu         update llama.cpp to e782c9e735f93ab4767ffc37462c523b73a17ddc  2023-07-20 11:55:56 -07:00
ggml-cuda.h          update llama.cpp to e782c9e735f93ab4767ffc37462c523b73a17ddc  2023-07-20 11:55:56 -07:00
ggml-metal.h         add llama.cpp mpi, opencl files                               2023-07-20 14:19:55 -07:00
ggml-metal.m         add llama.cpp mpi, opencl files                               2023-07-20 14:19:55 -07:00
ggml-metal.metal     add llama.cpp mpi, opencl files                               2023-07-20 14:19:55 -07:00
ggml-mpi.c           add llama.cpp mpi, opencl files                               2023-07-20 14:19:55 -07:00
ggml-mpi.h           add llama.cpp mpi, opencl files                               2023-07-20 14:19:55 -07:00
ggml-opencl.cpp      add llama.cpp mpi, opencl files                               2023-07-20 14:19:55 -07:00
ggml-opencl.h        add llama.cpp mpi, opencl files                               2023-07-20 14:19:55 -07:00
ggml.c               update llama.cpp to e782c9e735f93ab4767ffc37462c523b73a17ddc  2023-07-20 11:55:56 -07:00
ggml.h               update llama.cpp to e782c9e735f93ab4767ffc37462c523b73a17ddc  2023-07-20 11:55:56 -07:00
k_quants.c           update llama.cpp to e782c9e735f93ab4767ffc37462c523b73a17ddc  2023-07-20 11:55:56 -07:00
k_quants.h           update llama.cpp to e782c9e735f93ab4767ffc37462c523b73a17ddc  2023-07-20 11:55:56 -07:00
llama-util.h         update llama.cpp to e782c9e735f93ab4767ffc37462c523b73a17ddc  2023-07-20 11:55:56 -07:00
llama.cpp            update llama.cpp to e782c9e735f93ab4767ffc37462c523b73a17ddc  2023-07-20 11:55:56 -07:00
llama.go             allocate a large enough tokens slice                          2023-07-21 23:05:15 -07:00
llama.h              update llama.cpp to e782c9e735f93ab4767ffc37462c523b73a17ddc  2023-07-20 11:55:56 -07:00
update-llama-cpp.sh  add llama.cpp mpi, opencl files                               2023-07-20 14:19:55 -07:00
utils.go             call llama.cpp directly from go                               2023-07-11 11:59:18 -07:00