ollama/llm/llama.cpp
Daniel Hiltgen 16f4603b67 Improve maintainability of Radeon card list
This moves the list of supported AMD GPUs into a single, easier-to-maintain list, which should simplify updating it over time.
2024-01-03 15:16:56 -08:00
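For illustration only, a centralized card list in gen_linux.sh might look like the sketch below. The function name, the gfx identifiers, and the CMake flag are assumptions made for this example, not taken from the commit.

```sh
#!/bin/sh
# Hypothetical sketch of a centralized Radeon card list; the function name,
# the gfx identifiers, and the CMake flag are illustrative assumptions,
# not the actual contents of gen_linux.sh.
amd_gpu_targets() {
    # Keeping every supported target in one variable makes adding or
    # removing a card a one-line edit.
    gpu_list="gfx900 gfx906 gfx908 gfx90a gfx1030"
    # Join with semicolons, the separator CMake expects for list values.
    echo "$gpu_list" | sed 's/ /;/g'
}

# Example use: pass the joined list to the ROCm build configuration, e.g.
#   cmake ... -DAMDGPU_TARGETS="$(amd_gpu_targets)"
```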
Name                 Last commit                                    Date
gguf@328b83de23      Bump llama.cpp to b1662 and set n_parallel=1   2023-12-19 09:05:46 -08:00
CMakeLists.txt       Rename the ollama cmakefile                    2024-01-02 15:36:16 -08:00
ext_server.cpp       Get rid of one-line llama.log                  2024-01-02 15:36:16 -08:00
ext_server.h         Refactor how we augment llama.cpp              2024-01-02 15:35:55 -08:00
gen_common.sh        Rename the ollama cmakefile                    2024-01-02 15:36:16 -08:00
gen_darwin.sh        Switch windows build to fully dynamic          2024-01-02 15:36:16 -08:00
gen_linux.sh         Improve maintainability of Radeon card list    2024-01-03 15:16:56 -08:00
gen_windows.ps1      Rename the ollama cmakefile                    2024-01-02 15:36:16 -08:00
generate_darwin.go   Add cgo implementation for llama.cpp           2023-12-19 09:05:46 -08:00
generate_linux.go    Adapted rocm support to cgo based llama.cpp    2023-12-19 09:05:46 -08:00
generate_windows.go  Add cgo implementation for llama.cpp           2023-12-19 09:05:46 -08:00