ollama/llm/llama.cpp
Latest commit 29340c2e62 by Jeffrey Morgan (2024-01-03 19:22:15 -05:00):
update cmake flags for amd64 macOS (#1780)

* update cmake flags for intel macOS
* remove `LLAMA_K_QUANTS`
* put back `CMAKE_OSX_DEPLOYMENT_TARGET` and disable `LLAMA_F16C`
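The change itself is a build-configuration tweak in the gen scripts. Below is a minimal sketch of what an amd64 (Intel) macOS configure step could look like with these flags; the deployment target value, the AVX settings, and the build directory are illustrative assumptions, not the literal contents of gen_darwin.sh:

```sh
# Sketch of an amd64 (Intel) macOS configure step reflecting this change.
# The 11.0 deployment target and the AVX flags are assumptions.
CMAKE_DEFS="-DCMAKE_OSX_ARCHITECTURES=x86_64 \
  -DCMAKE_OSX_DEPLOYMENT_TARGET=11.0 \
  -DLLAMA_ACCELERATE=on \
  -DLLAMA_AVX=on \
  -DLLAMA_F16C=off"   # F16C disabled per this change; LLAMA_K_QUANTS no longer passed

cmake -S . -B build ${CMAKE_DEFS}
cmake --build build -j
```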
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| gguf @ 328b83de23 | Bump llama.cpp to b1662 and set n_parallel=1 | 2023-12-19 09:05:46 -08:00 |
| CMakeLists.txt | Rename the ollama cmakefile | 2024-01-02 15:36:16 -08:00 |
| ext_server.cpp | Get rid of one-line llama.log | 2024-01-02 15:36:16 -08:00 |
| ext_server.h | Refactor how we augment llama.cpp | 2024-01-02 15:35:55 -08:00 |
| gen_common.sh | update cmake flags for amd64 macOS (#1780) | 2024-01-03 19:22:15 -05:00 |
| gen_darwin.sh | update cmake flags for amd64 macOS (#1780) | 2024-01-03 19:22:15 -05:00 |
| gen_linux.sh | update cmake flags for amd64 macOS (#1780) | 2024-01-03 19:22:15 -05:00 |
| gen_windows.ps1 | update cmake flags for amd64 macOS (#1780) | 2024-01-03 19:22:15 -05:00 |
| generate_darwin.go | Add cgo implementation for llama.cpp | 2023-12-19 09:05:46 -08:00 |
| generate_linux.go | Adapted rocm support to cgo based llama.cpp | 2023-12-19 09:05:46 -08:00 |
| generate_windows.go | Add cgo implementation for llama.cpp | 2023-12-19 09:05:46 -08:00 |
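The listing reflects the split between the shell/PowerShell build scripts (gen_common.sh plus one gen_* script per OS) and the thin generate_*.go files that hook the native build into the Go toolchain. A plausible end-to-end developer flow, assuming the generate_*.go files carry `go:generate` directives that dispatch to the matching platform script, is:

```sh
# Assumed developer flow: rebuild the native llama.cpp pieces, then the Go binary.
go generate ./...   # would run the platform gen script (e.g. gen_darwin.sh on macOS)
go build .
```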