Daniel Hiltgen | 1b991d0ba9 | 2023-12-19 09:05:46 -08:00
Refine build to support CPU only

If someone checks out the ollama repo and doesn't install the CUDA
library, this will ensure they can build a CPU-only version.

Daniel Hiltgen | 9adca7f711 | 2023-12-19 09:05:46 -08:00
Bump llama.cpp to b1662 and set n_parallel=1

Daniel Hiltgen | 35934b2e05 | 2023-12-19 09:05:46 -08:00
Adapt ROCm support to the cgo-based llama.cpp

Daniel Hiltgen | d4cd695759 | 2023-12-19 09:05:46 -08:00
Add cgo implementation for llama.cpp

Run server.cpp directly inside the Go runtime via cgo, while
retaining the LLM Go abstractions.