* llm: update llama.cpp submodule to `7c26775`
* disable `LLAMA_BLAS` for now
* build with `-DLLAMA_OPENMP=off`
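As a rough sketch, the submodule pin and flag changes above would correspond to a checkout and CMake configure step along these lines (the submodule path and build directory here are illustrative, not taken from the repo):

```shell
# Pin the llama.cpp submodule to the referenced commit
# (assumes the submodule lives at ./llama.cpp)
git -C llama.cpp fetch origin
git -C llama.cpp checkout 7c26775

# Configure without BLAS and with OpenMP disabled,
# matching the flags noted above
cmake -S llama.cpp -B llama.cpp/build \
  -DLLAMA_BLAS=off \
  -DLLAMA_OPENMP=off
cmake --build llama.cpp/build
```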
This doesn't expose any UI yet, but it wires up the initial server-side portion of progress reporting during model load