llama.cpp/llama_cpp/server
__main__.py: "Set n_batch to the default value of 8. I think this is leftover from when n_ctx was missing and n_batch was 2048." (2023-04-05 17:44:53 -04:00)
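The commit message above describes a server configuration default: n_batch drops back to llama.cpp's small default of 8 now that n_ctx is exposed separately, rather than the earlier 2048. The following is a minimal hypothetical sketch of such a settings object; the class name, the model path, and the use of a plain dataclass are assumptions for illustration and are not taken from the actual __main__.py.

```python
# Hypothetical sketch of the settings described in the commit message.
# Only the field names n_ctx and n_batch come from the message itself;
# everything else here is an assumed placeholder.
from dataclasses import dataclass


@dataclass
class Settings:
    model: str = "models/ggml-model.bin"  # assumed example path
    n_ctx: int = 2048   # context window size, configured explicitly
    n_batch: int = 8    # prompt-processing batch size, per the commit message


if __name__ == "__main__":
    settings = Settings()
    print(f"n_ctx={settings.n_ctx}, n_batch={settings.n_batch}")
```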