ollama/llm/ext_server
Latest commit: royjhan · 3b5a4a77f3 · 2024-07-03 13:46:23 -07:00
Return Correct Prompt Eval Count Regardless of Cache Prompt (#5371)

* openai compatibility

* Revert "openai compatibility"

This reverts commit d3f98a811e00fc497d889c8c45b0cfec5b64690c.

* remove erroneous subtraction of prompt cache
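
The commit body above notes that an erroneous subtraction of the cached prompt tokens was removed when reporting the prompt eval count. The snippet below is a minimal sketch of that idea only, not the actual server.cpp change: the names (SlotTimings, prompt_token_count, cached_token_count, prompt_eval_count_old/new) are invented for illustration, and the only assumption taken from the commit message is that the reported count should cover the full prompt regardless of how many tokens were reused from the prompt cache.

    // Hypothetical illustration of the fix described in the commit message above.
    // All identifiers here are invented; they do not correspond to server.cpp.
    #include <cstdio>

    struct SlotTimings {
        int prompt_token_count;  // tokens in the full prompt sent by the client
        int cached_token_count;  // tokens reused from the prompt cache (cache_prompt)
    };

    // Before the fix (per the commit body): the cached portion was subtracted,
    // so the reported prompt eval count shrank whenever the cache was warm.
    int prompt_eval_count_old(const SlotTimings &t) {
        return t.prompt_token_count - t.cached_token_count;
    }

    // After the fix: report the full prompt size regardless of cache hits,
    // which is what OpenAI-compatible clients expect for prompt token usage.
    int prompt_eval_count_new(const SlotTimings &t) {
        return t.prompt_token_count;
    }

    int main() {
        SlotTimings t{/*prompt_token_count=*/128, /*cached_token_count=*/96};
        std::printf("old: %d, new: %d\n",
                    prompt_eval_count_old(t), prompt_eval_count_new(t));
        return 0;
    }
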
File            Last commit message                                                   Last updated
CMakeLists.txt  Switch back to subprocessing for llama.cpp                            2024-04-01 16:48:18 -07:00
httplib.h       Import server.cpp as of b2356                                         2024-03-12 13:58:06 -07:00
json.hpp        Import server.cpp as of b2356                                         2024-03-12 13:58:06 -07:00
server.cpp      Return Correct Prompt Eval Count Regardless of Cache Prompt (#5371)   2024-07-03 13:46:23 -07:00
utils.hpp       log clean up                                                          2024-05-09 14:55:36 -07:00