llama.cpp/llama_cpp/server
Lucas Doyle a5aa6c1478 llama_cpp server: add missing top_k param to CreateChatCompletionRequest
`llama.create_chat_completion` definitely has a `top_k` argument, but it's missing from `CreateChatCompletionRequest`. Decision: add it.
2023-05-01 15:38:19 -07:00
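A minimal sketch of the change described in the commit, assuming the server's Pydantic request model in app.py: the surrounding fields and the default of 40 are illustrative assumptions, not the exact file contents.

```python
from typing import List, Optional

from pydantic import BaseModel, Field


class ChatCompletionRequestMessage(BaseModel):
    role: str
    content: str


class CreateChatCompletionRequest(BaseModel):
    # Illustrative subset of fields; the real model has more.
    messages: List[ChatCompletionRequestMessage]
    temperature: float = 0.8
    top_p: float = 0.95
    # The previously missing parameter: llama.create_chat_completion
    # already accepts top_k, so the request model needs a matching
    # field to pass it through. Default of 40 is an assumption.
    top_k: int = Field(default=40, ge=0)
    stream: bool = False
    max_tokens: Optional[int] = None
```

With the field in place, the request handler can forward `top_k` to `llama.create_chat_completion` alongside the other sampling parameters instead of silently dropping it.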
__init__.py llama_cpp server: app is now importable, still runnable as a module 2023-04-29 11:41:25 -07:00
__main__.py llama_cpp server: slight refactor to init_llama function 2023-04-29 11:42:23 -07:00
app.py llama_cpp server: add missing top_k param to CreateChatCompletionRequest 2023-05-01 15:38:19 -07:00