llama.cpp/llama_cpp/server
Latest commit: 2023-12-22 13:41:06 -05:00
File          Last commit message                                                   Date
__init__.py   llama_cpp server: app is now importable, still runnable as a module   2023-04-29 11:41:25 -07:00
__main__.py   [Feat] Multi model support (#931)                                     2023-12-22 05:51:25 -05:00
app.py        [Feat] Multi model support (#931)                                     2023-12-22 05:51:25 -05:00
cli.py        [Feat] Multi model support (#931)                                     2023-12-22 05:51:25 -05:00
errors.py     Check if completion_tokens is none in error handler.                  2023-12-22 13:41:06 -05:00
model.py      [Feat] Multi model support (#931)                                     2023-12-22 05:51:25 -05:00
settings.py   [Feat] Multi model support (#931)                                     2023-12-22 05:51:25 -05:00
types.py      [Feat] Multi model support (#931)                                     2023-12-22 05:51:25 -05:00
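Per the `__init__.py` commit message, the server app is importable as a package but can still be launched as a module via `__main__.py`. A minimal usage sketch of the module invocation follows; the model path and host/port values are placeholders for illustration, not taken from this listing.

```shell
# Launch the llama_cpp server as a module (entry point: __main__.py).
# --model points at a local GGUF model file; path is a placeholder.
python -m llama_cpp.server --model ./models/model.gguf --host 127.0.0.1 --port 8000
```

With multi-model support (#931), `settings.py` and `model.py` allow the server to be configured with more than one model; consult the project's server documentation for the exact configuration options, as they are not shown in this listing.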