llama.cpp/llama_cpp/server
Lucas Doyle e40fcb0575 llama_cpp server: mark model as required
`model` is ignored, but currently marked "optional". On the one hand, we could mark it "required" to make it explicit, in case the server ever supports multiple llamas at the same time; on the other hand, we could delete it since it's ignored. Decision: mark it required for the sake of OpenAI API compatibility.

I think that out of all the parameters, `model` is probably the most important one for people to keep using, even if it's ignored for now.
2023-05-01 15:38:19 -07:00
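The tradeoff the commit describes can be sketched as follows. This is a stdlib-only illustration, not the actual `app.py` (which uses a web framework's request validation); the `validate_request` helper is hypothetical.

```python
def validate_request(body: dict) -> dict:
    """Reject requests that omit the required `model` field.

    `model` is accepted but then ignored, because the server only
    serves a single model -- keeping the field required preserves
    OpenAI API compatibility, as the commit message argues.
    """
    if "model" not in body:
        raise ValueError("field required: model")
    # Drop `model` before handing the request to the single loaded llama.
    return {k: v for k, v in body.items() if k != "model"}

print(validate_request({"model": "llama-7b", "prompt": "hi"}))  # → {'prompt': 'hi'}
# validate_request({"prompt": "hi"}) raises ValueError
```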
__init__.py llama_cpp server: app is now importable, still runnable as a module 2023-04-29 11:41:25 -07:00
__main__.py llama_cpp server: slight refactor to init_llama function 2023-04-29 11:42:23 -07:00
app.py llama_cpp server: mark model as required 2023-05-01 15:38:19 -07:00