llama.cpp/llama_cpp
Lucas Doyle e40fcb0575 llama_cpp server: mark model as required
`model` is ignored, but currently marked "optional". On the one hand, we could mark it "required" to make it explicit, in case the server supports multiple llamas at the same time; on the other hand, we could delete it since it's ignored. Decision: mark it required for the sake of OpenAI API compatibility.

I think that out of all the parameters, `model` is probably the most important one for people to keep using, even if it's ignored for now.
2023-05-01 15:38:19 -07:00
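The decision above — keep `model` in the request schema as a required field whose value is ignored — can be sketched with a plain dataclass. This is a hypothetical shape, not the server's actual request class; the real field set and names may differ:

```python
from dataclasses import dataclass


@dataclass
class CreateCompletionRequest:
    # `model` has no default, so constructing the request without it
    # raises a TypeError -- i.e. the field is required, matching the
    # OpenAI API, even though the server currently ignores its value.
    model: str
    prompt: str = ""
```

Omitting `model` fails at construction time, while any string value is accepted and simply unused by the server.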
| Name | Last commit | Date |
| --- | --- | --- |
| server | llama_cpp server: mark model as required | 2023-05-01 15:38:19 -07:00 |
| __init__.py | Black formatting | 2023-03-24 14:59:29 -04:00 |
| llama.py | Fix logprob calculation. Fixes #134 | 2023-05-01 17:45:08 -04:00 |
| llama_cpp.py | Update sampling api | 2023-05-01 14:47:55 -04:00 |
| llama_types.py | Bugfix for Python3.7 | 2023-04-05 04:37:33 -04:00 |