llama.cpp/llama_cpp
Lucas Doyle dbbfc4ba2f llama_cpp server: fix to ChatCompletionRequestMessage
When I generate a client, it breaks because it fails to process the schema of `ChatCompletionRequestMessage`.

These changes fix that:
- `Union[Literal["user"], Literal["channel"], ...]` is equivalent to `Literal["user", "channel", ...]`, so the union is collapsed into a single multi-value `Literal` (see the sketch below)
- It turns out the default value `Literal["user"]` isn't JSON serializable, so it is replaced with the plain string `"user"`
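
A minimal sketch of the idea, assuming a pydantic-style model (the class and field names mirror the commit message but are illustrative, not the module's exact definitions):

```python
from typing import Literal

from pydantic import BaseModel, Field

# A Union of single-value Literals, e.g. Union[Literal["user"], Literal["channel"]],
# is equivalent to one multi-value Literal, which schema generators handle cleanly.
Role = Literal["user", "channel"]


class ChatCompletionRequestMessage(BaseModel):
    # The default must be the plain string "user": using Literal["user"] itself
    # as the default is not JSON serializable when the schema is exported.
    role: Role = Field(default="user")
    content: str = ""
```

Exporting the schema (`ChatCompletionRequestMessage.schema_json()` in pydantic v1) then renders `role` as an ordinary string enum with a string default, which client generators can consume.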
2023-05-01 15:38:19 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| server | llama_cpp server: fix to ChatCompletionRequestMessage | 2023-05-01 15:38:19 -07:00 |
| __init__.py | Black formatting | 2023-03-24 14:59:29 -04:00 |
| llama.py | Fix logprob calculation. Fixes #134 | 2023-05-01 17:45:08 -04:00 |
| llama_cpp.py | Update sampling api | 2023-05-01 14:47:55 -04:00 |
| llama_types.py | llama_cpp server: fix to ChatCompletionRequestMessage | 2023-05-01 15:38:19 -07:00 |