llama.cpp/llama_cpp/server
Lucas Doyle dbbfc4ba2f llama_cpp server: fix to ChatCompletionRequestMessage
When I generate a client, it breaks because it fails to process the schema of ChatCompletionRequestMessage

These fix that:
- I think `Union[Literal["user"], Literal["channel"], ...]` is equivalent to `Literal["user", "channel", ...]`
- Turns out the default value `Literal["user"]` isn't JSON serializable, so replace it with the plain string "user"
2023-05-01 15:38:19 -07:00
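A small sketch of the two points in the commit message, assuming a hypothetical `Role` alias (the role names here are illustrative): per PEP 586 a union of single-value `Literal`s is equivalent to one multi-value `Literal`, and a bare string default serializes to JSON where the `Literal` type object itself does not:

```python
import json
from typing import Literal, get_args

# Hypothetical alias; PEP 586 treats this as equivalent to
# Union[Literal["user"], Literal["assistant"], Literal["system"]]
Role = Literal["user", "assistant", "system"]
print(get_args(Role))  # the literal values as a plain tuple

# A plain string default serializes fine...
print(json.dumps({"role": "user"}))

# ...but the Literal type object is not JSON serializable
try:
    json.dumps({"role": Literal["user"]})
except TypeError:
    print("Literal['user'] is not JSON serializable")
```

This is why generated clients choke on a schema whose default is the type `Literal["user"]` rather than the value `"user"`.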
__init__.py llama_cpp server: app is now importable, still runnable as a module 2023-04-29 11:41:25 -07:00
__main__.py llama_cpp server: slight refactor to init_llama function 2023-04-29 11:42:23 -07:00
app.py llama_cpp server: fix to ChatCompletionRequestMessage 2023-05-01 15:38:19 -07:00