Commit f165048a69: add KV cache quantization options

* add KV cache quantization options
  (https://github.com/abetlen/llama-cpp-python/discussions/1220,
  https://github.com/abetlen/llama-cpp-python/issues/1305)
* Add ggml_type
* Use ggml_type instead of string for quantization
* Add server support

Co-authored-by: Andrei Betlen <abetlen@gmail.com>
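Concretely, the change lets the K and V halves of the KV cache be stored in a quantized ggml type rather than the default f16, trading some accuracy for memory. A minimal usage sketch follows; the `type_k`/`type_v` parameter names and the `GGML_TYPE_Q8_0` constant are inferred from the commit summary, so verify them against the installed version:

```python
# Sketch: load a model with a Q8_0-quantized KV cache.
# Assumes the type_k/type_v constructor parameters and the GGML_TYPE_*
# constants this commit introduces; check llama_cpp.py for exact names.
import llama_cpp
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # hypothetical path
    n_ctx=4096,
    type_k=llama_cpp.GGML_TYPE_Q8_0,  # quantize the K cache
    type_v=llama_cpp.GGML_TYPE_Q8_0,  # quantize the V cache
)

out = llm("Q: What is the capital of France? A:", max_tokens=16)
print(out["choices"][0]["text"])
```

Per the "Add server support" bullet, the same options should also be settable on the OpenAI-compatible server; assuming the server mirrors the constructor fields as CLI flags, that would look like `--type_k 8 --type_v 8` (8 being the integer value of GGML_TYPE_Q8_0 in the ggml type enum).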
Directory contents:

server/
__init__.py
_internals.py
_logger.py
_utils.py
llama.py
llama_cache.py
llama_chat_format.py
llama_cpp.py
llama_grammar.py
llama_speculative.py
llama_tokenizer.py
llama_types.py
llava_cpp.py
py.typed