4924455dec
* State load/save changes
  - Only store up to `n_tokens` logits instead of the full `(n_ctx, n_vocab)` sized array.
  - Difference between ~350MB and ~1500MB for an example prompt with ~300 tokens (makes sense lol)
  - Auto-formatting changes
* Back out formatting changes
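A minimal sketch of the saving described above, using NumPy to stand in for the logits buffer (the shapes and sizes here are illustrative assumptions, not the exact numbers from the commit): only the first `n_tokens` rows of the `(n_ctx, n_vocab)` scores array have been populated by evaluation, so slicing before serializing shrinks the saved state proportionally.

```python
import numpy as np

# Assumed illustrative dimensions; a real context/vocab may differ.
n_ctx, n_vocab, n_tokens = 2048, 32000, 300

# Full logits buffer: one row per context slot, mostly zeros
# when only n_tokens positions have actually been evaluated.
scores = np.zeros((n_ctx, n_vocab), dtype=np.float32)

full_size = scores.nbytes                 # size if the whole buffer is saved
trimmed = scores[:n_tokens, :].copy()     # keep only the evaluated rows
trimmed_size = trimmed.nbytes

print(f"full: {full_size / 1e6:.0f} MB, trimmed: {trimmed_size / 1e6:.0f} MB")
```

On restore, the trimmed array is copied back into the first `n_tokens` rows of a freshly allocated full-size buffer, so downstream code that indexes by token position is unaffected.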