d7a67917ba

* handle batched embeddings
* fix normalization issue
* fix type hints, ensure no breaking changes to embed
* Clear kv cache / reset internal state after embedding complete

Co-authored-by: Andrei <abetlen@gmail.com>
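The commit notes above mention batched embeddings and keeping the `embed` API unchanged. The snippet below is a minimal usage sketch, not taken from this repository's docs: it assumes llama-cpp-python's high-level `Llama` class created with `embedding=True`, its `embed()` and `create_embedding()` methods, and a hypothetical local GGUF model path.

```python
# Minimal sketch of the embedding API referenced in the commit message.
# Assumptions: llama-cpp-python installed, Llama(..., embedding=True),
# and a local model file at a hypothetical path.
from llama_cpp import Llama

llm = Llama(model_path="./models/example.gguf", embedding=True)

# Single input: embed() returns one embedding vector (a list of floats).
single = llm.embed("Hello, world!")

# Batched input: after the batched-embeddings change described above,
# passing a list of strings is expected to yield one embedding per input.
batch = llm.embed(["first sentence", "second sentence"])

# OpenAI-style response object with one entry per input under "data".
response = llm.create_embedding(["first sentence", "second sentence"])
print(len(single), len(batch), len(response["data"]))
```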
server/
__init__.py
_internals.py
_logger.py
_utils.py
llama.py
llama_cache.py
llama_chat_format.py
llama_cpp.py
llama_grammar.py
llama_speculative.py
llama_tokenizer.py
llama_types.py
llava_cpp.py
py.typed