llama.cpp/llama_cpp
Latest commit c139f8b5d5 by Felipe Lorenz:
feat: Add endpoints for tokenize, detokenize and count tokens (#1136)
* Add endpoint to count tokens

* Add tokenize and detokenize endpoints

* Change response key to tokens for tokenize endpoint

* Fix dependency bug

* Cleanup

* Remove example added by mistake

* Move tokenize, detokenize, and count to Extras namespace. Tag existing endpoints

---------

Co-authored-by: Andrei Betlen <abetlen@gmail.com>
2024-03-08 21:09:00 -05:00
| File | Last commit | Date |
| --- | --- | --- |
| server | feat: Add endpoints for tokenize, detokenize and count tokens (#1136) | 2024-03-08 21:09:00 -05:00 |
| __init__.py | chore: Bump version | 2024-03-02 22:46:40 -05:00 |
| _internals.py | fix: Remove deprecated cfg sampling functions | 2024-02-28 14:37:07 -05:00 |
| _logger.py | fix: Use llama_log_callback to avoid suppress_stdout_stderr | 2024-02-05 21:52:12 -05:00 |
| _utils.py | Revert "Fix: fileno error google colab (#729) (#1156)" (#1157) | 2024-02-02 12:18:55 -05:00 |
| llama.py | feat: Switch embed to llama_get_embeddings_seq (#1263) | 2024-03-08 20:59:35 -05:00 |
| llama_cache.py | Move cache classes to llama_cache submodule. | 2024-01-17 09:09:12 -05:00 |
| llama_chat_format.py | fix: Check for existence of clip model path (#1264) | 2024-03-08 21:00:10 -05:00 |
| llama_cpp.py | feat: Update llama.cpp | 2024-03-08 20:58:50 -05:00 |
| llama_grammar.py | feat: support minItems/maxItems in JSON grammar converter (by @nopperl) | 2024-02-22 00:17:06 -05:00 |
| llama_speculative.py | Add speculative decoding (#1120) | 2024-01-31 14:08:14 -05:00 |
| llama_tokenizer.py | fix: LlamaHFTokenizer now receives pre_tokens | 2024-02-23 12:23:24 -05:00 |
| llama_types.py | feat: Generic chatml Function Calling (#957) | 2024-02-12 15:56:07 -05:00 |
| llava_cpp.py | misc: llava_cpp use ctypes function decorator for binding | 2024-02-26 11:07:33 -05:00 |
| py.typed | Add py.typed | 2023-08-11 09:58:48 +02:00 |