llama.cpp/llama_cpp
| File | Last commit | Date |
| --- | --- | --- |
| `server/` | fix: remove prematurely commited change | 2024-02-25 21:00:37 -05:00 |
| `__init__.py` | chore: Bump version | 2024-02-25 21:15:42 -05:00 |
| `_internals.py` | feat: Update llama.cpp | 2024-02-25 20:52:14 -05:00 |
| `_logger.py` | fix: Use llama_log_callback to avoid suppress_stdout_stderr | 2024-02-05 21:52:12 -05:00 |
| `_utils.py` | Revert "Fix: fileno error google colab (#729) (#1156)" (#1157) | 2024-02-02 12:18:55 -05:00 |
| `llama.py` | feat: Update llama.cpp | 2024-02-25 16:53:58 -05:00 |
| `llama_cache.py` | Move cache classes to llama_cache submodule. | 2024-01-17 09:09:12 -05:00 |
| `llama_chat_format.py` | feat: Auto detect Mixtral's slightly different format (#1214) | 2024-02-23 11:27:38 -05:00 |
| `llama_cpp.py` | feat: Update llama.cpp | 2024-02-25 20:52:14 -05:00 |
| `llama_grammar.py` | feat: support minItems/maxItems in JSON grammar converter (by @nopperl) | 2024-02-22 00:17:06 -05:00 |
| `llama_speculative.py` | Add speculative decoding (#1120) | 2024-01-31 14:08:14 -05:00 |
| `llama_tokenizer.py` | fix: LlamaHFTokenizer now receives pre_tokens | 2024-02-23 12:23:24 -05:00 |
| `llama_types.py` | feat: Generic chatml Function Calling (#957) | 2024-02-12 15:56:07 -05:00 |
| `llava_cpp.py` | feat(low-level-api): Improve API static type-safety and performance (#1205) | 2024-02-21 16:25:38 -05:00 |
| `py.typed` | Add py.typed | 2023-08-11 09:58:48 +02:00 |
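This directory is the importable `llama_cpp` package, whose high-level entry point is the `Llama` class in `llama.py`. As a quick orientation, here is a minimal usage sketch of that API; the model path is a placeholder, not a file shipped in this repository:

```python
from llama_cpp import Llama

# Load a local GGUF model through the high-level API in llama.py.
# "./models/model.gguf" is a placeholder path; point it at any GGUF file.
llm = Llama(model_path="./models/model.gguf", n_ctx=2048)

# Plain text completion (calling llm(...) directly does the same thing).
out = llm.create_completion(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:"],
)
print(out["choices"][0]["text"])

# Chat-style completion, routed through the handlers in llama_chat_format.py.
chat = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(chat["choices"][0]["message"]["content"])
```

Underneath, `llama_cpp.py` and `llava_cpp.py` hold the low-level ctypes bindings to the compiled llama.cpp libraries that `llama.py` wraps.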