llama.cpp/llama_cpp
Latest commit: 2024-02-15 15:17:30 -05:00
Name                   Last commit message                                                              Date
server                 fix: broken import                                                               2024-02-08 01:13:28 -05:00
__init__.py            Bump version                                                                     2024-02-14 04:31:42 -05:00
_internals.py          feat: Support batch embeddings (#1186)                                           2024-02-14 04:26:09 -05:00
_logger.py             fix: Use llama_log_callback to avoid suppress_stdout_stderr                      2024-02-05 21:52:12 -05:00
_utils.py              Revert "Fix: fileno error google colab (#729) (#1156)" (#1157)                   2024-02-02 12:18:55 -05:00
llama.py               fix: Incorporate embedding pooling layer fixes (#1194)                           2024-02-15 15:16:30 -05:00
llama_cache.py         Move cache classes to llama_cache submodule.                                     2024-01-17 09:09:12 -05:00
llama_chat_format.py   fix: Update openbuddy prompt format. Closes #1155                                2024-02-13 23:57:10 -05:00
llama_cpp.py           feat: Update llama.cpp                                                           2024-02-15 15:17:30 -05:00
llama_grammar.py       fix: Don't change order of json schema object properties unless prop_order is passed, Closes #1180  2024-02-13 02:44:00 -05:00
llama_speculative.py   Add speculative decoding (#1120)                                                 2024-01-31 14:08:14 -05:00
llama_tokenizer.py     fix: Circular dependancy preventing early Llama object free (#1176)              2024-02-11 13:57:57 -05:00
llama_types.py         feat: Generic chatml Function Calling (#957)                                     2024-02-12 15:56:07 -05:00
llava_cpp.py           Update llama.cpp                                                                 2024-02-14 03:47:21 -05:00
py.typed               Add py.typed                                                                     2023-08-11 09:58:48 +02:00