llama.cpp/llama_cpp
Latest commit da003d8768 by Andrei (2024-01-29 14:22:23 -05:00): Automatically set chat format from gguf (#1110)

* Use jinja formatter to load chat format from gguf
* Fix off-by-one error in metadata loader
* Implement chat format auto-detection
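With this change, when a GGUF file carries a chat template in its metadata (e.g. `tokenizer.chat_template`), the library can pick the matching chat handler without the caller passing `chat_format` explicitly. A minimal sketch of exercising that behaviour from the Python API, assuming a local GGUF whose metadata includes a chat template; the model path and prompt are placeholders:

```python
from llama_cpp import Llama

# Placeholder path; any GGUF whose metadata ships a chat template will do.
llm = Llama(model_path="./models/example-chat.Q4_K_M.gguf")  # note: no chat_format argument

# With auto-detection, the template stored in the GGUF metadata (rendered via
# jinja2) is expected to drive how these messages are turned into a prompt.
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what chat templates are for."},
    ],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```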
| Name | Last commit message | Last commit date |
| ---- | ------------------- | ---------------- |
| server | Automatically set chat format from gguf (#1110) | 2024-01-29 14:22:23 -05:00 |
| __init__.py | Bump version | 2024-01-29 10:46:04 -05:00 |
| _internals.py | Automatically set chat format from gguf (#1110) | 2024-01-29 14:22:23 -05:00 |
| _utils.py | feat: Add ability to load chat format from huggingface autotokenizer or tokenizer_config.json files. | 2024-01-18 21:21:37 -05:00 |
| llama.py | Automatically set chat format from gguf (#1110) | 2024-01-29 14:22:23 -05:00 |
| llama_cache.py | Move cache classes to llama_cache submodule. | 2024-01-17 09:09:12 -05:00 |
| llama_chat_format.py | Automatically set chat format from gguf (#1110) | 2024-01-29 14:22:23 -05:00 |
| llama_cpp.py | Update llama.cpp | 2024-01-26 11:45:48 -05:00 |
| llama_grammar.py | fix: from_json_schema oneof/anyof bug. Closes #1097 (see the sketch below) | 2024-01-21 19:06:53 -05:00 |
| llama_types.py | Add json schema mode (#1122) | 2024-01-27 16:52:18 -05:00 |
| llava_cpp.py | Make building llava optional | 2023-11-28 04:55:21 -05:00 |
| py.typed | Add py.typed | 2023-08-11 09:58:48 +02:00 |
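The `llama_grammar.py` entry above notes a fix to `from_json_schema` for schemas using `oneOf`/`anyOf` (#1097). A hedged sketch of that path, assuming `LlamaGrammar.from_json_schema` accepts a JSON-schema string; the schema and model path below are illustrative only:

```python
import json

from llama_cpp import Llama
from llama_cpp.llama_grammar import LlamaGrammar

# Illustrative schema using oneOf, the construct covered by the #1097 fix.
schema = {
    "oneOf": [
        {"type": "object", "properties": {"ok": {"type": "boolean"}}, "required": ["ok"]},
        {"type": "object", "properties": {"error": {"type": "string"}}, "required": ["error"]},
    ]
}

grammar = LlamaGrammar.from_json_schema(json.dumps(schema))

llm = Llama(model_path="./models/example.Q4_K_M.gguf")  # placeholder path
out = llm.create_completion(
    "Return a JSON object describing whether the operation succeeded:\n",
    grammar=grammar,  # constrain sampling to output matching the schema-derived grammar
    max_tokens=128,
)
print(out["choices"][0]["text"])
```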