llama.cpp/llama_cpp

Latest commit fe2da09538 by Andrei:
feat: Generic Chat Formats, Tool Calling, and Huggingface Pull Support for Multimodal Models (Obsidian, LLaVA1.6, Moondream) (#1147)
* Test dummy image tags in chat templates

* Format and improve types for llava_cpp.py

* Add from_pretrained support to llava chat format.

* Refactor llava chat format to use a jinja2 template

* Revert chat format test

* Add moondream support (wip)

* Update moondream chat format

* Update moondream chat format

* Update moondream prompt

* Add function calling support

* Cache last image embed

* Add Llava1.6 support

* Add nanollava support

* Add Obsidian support

* Remove unnecessary import

* Re-order multimodal chat formats

* Logits all no longer required for multi-modal models

* Update README.md

* Update docs

* Update README

* Fix typo

* Update README

* Fix typo

Committed: 2024-04-30 01:35:38 -04:00
server                 2024-04-30 01:35:38 -04:00  feat: Generic Chat Formats, Tool Calling, and Huggingface Pull Support for Multimodal Models (Obsidian, LLaVA1.6, Moondream) (#1147)
__init__.py            2024-04-26 10:11:31 -04:00  chore: Bump version
_internals.py          2024-04-25 21:32:44 -04:00  feat: Allow for possibly non-pooled embeddings (#1380)
_logger.py             2024-02-05 21:52:12 -05:00  fix: Use llama_log_callback to avoid suppress_stdout_stderr
_utils.py              2024-02-02 12:18:55 -05:00  Revert "Fix: fileno error google colab (#729) (#1156)" (#1157)
llama.py               2024-04-27 23:42:19 -04:00  feat: Add support for str type kv_overrides
llama_cache.py         2024-01-17 09:09:12 -05:00  Move cache classes to llama_cache submodule.
llama_chat_format.py   2024-04-30 01:35:38 -04:00  feat: Generic Chat Formats, Tool Calling, and Huggingface Pull Support for Multimodal Models (Obsidian, LLaVA1.6, Moondream) (#1147)
llama_cpp.py           2024-04-29 23:34:55 -04:00  feat: Update llama.cpp
llama_grammar.py       2024-04-18 01:36:25 -04:00  feat: update grammar schema converter to match llama.cpp (#1353)
llama_speculative.py   2024-01-31 14:08:14 -05:00  Add speculative decoding (#1120)
llama_tokenizer.py     2024-02-23 12:23:24 -05:00  fix: LlamaHFTokenizer now receives pre_tokens
llama_types.py         2024-04-25 21:32:44 -04:00  feat: Allow for possibly non-pooled embeddings (#1380)
llava_cpp.py           2024-04-30 01:35:38 -04:00  feat: Generic Chat Formats, Tool Calling, and Huggingface Pull Support for Multimodal Models (Obsidian, LLaVA1.6, Moondream) (#1147)
py.typed               2023-08-11 09:58:48 +02:00  Add py.typed