Chat templates are rendered with ImmutableSandboxedEnvironment in transformers, so there is no reason to do otherwise here.
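For context, a minimal sketch of rendering a chat template through Jinja2's sandboxed environment; the ChatML-style template string and the messages are illustrative, not the library's actual code.

```python
from jinja2.sandbox import ImmutableSandboxedEnvironment

env = ImmutableSandboxedEnvironment(trim_blocks=True, lstrip_blocks=True)

# Hypothetical ChatML-style template, just to show the rendering call.
template = env.from_string(
    "{% for message in messages %}"
    "<|im_start|>{{ message.role }}\n{{ message.content }}<|im_end|>\n"
    "{% endfor %}"
    "{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
)

prompt = template.render(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    add_generation_prompt=True,
)
print(prompt)
```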
Co-authored-by: Andrei <abetlen@gmail.com>
* Support multiple chat templates - step 1
As a first step, allow the user to select a template from the metadata with the chat_format parameter in the form `chat_template.name` (see the usage sketch after this block).
* register chat templates to self.chat_formats instead of globally
* Don't expose internal chat handlers yet
---------
Co-authored-by: Andrei <abetlen@gmail.com>
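A usage sketch of selecting a named template from the model metadata; the model path and the template name `default` are placeholders for whatever the GGUF actually contains.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model.gguf",  # placeholder path
    # "chat_template.<name>" selects the template called <name> from the
    # model metadata instead of a built-in chat format.
    chat_format="chat_template.default",
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response["choices"][0]["message"]["content"])
```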
* Proper fill-in-middle support
Use prefix/middle/suffix tokens when the metadata is present in the GGUF, as e.g. in [this](https://huggingface.co/CISCai/CodeQwen1.5-7B-Chat-SOTA-GGUF) one.
* fall back to internal prefix/middle/suffix id
In some cases llama.cpp will make a guess at the FIM tokens; use those when there is no metadata (see the sketch after this block).
* typo--
* don't insert special tokens that are not there in suffix
Note: add_bos is misnamed; it is actually add_special and can cause several special tokens to be added to the token list (the special parameter is actually parse_special).
* don't add/parse any special tokens when using fim
I've left the original behavior in place when no FIM tokens are found, but this should perhaps be re-evaluated.
* don't append suffix to prompt_tokens unless fim tokens are detected
* make sure we only do this for fim
---------
Co-authored-by: Andrei <abetlen@gmail.com>
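A sketch of how the FIM prompt can be assembled, assuming the `tokenizer.ggml.*_token_id` metadata keys, the `Llama.metadata`/`Llama.tokenize` attributes, and a prefix/suffix/middle token order; the model path, the fallback ids, and the example snippet are placeholders.

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/codeqwen-chat.gguf")  # placeholder path

def fim_token_id(metadata: dict, key: str, fallback: int = -1) -> int:
    # Prefer the id stored in the GGUF metadata; otherwise use whatever
    # fallback (e.g. llama.cpp's internal guess) the caller provides.
    value = metadata.get(key)
    return int(value) if value is not None else fallback

prefix_id = fim_token_id(llm.metadata, "tokenizer.ggml.prefix_token_id")
suffix_id = fim_token_id(llm.metadata, "tokenizer.ggml.suffix_token_id")
middle_id = fim_token_id(llm.metadata, "tokenizer.ggml.middle_token_id")

prefix = "def add(a, b):\n    "
suffix = "\n    return result\n"

if prefix_id >= 0 and suffix_id >= 0 and middle_id >= 0:
    # No BOS and no special-token parsing while tokenizing the user text:
    # add_bos here really acts as add_special, and special as parse_special.
    prompt_tokens = (
        [prefix_id]
        + llm.tokenize(prefix.encode("utf-8"), add_bos=False, special=False)
        + [suffix_id]
        + llm.tokenize(suffix.encode("utf-8"), add_bos=False, special=False)
        + [middle_id]
    )
else:
    # No FIM tokens detected: keep the original plain-completion behavior.
    prompt_tokens = llm.tokenize(prefix.encode("utf-8"))
```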
* set up streaming for v2
* assert v2 streaming, fix tool_call vs function_call
* fix streaming with tool_choice/function_call
* make functions return only one function call when 'auto' is used (see the streaming sketch after this block)
* fix
---------
Co-authored-by: Andrei <abetlen@gmail.com>
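A usage sketch of v2-style streaming with tools; the model path, chat format name, tool schema, and the exact shape of the streamed deltas are assumptions for illustration.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/functionary.gguf",  # placeholder path
    chat_format="functionary-v2",            # assumed functionary-style model
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

stream = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
    tool_choice="auto",  # with "auto", only a single function call is returned
    stream=True,
)

for chunk in stream:
    delta = chunk["choices"][0]["delta"]
    # New-style responses stream "tool_calls"; the legacy functions API
    # streams "function_call" instead.
    if delta.get("tool_calls"):
        print(delta["tool_calls"])
```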
* Test dummy image tags in chat templates
* Format and improve types for llava_cpp.py
* Add from_pretrained support to the llava chat format (see the usage sketch at the end of this section).
* Refactor llava chat format to use a Jinja2 template
* Revert chat format test
* Add moondream support (wip)
* Update moondream chat format
* Update moondream chat format
* Update moondream prompt
* Add function calling support
* Cache last image embed
* Add Llava1.6 support
* Add nanollava support
* Add obsidian support
* Remove unnecessary import
* Re-order multimodal chat formats
* `logits_all` no longer required for multi-modal models
* Update README.md
* Update docs
* Update README
* Fix typo
* Update README
* Fix typo
* fix completion token tracking and prompt formation
* fix 'function_call' and 'tool_calls' depending on 'functions' and 'tools'; fix incompatibility with Python 3.8
* Updated README
* fix for OpenAI server compatibility
---------
Co-authored-by: Andrei <abetlen@gmail.com>
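A usage sketch of loading a multimodal model with a chat handler via from_pretrained; the MoondreamChatHandler name follows the moondream support added above, while the repo ids, filename globs, and image URL are placeholders.

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import MoondreamChatHandler

chat_handler = MoondreamChatHandler.from_pretrained(
    repo_id="vikhyatk/moondream2",  # placeholder repo
    filename="*mmproj*",
)

llm = Llama.from_pretrained(
    repo_id="vikhyatk/moondream2",  # placeholder repo
    filename="*text-model*",
    chat_handler=chat_handler,
    n_ctx=2048,  # room for the image embedding; logits_all is no longer needed
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/image.png"}},
            ],
        }
    ],
)
print(response["choices"][0]["message"]["content"])
```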