llama.cpp/llama_cpp
Damian Stewart aab74f0b2b
Multimodal Support (Llava 1.5) (#821)
* llava v1.5 integration

* Point llama.cpp to fork

* Add llava shared library target

* Fix type

* Update llama.cpp

* Add llava api

* Revert changes to llama and llama_cpp

* Update llava example

* Add types for new gpt-4-vision-preview api

* Fix typo

* Update llama.cpp

* Update llama_types to match OpenAI v1 API

* Update ChatCompletionFunction type

* Reorder request parameters

* More API type fixes

* Even More Type Updates

* Add parameter for custom chat_handler to Llama class

* Fix circular import

* Convert to absolute imports

* Fix

* Fix pydantic Jsontype bug

* Accept list of prompt tokens in create_completion

* Add llava1.5 chat handler

* Add Multimodal notebook

* Clean up examples

* Add server docs

---------

Co-authored-by: Andrei Betlen <abetlen@gmail.com>
2023-11-07 22:48:51 -05:00
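The commits above add a `chat_handler` parameter to the `Llama` class, a `Llava15ChatHandler`, and types for the gpt-4-vision-preview style message format. A minimal sketch of that message format, with the model-loading wiring shown in comments (model paths are placeholders, and the exact handler wiring is assumed from the file names in this PR, not demonstrated here):

```python
# Sketch of the multimodal chat payload introduced by this PR: a user
# message whose "content" is a list of image_url and text parts, matching
# the gpt-4-vision-preview API shape mentioned in the commit log.

def build_image_message(image_url: str, text: str) -> dict:
    """Build a vision-style user message with an image part and a text part."""
    return {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": text},
        ],
    }


# Wiring it into Llama (requires local model files, so shown for
# illustration only; class and parameter names are taken from
# llama_chat_format.py / llama.py in this PR):
#
# from llama_cpp import Llama
# from llama_cpp.llama_chat_format import Llava15ChatHandler
#
# chat_handler = Llava15ChatHandler(clip_model_path="mmproj.gguf")
# llm = Llama(model_path="llava-v1.5.gguf", chat_handler=chat_handler, n_ctx=2048)
# response = llm.create_chat_completion(
#     messages=[build_image_message("https://example.com/cat.png",
#                                   "Describe this image.")]
# )

if __name__ == "__main__":
    msg = build_image_message("https://example.com/cat.png", "Describe this image.")
    print(msg["role"], len(msg["content"]))
```

The list-of-parts `content` field is the key change from the plain-string messages used by the text-only chat handlers.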
server                Multimodal Support (Llava 1.5) (#821)    2023-11-07 22:48:51 -05:00
__init__.py           Bump version                              2023-11-06 09:37:55 -05:00
_utils.py             Clean up stdout / stderr suppression      2023-11-03 13:02:15 -04:00
llama.py              Multimodal Support (Llava 1.5) (#821)     2023-11-07 22:48:51 -05:00
llama_chat_format.py  Multimodal Support (Llava 1.5) (#821)     2023-11-07 22:48:51 -05:00
llama_cpp.py          Update llama.cpp                          2023-11-05 16:57:10 -05:00
llama_grammar.py      Multimodal Support (Llava 1.5) (#821)     2023-11-07 22:48:51 -05:00
llama_types.py        Multimodal Support (Llava 1.5) (#821)     2023-11-07 22:48:51 -05:00
llava_cpp.py          Multimodal Support (Llava 1.5) (#821)     2023-11-07 22:48:51 -05:00
py.typed              Add py.typed                              2023-08-11 09:58:48 +02:00