Andrei Betlen
8c3aa7858b
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-11-24 00:15:36 -05:00
Andrei Betlen
19e02f1f87
docs: Add link to function calling notebook
2023-11-24 00:15:02 -05:00
Andrei Betlen
de2e2bc083
misc: fix verbose printing in functionary model
2023-11-23 20:14:23 -05:00
Andrei Betlen
36048d46af
Update llama.cpp
2023-11-23 16:26:00 -05:00
mrfakename
d68fc07b1b
Add Zephyr format (#937)
2023-11-23 01:20:08 -05:00
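The Zephyr-format commit above adds another handler to the chat-format registry. As a rough illustration of what such a handler does (a hypothetical sketch, not the actual code in llama_chat_format.py), a Zephyr-style formatter flattens a list of OpenAI-style messages into a single prompt string using the model's role tags:

```python
def format_zephyr(messages):
    """Sketch of a Zephyr-style prompt formatter. Assumes the Zephyr
    chat template's role tags (<|system|>, <|user|>, <|assistant|>)
    with </s> terminating each turn; the real handler lives in
    llama_cpp/llama_chat_format.py."""
    eos = "</s>"
    prompt = ""
    for m in messages:
        # Each turn becomes: <|role|>\n<content></s>\n
        prompt += f"<|{m['role']}|>\n{m['content']}{eos}\n"
    # Leave an open assistant turn for the model to complete.
    prompt += "<|assistant|>\n"
    return prompt
```

In the library itself, selecting such a handler is a matter of passing `chat_format="zephyr"` when constructing `Llama`.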
caiyesd
4184835078
Add chat format to support baichuan (#938)
Signed-off-by: caiyesd <caiyesd@gmail.com>
2023-11-23 01:19:50 -05:00
Andrei Betlen
4474157949
ci: tag built docker images with current version
2023-11-23 01:06:47 -05:00
Andrei Betlen
21abefa488
docs: Add grammar and types to api reference
2023-11-23 00:27:41 -05:00
Andrei Betlen
6aab77de04
docs: Fix module import bug
2023-11-23 00:27:22 -05:00
Andrei Betlen
c647f01609
Add from_json_schema to LlamaGrammar
2023-11-23 00:27:00 -05:00
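The `from_json_schema` commit above lets a `LlamaGrammar` be built directly from a JSON Schema string, so generation can be constrained to schema-conforming JSON. As a toy illustration of the underlying idea (a deliberately minimal sketch handling only a flat object of string properties; the real converter supports much more of JSON Schema), one can map a schema to a GBNF-style grammar like so:

```python
def schema_to_gbnf(schema: dict) -> str:
    """Toy JSON-Schema-to-GBNF converter: handles only an object whose
    properties are all strings. Illustrative of the technique behind
    LlamaGrammar.from_json_schema, not the library's implementation."""
    props = schema.get("properties", {})
    # Each property becomes a quoted key literal followed by a string value.
    fields = ' "," '.join(f'"\\"{k}\\":" string' for k in props)
    return "\n".join([
        f'root ::= "{{" {fields} "}}"',
        r'string ::= "\"" [a-zA-Z0-9 ]* "\""',
    ])
```

In the library, the equivalent one-liner is `LlamaGrammar.from_json_schema(json.dumps(schema))`, whose result can be passed as the `grammar` argument to completion calls.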
Andrei Betlen
be1f64d569
docs: Add docstrings from llama.cpp
2023-11-23 00:26:26 -05:00
Andrei Betlen
31cf0ec680
docs: Fix mkdocstrings heading level
2023-11-22 23:45:19 -05:00
Andrei Betlen
e349f314b4
docs: Fix API Reference page
2023-11-22 23:45:02 -05:00
Andrei Betlen
b6bb7ac76a
docs: Add Llama class example
2023-11-22 23:10:04 -05:00
Andrei Betlen
c5173b0fb3
docs: Configure mkdocstrings
2023-11-22 23:09:42 -05:00
Andrei Betlen
3303ebe92b
docs: Add dark mode and pymarkdown extensions
2023-11-22 22:47:22 -05:00
Andrei Betlen
abb1976ad7
docs: Add n_ctx note for multimodal models
2023-11-22 21:07:00 -05:00
Andrei Betlen
36679a58ef
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-11-22 19:49:59 -05:00
Andrei Betlen
bd43fb2bfe
docs: Update high-level python api examples in README to include chat formats, function calling, and multi-modal models.
2023-11-22 19:49:56 -05:00
Andrei Betlen
d977b44d82
docs: Add links to server functionality
2023-11-22 18:21:02 -05:00
Andrei Betlen
aa815d580c
docs: Link to langchain docs
2023-11-22 18:17:49 -05:00
Andrei Betlen
357e4dd69f
docs: Use nav for better site layout control
2023-11-22 18:16:30 -05:00
Andrei Betlen
602ea64ddd
docs: Fix whitespace
2023-11-22 18:09:31 -05:00
Andrei Betlen
971864ce92
docs: Watch README for changes during docs development
2023-11-22 18:08:17 -05:00
Andrei Betlen
f336eebb2f
docs: fix 404 to macos installation guide. Closes #861
2023-11-22 18:07:30 -05:00
Andrei Betlen
1ff2c92720
docs: minor indentation fix
2023-11-22 18:04:18 -05:00
Andrei Betlen
68238b7883
docs: setting n_gqa is no longer required
2023-11-22 18:01:54 -05:00
Andrei Betlen
198178225c
docs: Remove stale warning
2023-11-22 17:59:16 -05:00
Juraj Bednar
5a9770a56b
Improve documentation for server chat formats (#934)
2023-11-22 06:10:03 -05:00
caiyesd
b8f29f4bf0
Add baichuan-2 chat format (#936)
Signed-off-by: caiyesd <caiyesd@gmail.com>
2023-11-22 06:08:06 -05:00
Andrei Betlen
9515467439
tests: add mock_kv_cache placeholder functions
2023-11-22 06:02:21 -05:00
Andrei Betlen
0ea244499e
tests: avoid constantly reallocating logits
2023-11-22 04:31:05 -05:00
Andrei Betlen
0a7e05bc10
tests: don't mock sampling functions
2023-11-22 04:12:32 -05:00
Andrei Betlen
d7388f1ffb
Use mock_llama for all tests
2023-11-21 18:13:19 -05:00
Andrei Betlen
dbfaf53fe0
Update llama.cpp
2023-11-21 18:12:38 -05:00
Andrei Betlen
8b6ca22846
Fix type warnings for json schema grammar converter
2023-11-21 13:32:00 -05:00
Andrei Betlen
230fc8b535
Bump version
2023-11-21 05:04:55 -05:00
Andrei Betlen
128dc4731f
Fix #569
2023-11-21 04:39:05 -05:00
Andrei Betlen
7a3f87846b
Format
2023-11-21 04:02:20 -05:00
Andrei Betlen
422ebc89ce
Fix: Add logit_bias to all completion api methods
2023-11-21 04:01:36 -05:00
Andrei Betlen
79efc85206
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-11-21 03:59:49 -05:00
Andrei Betlen
07e47f55ba
Add support for logit_bias outside of server api. Closes #827
2023-11-21 03:59:46 -05:00
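The commit above (closing #827) extends `logit_bias` support beyond the server API to the library's completion methods. Conceptually, `logit_bias` is an OpenAI-style map from token id to an additive bias applied to the raw logits before sampling; a minimal sketch of that step (hypothetical helper name, plain lists standing in for the logits array):

```python
def apply_logit_bias(logits, logit_bias):
    """Add per-token biases to raw logits before sampling.
    `logit_bias` maps token id -> additive bias; a large negative
    bias effectively bans a token, a large positive one forces it."""
    out = list(logits)
    for token_id, bias in logit_bias.items():
        out[token_id] += bias
    return out
```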
James Braza
23a221999f
Documenting server usage ( #768 )
2023-11-21 00:24:22 -05:00
Maarten ter Huurne
c21edb6908
Do not set grammar to None for new LlamaGrammar objects (#834)
* Do not set `grammar` to `None` for new `LlamaGrammar` objects
The `grammar` attribute is written by `init()`, but that method always
returns `None`, so `__init__()` would then discard the previously
written object.
* Add minimal test for grammar parsing
2023-11-21 00:23:18 -05:00
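The bug fixed in #834 above is a common Python pitfall worth spelling out: assigning the return value of a method that only works by side effect. A minimal illustration (hypothetical class and attribute names, mirroring the pattern described in the commit body):

```python
class GrammarSketch:
    """Illustrates the #834 pattern: init() sets self.grammar as a
    side effect and returns None, so the buggy
    `self.grammar = self.init(rules)` would immediately overwrite
    the parsed grammar with None."""
    def __init__(self, rules):
        self.init(rules)  # fixed: call for the side effect only
        # buggy version was: self.grammar = self.init(rules)

    def init(self, rules):
        self.grammar = tuple(rules)  # side effect; implicitly returns None
```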
mrfakename
ef65fc5ff4
Add MistralLite, Intel, and OpenChat prompt formats (#927)
* Add MistralLite format
* Update llama_chat_format.py
* Update llama_chat_format.py
2023-11-21 00:19:25 -05:00
Andrei Betlen
9d7c8307cd
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-11-20 23:23:20 -05:00
Andrei Betlen
3dc21b2557
tests: Improve llama.cpp mock
2023-11-20 23:23:18 -05:00
TK-Master
b8438f70b5
Added support for min_p (#921)
* Added support for min_p
My small contribution to this great project.
Ref: https://github.com/ggerganov/llama.cpp/pull/3841
Closes: https://github.com/abetlen/llama-cpp-python/issues/911
* Fix for negative temp (sample_softmax)
2023-11-20 23:21:33 -05:00
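The min_p commit above wires through the sampling rule from ggerganov/llama.cpp#3841: keep only tokens whose probability is at least `min_p` times the top token's probability, then renormalize. A self-contained sketch of the rule on plain Python lists (the real implementation operates on llama.cpp's candidate array in C):

```python
import math

def min_p_filter(logits, min_p=0.05):
    """min_p sampling rule: softmax the logits, drop every token whose
    probability falls below min_p * max_probability, renormalize the
    survivors. Returns {token_index: probability}."""
    m = max(logits)
    probs = [math.exp(x - m) for x in logits]  # stable softmax
    total = sum(probs)
    probs = [p / total for p in probs]
    cutoff = min_p * max(probs)
    kept = {i: p for i, p in enumerate(probs) if p >= cutoff}
    z = sum(kept.values())
    return {i: p / z for i, p in kept.items()}
```

Unlike top_p, the cutoff scales with the model's confidence: a sharply peaked distribution prunes aggressively, a flat one keeps many candidates.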
Andrei Betlen
63fe1370ed
Update llama.cpp
2023-11-20 23:19:47 -05:00
Andrei Betlen
a34d480141
Fix #929
2023-11-20 22:50:59 -05:00