Andrei Betlen
bd43fb2bfe
docs: Update high-level python api examples in README to include chat formats, function calling, and multi-modal models.
2023-11-22 19:49:56 -05:00
Andrei Betlen
d977b44d82
docs: Add links to server functionality
2023-11-22 18:21:02 -05:00
Andrei Betlen
aa815d580c
docs: Link to langchain docs
2023-11-22 18:17:49 -05:00
Andrei Betlen
357e4dd69f
docs: Use nav for better site layout control
2023-11-22 18:16:30 -05:00
Andrei Betlen
602ea64ddd
docs: Fix whitespace
2023-11-22 18:09:31 -05:00
Andrei Betlen
971864ce92
docs: Watch README for changes during docs development
2023-11-22 18:08:17 -05:00
Andrei Betlen
f336eebb2f
docs: fix 404 to macos installation guide. Closes #861
2023-11-22 18:07:30 -05:00
Andrei Betlen
1ff2c92720
docs: minor indentation fix
2023-11-22 18:04:18 -05:00
Andrei Betlen
68238b7883
docs: setting n_gqa is no longer required
2023-11-22 18:01:54 -05:00
Andrei Betlen
198178225c
docs: Remove stale warning
2023-11-22 17:59:16 -05:00
Juraj Bednar
5a9770a56b
Improve documentation for server chat formats (#934)
2023-11-22 06:10:03 -05:00
caiyesd
b8f29f4bf0
Add baichuan-2 chat format (#936)
Signed-off-by: caiyesd <caiyesd@gmail.com>
2023-11-22 06:08:06 -05:00
Andrei Betlen
9515467439
tests: add mock_kv_cache placeholder functions
2023-11-22 06:02:21 -05:00
Andrei Betlen
0ea244499e
tests: avoid constantly reallocating logits
2023-11-22 04:31:05 -05:00
Andrei Betlen
0a7e05bc10
tests: don't mock sampling functions
2023-11-22 04:12:32 -05:00
Andrei Betlen
d7388f1ffb
Use mock_llama for all tests
2023-11-21 18:13:19 -05:00
Andrei Betlen
dbfaf53fe0
Update llama.cpp
2023-11-21 18:12:38 -05:00
Andrei Betlen
8b6ca22846
Fix type warnings for json schema grammar converter
2023-11-21 13:32:00 -05:00
Andrei Betlen
230fc8b535
Bump version
2023-11-21 05:04:55 -05:00
Andrei Betlen
128dc4731f
Fix #569
2023-11-21 04:39:05 -05:00
Andrei Betlen
7a3f87846b
Format
2023-11-21 04:02:20 -05:00
Andrei Betlen
422ebc89ce
Fix: Add logit_bias to all completion api methods
2023-11-21 04:01:36 -05:00
Andrei Betlen
79efc85206
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-11-21 03:59:49 -05:00
Andrei Betlen
07e47f55ba
Add support for logit_bias outside of server api. Closes #827
2023-11-21 03:59:46 -05:00
James Braza
23a221999f
Documenting server usage (#768)
2023-11-21 00:24:22 -05:00
Maarten ter Huurne
c21edb6908
Do not set grammar to None for new LlamaGrammar objects (#834)
* Do not set `grammar` to `None` for new `LlamaGrammar` objects
The `grammar` attribute is written by `init()`, but that method always
returns `None`, so `__init__()` would then discard the previously
written object.
* Add minimal test for grammar parsing
2023-11-21 00:23:18 -05:00
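The commit body above describes the bug in prose; the following is a minimal, hypothetical Python sketch of the same anti-pattern, not the actual llama-cpp-python source. The class, method, and helper names (GrammarSketch, init, parse_rules) are illustrative assumptions.

```python
# Hypothetical sketch of the bug described in #834, not the real code:
# an __init__ that assigns the return value of a setter-style method
# which only writes the attribute as a side effect and returns None.

def parse_rules(rules: str) -> list[str]:
    # Stand-in for real GBNF grammar parsing; just keeps non-empty lines.
    return [line for line in rules.splitlines() if line.strip()]

class GrammarSketch:
    def __init__(self, rules: str) -> None:
        # Buggy pattern: init() already sets self.grammar as a side effect,
        # but it returns None, so this assignment immediately discards the
        # previously written object.
        self.grammar = self.init(rules)
        # Fixed pattern: call init() for its side effect only:
        #   self.init(rules)

    def init(self, rules: str) -> None:
        self.grammar = parse_rules(rules)  # side effect writes the attribute
        # implicit "return None"

if __name__ == "__main__":
    g = GrammarSketch('root ::= "yes" | "no"')
    print(g.grammar)  # prints None under the buggy pattern
```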
mrfakename
ef65fc5ff4
Add MistralLite, Intel, and OpenChat prompt formats (#927)
* Add MistralLite format
* Update llama_chat_format.py
* Update llama_chat_format.py
2023-11-21 00:19:25 -05:00
Andrei Betlen
9d7c8307cd
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-11-20 23:23:20 -05:00
Andrei Betlen
3dc21b2557
tests: Improve llama.cpp mock
2023-11-20 23:23:18 -05:00
TK-Master
b8438f70b5
Added support for min_p (#921)
* Added support for min_p
My small contribution to this great project.
Ref: https://github.com/ggerganov/llama.cpp/pull/3841
Closes: https://github.com/abetlen/llama-cpp-python/issues/911
* Fix for negative temp (sample_softmax)
2023-11-20 23:21:33 -05:00
Andrei Betlen
63fe1370ed
Update llama.cpp
2023-11-20 23:19:47 -05:00
Andrei Betlen
a34d480141
Fix #929
2023-11-20 22:50:59 -05:00
Andrei Betlen
2c2afa320f
Update llama.cpp
2023-11-20 14:11:33 -05:00
zocainViken
6dde6bd09c
bug fixing (#925)
2023-11-20 12:31:52 -05:00
ZisisTsatsas
f3117c0cf6
fix: openblas_simple missing pkg-config to build (#920)
2023-11-20 12:31:02 -05:00
Andrei Betlen
55efc9e6b2
Update llama.cpp
2023-11-16 18:51:55 -05:00
Andrei Betlen
96a377648c
Merge tag 'v0.2.18' into main
2023-11-14 15:32:15 -05:00
Andrei Betlen
ca30d898e9
Merge tag 'v0.2.17' into main
2023-11-14 15:32:12 -05:00
Andrei Betlen
3af167d8db
Merge tag 'v0.2.16' into main
2023-11-14 15:32:08 -05:00
Andrei Betlen
cc0fe43849
Disable opencl test
2023-11-14 14:59:08 -05:00
Andrei Betlen
f2901d840e
Bump version
2023-11-14 14:10:00 -05:00
Andrei Betlen
a14f46a720
Update llama.cpp
2023-11-14 14:08:52 -05:00
Andrei Betlen
020945049a
Update llama.cpp
2023-11-13 22:20:19 -05:00
Andrei Betlen
01846a76b9
Bump version
2023-11-10 16:36:12 -05:00
Andrei Betlen
7e3e85b53d
Update llama.cpp
2023-11-10 16:33:55 -05:00
Andrei Betlen
4388f33414
Set CUDA_ARCHITECTURES=OFF for windows
2023-11-10 16:32:36 -05:00
Andrei Betlen
74167bdfb2
Update Functions notebook
2023-11-10 13:02:30 -05:00
Andrei Betlen
85ead98a3e
Update Functions notebook example
2023-11-10 12:49:14 -05:00
Andrei Betlen
b7e60b66f4
Bump version
2023-11-10 06:21:24 -05:00
Andrei Betlen
fb743f6c87
Update llama.cpp
2023-11-10 06:21:14 -05:00