| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Andrei Betlen | f99b2385ee | Update llama.cpp | 2023-11-27 19:03:10 -05:00 |
| Andrei Betlen | 396dbf0b2b | docs: Improve low-level docstrings | 2023-11-27 19:03:02 -05:00 |
| Andrei Betlen | 9c68b1804a | docs: Add api reference links in README | 2023-11-27 18:54:07 -05:00 |
| Andrei Betlen | 174ef3ddf6 | docs: Add headings to API reference | 2023-11-27 18:42:15 -05:00 |
| Andrei Betlen | 41428244f0 | docs: Fix README indentation | 2023-11-27 18:29:13 -05:00 |
| Andrei Betlen | 1539146a5e | docs: Fix README docs link | 2023-11-27 18:21:00 -05:00 |
| Andrei Betlen | a928893d03 | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-11-26 15:57:13 -05:00 |
| Andrei Betlen | 6308f21d5e | docs: Update Llama docs | 2023-11-26 15:56:40 -05:00 |
| Anton Vice | aa5a7a1880 | Update README.md (#940); .ccp >> .cpp | 2023-11-26 15:39:38 -05:00 |
| Gardner Bickford | c2d63a7148 | fix: Typo in the Open Orca chat format #874 (#947) | 2023-11-26 15:39:18 -05:00 |
| Andrei Betlen | f03a38e62a | Update llama.cpp | 2023-11-26 15:38:22 -05:00 |
| Andrei Betlen | 1a7bf2037b | docs: Update openapi endpoint names | 2023-11-24 03:39:29 -05:00 |
| Andrei Betlen | 4026166e68 | docs: Update completion and chat_completion parameter docstrings | 2023-11-24 03:24:19 -05:00 |
| Andrei Betlen | 945e20fa2c | docs: update link | 2023-11-24 00:18:32 -05:00 |
| Andrei Betlen | e6a36b840e | docs: edit function calling docs | 2023-11-24 00:17:54 -05:00 |
| Andrei Betlen | 8c3aa7858b | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-11-24 00:15:36 -05:00 |
| Andrei Betlen | 19e02f1f87 | docs: Add link to function calling notebook | 2023-11-24 00:15:02 -05:00 |
| Andrei Betlen | de2e2bc083 | misc fix verbose printing in functionary model | 2023-11-23 20:14:23 -05:00 |
| Andrei Betlen | 36048d46af | Update llama.cpp | 2023-11-23 16:26:00 -05:00 |
| mrfakename | d68fc07b1b | Add Zephyr format (#937) | 2023-11-23 01:20:08 -05:00 |
| caiyesd | 4184835078 | Add chat format to support baichuan (#938); Signed-off-by: caiyesd <caiyesd@gmail.com> | 2023-11-23 01:19:50 -05:00 |
| Andrei Betlen | 4474157949 | ci: tag built docker images with current version | 2023-11-23 01:06:47 -05:00 |
| Andrei Betlen | 21abefa488 | docs: Add grammar and types to api reference | 2023-11-23 00:27:41 -05:00 |
| Andrei Betlen | 6aab77de04 | docs: Fix module import bug | 2023-11-23 00:27:22 -05:00 |
| Andrei Betlen | c647f01609 | Add from_json_schema to LlamaGrammar | 2023-11-23 00:27:00 -05:00 |
| Andrei Betlen | be1f64d569 | docs: Add docstrings from llama.cpp | 2023-11-23 00:26:26 -05:00 |
| Andrei Betlen | 31cf0ec680 | docs: Fix mkdocstrings heading level | 2023-11-22 23:45:19 -05:00 |
| Andrei Betlen | e349f314b4 | docs: Fix API Reference page | 2023-11-22 23:45:02 -05:00 |
| Andrei Betlen | b6bb7ac76a | docs: Add Llama class example | 2023-11-22 23:10:04 -05:00 |
| Andrei Betlen | c5173b0fb3 | docs: Configure mkdocstrings | 2023-11-22 23:09:42 -05:00 |
| Andrei Betlen | 3303ebe92b | docs: Add dark mode and pymarkdown extensions | 2023-11-22 22:47:22 -05:00 |
| Andrei Betlen | abb1976ad7 | docs: Add n_ctx note for multimodal models | 2023-11-22 21:07:00 -05:00 |
| Andrei Betlen | 36679a58ef | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-11-22 19:49:59 -05:00 |
| Andrei Betlen | bd43fb2bfe | docs: Update high-level python api examples in README to include chat formats, function calling, and multi-modal models | 2023-11-22 19:49:56 -05:00 |
| Andrei Betlen | d977b44d82 | docs: Add links to server functionality | 2023-11-22 18:21:02 -05:00 |
| Andrei Betlen | aa815d580c | docs: Link to langchain docs | 2023-11-22 18:17:49 -05:00 |
| Andrei Betlen | 357e4dd69f | docs: Use nav for better site layout control | 2023-11-22 18:16:30 -05:00 |
| Andrei Betlen | 602ea64ddd | docs: Fix whitespace | 2023-11-22 18:09:31 -05:00 |
| Andrei Betlen | 971864ce92 | docs: Watch README for changes during docs development | 2023-11-22 18:08:17 -05:00 |
| Andrei Betlen | f336eebb2f | docs: fix 404 to macos installation guide. Closes #861 | 2023-11-22 18:07:30 -05:00 |
| Andrei Betlen | 1ff2c92720 | docs: minor indentation fix | 2023-11-22 18:04:18 -05:00 |
| Andrei Betlen | 68238b7883 | docs: setting n_gqa is no longer required | 2023-11-22 18:01:54 -05:00 |
| Andrei Betlen | 198178225c | docs: Remove stale warning | 2023-11-22 17:59:16 -05:00 |
| Juraj Bednar | 5a9770a56b | Improve documentation for server chat formats (#934) | 2023-11-22 06:10:03 -05:00 |
| caiyesd | b8f29f4bf0 | Add baichuan-2 chat format (#936); Signed-off-by: caiyesd <caiyesd@gmail.com> | 2023-11-22 06:08:06 -05:00 |
| Andrei Betlen | 9515467439 | tests: add mock_kv_cache placeholder functions | 2023-11-22 06:02:21 -05:00 |
| Andrei Betlen | 0ea244499e | tests: avoid constantly reallocating logits | 2023-11-22 04:31:05 -05:00 |
| Andrei Betlen | 0a7e05bc10 | tests: don't mock sampling functions | 2023-11-22 04:12:32 -05:00 |
| Andrei Betlen | d7388f1ffb | Use mock_llama for all tests | 2023-11-21 18:13:19 -05:00 |
| Andrei Betlen | dbfaf53fe0 | Update llama.cpp | 2023-11-21 18:12:38 -05:00 |
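A few of the commits above reference user-facing APIs rather than documentation alone: c647f01609 adds `LlamaGrammar.from_json_schema`, and d68fc07b1b, 4184835078, and b8f29f4bf0 register new chat formats ("zephyr", "baichuan", "baichuan-2"). As a rough, non-authoritative sketch of how those pieces might be combined in llama-cpp-python (the model path and JSON schema below are placeholder assumptions, not taken from the repository):

```python
import json

from llama_cpp import Llama, LlamaGrammar

# Toy JSON schema (assumption for illustration); from_json_schema compiles it
# into a grammar object that constrains sampling.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}
grammar = LlamaGrammar.from_json_schema(json.dumps(schema))

# chat_format selects a registered chat template, e.g. the "zephyr" format
# added in d68fc07b1b. The model path is a hypothetical local file.
llm = Llama(
    model_path="./models/zephyr-7b-beta.Q4_K_M.gguf",
    chat_format="zephyr",
)

# Passing the grammar constrains the reply to schema-conforming JSON.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Describe a fictional person as JSON."}],
    grammar=grammar,
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

Constraining generation with a schema-derived grammar keeps the model's JSON output parseable without post-hoc validation, which is the main reason to prefer it over prompt-only instructions for structured output.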