Andrei Betlen
|
186626d58e
|
Update llama.cpp
|
2023-09-01 14:26:13 -04:00 |
|
Andrei Betlen
|
47de3ab104
|
Update llama.cpp
|
2023-08-29 07:36:20 -04:00 |
|
Andrei Betlen
|
3f76e1de52
|
CJK PR minor cleanup
|
2023-08-29 07:21:59 -04:00 |
|
Andrei
|
bae44ec8bf
|
Merge pull request #309 from MeouSker77/fix-CJK
Fix CJK and emoji stream output
|
2023-08-29 06:58:10 -04:00 |
|
Andrei Betlen
|
e0dcbc28a1
|
Update llama.cpp
|
2023-08-28 10:33:45 -04:00 |
|
Andrei Betlen
|
4887973c22
|
Update llama.cpp
|
2023-08-27 12:59:20 -04:00 |
|
Andrei Betlen
|
3a29d65f45
|
Update llama.cpp
|
2023-08-26 23:36:24 -04:00 |
|
Andrei Betlen
|
5de8009706
|
Add copilot-codex completions endpoint for drop-in copilot usage
|
2023-08-25 17:49:14 -04:00 |
|
Andrei Betlen
|
ac47d55577
|
Merge branch 'main' into v0.2-wip
|
2023-08-25 15:45:22 -04:00 |
|
Andrei Betlen
|
ef23d1e545
|
Update llama.cpp
|
2023-08-25 14:35:53 -04:00 |
|
Andrei Betlen
|
48cf43b427
|
Use _with_model variants for tokenization
|
2023-08-25 13:43:16 -04:00 |
|
Andrei Betlen
|
8ac59465b9
|
Strip leading space when de-tokenizing.
|
2023-08-25 04:56:48 -04:00 |
|
Andrei Betlen
|
c2d1deaa8a
|
Update llama.cpp
|
2023-08-24 18:01:42 -04:00 |
|
Andrei Betlen
|
db982a861f
|
Fix
|
2023-08-24 01:01:12 -04:00 |
|
Andrei Betlen
|
4ed632c4b3
|
Remove deprecated params
|
2023-08-24 01:01:05 -04:00 |
|
Andrei Betlen
|
cf405f6764
|
Merge branch 'main' into v0.2-wip
|
2023-08-24 00:30:51 -04:00 |
|
Andrei Betlen
|
bbbf0f4fc4
|
Update llama.cpp
|
2023-08-24 00:17:00 -04:00 |
|
Andrei Betlen
|
e632c59fa0
|
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
|
2023-08-17 20:53:04 -04:00 |
|
c0sogi
|
a240aa6b25
|
Fix typos in llama_grammar
|
2023-08-17 21:00:44 +09:00 |
|
Andrei Betlen
|
620cd2fd69
|
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
|
2023-08-14 22:41:47 -04:00 |
|
Andrei Betlen
|
5788f1f2b2
|
Remove unused import
|
2023-08-14 22:41:37 -04:00 |
|
Andrei
|
6dfb98117e
|
Merge pull request #600 from Vuizur/main
Add py.typed to conform with PEP 561
|
2023-08-14 22:40:41 -04:00 |
|
Andrei
|
b99e758045
|
Merge pull request #604 from aliencaocao/main-1
Add doc string for n_gpu_layers argument and make -1 offload all layers
|
2023-08-14 22:40:10 -04:00 |
|
Andrei Betlen
|
b345d60987
|
Update llama.cpp
|
2023-08-14 22:33:30 -04:00 |
|
Billy Cao
|
c471871d0b
|
make n_gpu_layers=-1 offload all layers
|
2023-08-13 11:21:28 +08:00 |
|
Billy Cao
|
d018c7b01d
|
Add doc string for n_gpu_layers argument
|
2023-08-12 18:41:47 +08:00 |
|
Hannes Krumbiegel
|
17dd7fa8e0
|
Add py.typed
|
2023-08-11 09:58:48 +02:00 |
|
MeouSker77
|
88184ed217
|
fix CJK output again
|
2023-08-09 22:04:35 +08:00 |
|
Andrei Betlen
|
66fb0345e8
|
Move grammar to function call argument
|
2023-08-08 15:08:54 -04:00 |
|
Andrei Betlen
|
1e844d3238
|
fix
|
2023-08-08 15:07:28 -04:00 |
|
Andrei Betlen
|
843b7ccd90
|
Merge branch 'main' into c0sogi/main
|
2023-08-08 14:43:02 -04:00 |
|
Andrei Betlen
|
d015bdb4f8
|
Add mul_mat_q option
|
2023-08-08 14:35:06 -04:00 |
|
Andrei Betlen
|
f6a7850e1a
|
Update llama.cpp
|
2023-08-08 14:30:58 -04:00 |
|
c0sogi
|
0d7d2031a9
|
prevent memory access error by llama_grammar_free
|
2023-08-07 17:02:33 +09:00 |
|
c0sogi
|
b07713cb9f
|
reset grammar for every generation
|
2023-08-07 15:16:25 +09:00 |
|
c0sogi
|
418aa83b01
|
Added grammar based sampling
|
2023-08-07 02:21:37 +09:00 |
|
c0sogi
|
ac188a21f3
|
Added low level grammar API
|
2023-08-05 14:43:35 +09:00 |
|
Andrei Betlen
|
ce57920e60
|
Suppress llama.cpp output when loading model.
|
2023-07-28 14:45:18 -04:00 |
|
Andrei Betlen
|
a9b9f0397c
|
Format
|
2023-07-28 01:53:08 -04:00 |
|
Andrei Betlen
|
abc538fcd5
|
fix: annoying bug where attribute exceptions were drowning out file-not-found exceptions
|
2023-07-28 01:43:00 -04:00 |
|
Shouyi Wang
|
426dbfe3f4
|
Change tensor_split from array to pointer
|
2023-07-25 18:29:59 +10:00 |
|
Andrei Betlen
|
078902a6fe
|
Add llama_grammar_accept_token
|
2023-07-24 15:55:26 -04:00 |
|
Andrei Betlen
|
bf901773b0
|
Add llama_sample_grammar
|
2023-07-24 15:42:31 -04:00 |
|
Andrei Betlen
|
1b6997d69f
|
Convert constants to python types and allow python types in low-level api
|
2023-07-24 15:42:07 -04:00 |
|
Andrei Betlen
|
343480364f
|
Merge branch 'main' into v0.2-wip
|
2023-07-24 15:26:08 -04:00 |
|
Andrei Betlen
|
11dd2bf382
|
Add temporary rms_norm_eps parameter
|
2023-07-24 14:09:24 -04:00 |
|
Andrei Betlen
|
8cd64d4ac3
|
Add rms_eps_norm
|
2023-07-24 13:52:12 -04:00 |
|
bretello
|
0f09f10e8c
|
add support for llama2 70b
|
2023-07-24 19:38:24 +02:00 |
|
Andrei Betlen
|
77c9f496b0
|
Merge branch 'main' into v0.2-wip
|
2023-07-24 13:19:54 -04:00 |
|
Andrei Betlen
|
401309d11c
|
Revert "Merge pull request #521 from bretello/main"
This reverts commit 07f0f3a386, reversing
changes made to d8a3ddbb1c.
|
2023-07-24 13:11:10 -04:00 |
|
Andrei
|
07f0f3a386
|
Merge pull request #521 from bretello/main
raise exception when `llama_load_model_from_file` fails
|
2023-07-24 13:09:28 -04:00 |
|
Andrei Betlen
|
d8a3ddbb1c
|
Update llama.cpp
|
2023-07-24 13:08:06 -04:00 |
|
Andrei Betlen
|
985d559971
|
Update llama.cpp
|
2023-07-24 13:04:34 -04:00 |
|
bretello
|
8be7d67f7e
|
raise exception when llama_load_model_from_file fails
|
2023-07-24 14:42:37 +02:00 |
|
Andrei Betlen
|
436036aa67
|
Merge branch 'main' into v0.2-wip
|
2023-07-21 12:42:38 -04:00 |
|
Andrei Betlen
|
b83728ad1e
|
Update llama.cpp
|
2023-07-21 12:33:27 -04:00 |
|
Andrei Betlen
|
0538ba1dab
|
Merge branch 'main' into v0.2-wip
|
2023-07-20 19:06:26 -04:00 |
|
Andrei Betlen
|
01435da740
|
Update llama.cpp
|
2023-07-20 18:54:25 -04:00 |
|
Andrei Betlen
|
28a111704b
|
Fix compatibility with older python versions
|
2023-07-20 18:52:10 -04:00 |
|
Andrei Betlen
|
d10ce62714
|
Revert ctypes argtype change
|
2023-07-20 18:51:53 -04:00 |
|
Andrei
|
365d9a4367
|
Merge pull request #481 from c0sogi/main
Added `RouteErrorHandler` for server
|
2023-07-20 17:41:42 -04:00 |
|
Vinicius
|
a8551477f5
|
Update llama_cpp.py - Fix c_char_p to Array[c_char_p] and c_float to Array[c_float]
|
2023-07-20 17:29:11 -03:00 |
|
Carlos Tejada
|
0756a2d3fb
|
Now the last token is sent when stream=True
|
2023-07-19 22:47:14 -04:00 |
|
Andrei Betlen
|
0b121a7456
|
Format
|
2023-07-19 03:48:27 -04:00 |
|
Andrei Betlen
|
b43917c144
|
Add functions parameters
|
2023-07-19 03:48:20 -04:00 |
|
Andrei Betlen
|
19ba9d3845
|
Use numpy arrays for logits_processors and stopping_criteria. Closes #491
|
2023-07-18 19:27:41 -04:00 |
|
shutup
|
5ed8bf132f
|
expose RoPE param to server start
|
2023-07-18 16:34:36 +08:00 |
|
c0sogi
|
1551ba10bd
|
Added RouteErrorHandler for server
|
2023-07-16 14:57:39 +09:00 |
|
Andrei Betlen
|
8ab098e49d
|
Re-order Llama class params
|
2023-07-15 15:35:08 -04:00 |
|
Andrei Betlen
|
e4f9db37db
|
Fix context_params struct layout
|
2023-07-15 15:34:55 -04:00 |
|
Andrei Betlen
|
f0797a6054
|
Merge branch main into custom_rope
|
2023-07-15 15:11:01 -04:00 |
|
randoentity
|
3f8f276f9f
|
Add bindings for custom_rope
|
2023-07-10 17:37:46 +02:00 |
|
Andrei Betlen
|
a86bfdf0a5
|
bugfix: truncate completion max_tokens to fit context length by default
|
2023-07-09 18:13:29 -04:00 |
|
Andrei Betlen
|
6f70cc4b7d
|
bugfix: pydantic settings missing / changed fields
|
2023-07-09 18:03:31 -04:00 |
|
Andrei
|
5d756de314
|
Merge branch 'main' into add_unlimited_max_tokens
|
2023-07-08 02:37:38 -04:00 |
|
Andrei
|
b8e0bed295
|
Merge pull request #453 from wu-qing-157/main
Fix incorrect token_logprobs (due to indexing after sorting)
|
2023-07-08 02:31:52 -04:00 |
|
Andrei Betlen
|
d6e6aad927
|
bugfix: fix compatibility bug with openai api on last token
|
2023-07-08 00:06:11 -04:00 |
|
Andrei Betlen
|
4f2b5d0b53
|
Format
|
2023-07-08 00:05:10 -04:00 |
|
Andrei Betlen
|
34c505edf2
|
perf: convert pointer to byref
|
2023-07-07 22:54:07 -04:00 |
|
Andrei Betlen
|
52753b77f5
|
Upgrade fastapi to 0.100.0 and pydantic v2
|
2023-07-07 21:38:46 -04:00 |
|
Andrei Betlen
|
11eae75211
|
perf: avoid allocating new buffers during sampling
|
2023-07-07 19:28:53 -04:00 |
|
Andrei Betlen
|
a14d8a9b3f
|
perf: assign to candidates data structure instead
|
2023-07-07 18:58:43 -04:00 |
|
wu-qing-157
|
9e61661518
|
fix indexing token_logprobs after sorting
|
2023-07-07 10:18:49 +00:00 |
|
Andrei Betlen
|
57d8ec3899
|
Add setting to control request interruption
|
2023-07-07 03:37:23 -04:00 |
|
Andrei Betlen
|
4c7cdcca00
|
Add interruptible streaming requests for llama-cpp-python server. Closes #183
|
2023-07-07 03:04:17 -04:00 |
|
Andrei Betlen
|
98ae4e58a3
|
Update llama.cpp
|
2023-07-06 17:57:56 -04:00 |
|
Andrei Betlen
|
b994296c75
|
Update llama.cpp
|
2023-07-05 01:00:14 -04:00 |
|
Andrei Betlen
|
c67f786360
|
Update llama.cpp
|
2023-06-29 01:08:15 -04:00 |
|
Andrei Betlen
|
e34f4414cf
|
Hotfix: logits_all bug
|
2023-06-29 00:57:27 -04:00 |
|
Andrei Betlen
|
a2ede37bd5
|
Load logits directly into scores buffer
|
2023-06-29 00:45:46 -04:00 |
|
Andrei Betlen
|
b95b0ffbeb
|
Use pre-allocated buffers to store input_ids and scores
|
2023-06-29 00:40:47 -04:00 |
|
Andrei Betlen
|
a5e059c053
|
Free model when llama is unloaded. Closes #434
|
2023-06-28 23:58:55 -04:00 |
|
Andrei Betlen
|
3379dc40a1
|
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
|
2023-06-26 08:50:48 -04:00 |
|
Andrei Betlen
|
952228407e
|
Update llama.cpp
|
2023-06-26 08:50:38 -04:00 |
|
Andrei Betlen
|
b4a3db3e54
|
Update type signature
|
2023-06-26 08:50:30 -04:00 |
|
Andrei
|
5eb4ebb041
|
Merge branch 'main' into fix-state-pickle
|
2023-06-26 08:45:02 -04:00 |
|
samfundev
|
d788fb49bf
|
Only concatenate after all batches are done
|
2023-06-24 15:51:46 -04:00 |
|
Andrei
|
877ca6d016
|
Merge branch 'main' into fix-state-pickle
|
2023-06-23 15:13:07 -04:00 |
|
Alexey
|
282698b6d3
|
server: pass seed param from command line to llama
|
2023-06-23 00:19:24 +04:00 |
|
Andrei Betlen
|
e37798777e
|
Update llama.cpp
|
2023-06-20 11:25:10 -04:00 |
|
Andrei Betlen
|
d410f12fae
|
Update docs. Closes #386
|
2023-06-17 13:38:48 -04:00 |
|
Andrei Betlen
|
9f528f4715
|
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
|
2023-06-17 13:37:17 -04:00 |
|
Andrei Betlen
|
d7153abcf8
|
Update llama.cpp
|
2023-06-16 23:11:14 -04:00 |
|
imaprogrammer
|
fd9f294b3a
|
Update llama.py: include the number of input tokens in the ValueError exception
|
2023-06-16 14:11:57 +05:30 |
|
Andrei Betlen
|
1e20be6d0c
|
Add low_vram to server settings
|
2023-06-14 22:13:42 -04:00 |
|
Andrei Betlen
|
44b83cada5
|
Add low_vram parameter
|
2023-06-14 22:12:33 -04:00 |
|
Andrei Betlen
|
f7c5cfaf50
|
Format server options
|
2023-06-14 22:08:28 -04:00 |
|
Andrei Betlen
|
9c41a3e990
|
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
|
2023-06-14 21:50:43 -04:00 |
|
Andrei
|
f568baeef1
|
Merge pull request #351 from player1537-forks/th/add-logits-bias-parameter
Add support for `logit_bias` and `logit_bias_type` parameters
|
2023-06-14 21:49:56 -04:00 |
|
Andrei Betlen
|
f27393ab7e
|
Add additional verbose logs for cache
|
2023-06-14 21:46:48 -04:00 |
|
Andrei Betlen
|
4cefb70cd0
|
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
|
2023-06-14 21:40:19 -04:00 |
|
Andrei Betlen
|
715f98c591
|
Update llama.cpp
|
2023-06-14 21:40:13 -04:00 |
|
Okabintaro
|
10b0cb727b
|
fix: Make LlamaState picklable for disk cache
I fixed the issue by making the saved state a bytes object instead of the ctypes one, which can't be pickled.
|
2023-06-13 12:03:31 +02:00 |
|
Gabor
|
3129a0e7e5
|
correction to add back environment variable support <3 docker
|
2023-06-11 01:11:24 +01:00 |
|
Gabor
|
3ea31930e5
|
fixes abetlen/llama-cpp-python #358
|
2023-06-11 00:58:08 +01:00 |
|
Andrei Betlen
|
21acd7901f
|
Re-enable cache
|
2023-06-10 12:22:31 -04:00 |
|
Andrei Betlen
|
6639371407
|
Update llama.cpp
|
2023-06-10 12:17:38 -04:00 |
|
Tanner Hobson
|
eb7645b3ba
|
Add support for logit_bias and logit_bias_type parameters
|
2023-06-09 13:13:08 -04:00 |
|
Andrei Betlen
|
0da655b3be
|
Temporarily disable cache until save state bug is fixed.
|
2023-06-09 11:10:24 -04:00 |
|
Andrei Betlen
|
556c7edf47
|
Truncate max_tokens if it exceeds context length
|
2023-06-09 10:57:36 -04:00 |
|
Andrei Betlen
|
0c42168508
|
Fix cache implementation breaking changes
|
2023-06-08 13:19:23 -04:00 |
|
Andrei Betlen
|
607d217caa
|
Allow both .so and .dylib extensions for macOS
|
2023-06-08 00:27:19 -04:00 |
|
Andrei
|
0f0b447fa4
|
Merge pull request #289 from Maximilian-Winter/main
Diskcache implementation for llama state.
|
2023-06-06 17:03:03 -04:00 |
|
Andrei
|
d508573fb4
|
Merge pull request #328 from spirilis/mirostat
Added mirostat support for completions, chat completions API
|
2023-06-06 16:58:23 -04:00 |
|
Andrei Betlen
|
aad4b17f52
|
Update llama.cpp
|
2023-06-06 16:23:55 -04:00 |
|
Andrei Betlen
|
8b4968ea67
|
Fix resize issue. Closes #330
|
2023-06-06 11:37:57 -04:00 |
|
Eric B
|
9b1c9e902c
|
Added mirostat support for completions, chat completions API
|
2023-06-05 22:37:11 -04:00 |
|
Andrei Betlen
|
7b57420ea9
|
Update llama.cpp
|
2023-06-05 18:17:29 -04:00 |
|
Maximilian-Winter
|
29f9c9cca3
|
Added both LlamaCache classes, Disk and RAM.
|
2023-05-31 22:33:56 +02:00 |
|
Maximilian Winter
|
9ea7a379d3
|
Merge branch 'abetlen:main' into main
|
2023-05-31 12:55:51 +02:00 |
|
Andrei
|
49fe9395a1
|
Merge pull request #277 from abetlen/add-numpy-support
Use numpy for internal buffers
|
2023-05-29 20:59:30 -04:00 |
|
Maximilian-Winter
|
719c3eae0a
|
Diskcache implementation for llama state.
|
2023-05-28 15:56:38 +02:00 |
|
Andrei Betlen
|
80066f0b80
|
Use async routes
|
2023-05-27 09:12:58 -04:00 |
|
Andrei Betlen
|
c2b59a5f59
|
Import unused import
|
2023-05-26 22:59:29 -04:00 |
|
Andrei Betlen
|
8f2b4456ad
|
Format
|
2023-05-26 22:04:31 -04:00 |
|
Andrei Betlen
|
84e313bd6e
|
Align dtype to match c structs
|
2023-05-26 22:02:16 -04:00 |
|
Andrei Betlen
|
66bcb8d70d
|
Merge branch 'main' into add-numpy-support
|
2023-05-26 20:25:03 -04:00 |
|
Andrei Betlen
|
8f35bddd7e
|
Fix stop sequence performance bug.
|
2023-05-26 20:23:49 -04:00 |
|
Andrei Betlen
|
7fc7bc30e7
|
Remove usage of eval_tokens for cache check
|
2023-05-26 20:12:05 -04:00 |
|
Andrei Betlen
|
fe331ec589
|
Replace eval_logits and eval_tokens with numpy arrays
|
2023-05-26 20:03:31 -04:00 |
|
Andrei Betlen
|
8eb9769f78
|
Add support for numpy
|
2023-05-26 16:12:45 -04:00 |
|
Andrei Betlen
|
4c1b7f7a76
|
Bugfix for logits_processor and stopping_criteria
|
2023-05-26 10:25:28 -04:00 |
|
Andrei Betlen
|
433a2e3e8a
|
Add extra logits_processor and stopping_criteria
|
2023-05-26 03:13:24 -04:00 |
|
Andrei Betlen
|
f74b90ed67
|
Fix streaming hang on last token when cache is on.
|
2023-05-26 03:03:01 -04:00 |
|
Andrei Betlen
|
5be8354e11
|
Added tokenizer
|
2023-05-26 03:00:51 -04:00 |
|
Andrei Betlen
|
8fa2ef1959
|
Format
|
2023-05-26 03:00:35 -04:00 |
|
Andrei Betlen
|
6bd1075291
|
Merge branch 'Maximilian-Winter/main' into main
|
2023-05-26 02:56:11 -04:00 |
|
Andrei Betlen
|
ca01f98e09
|
Add LlamaTokenizer class
|
2023-05-25 14:11:33 -04:00 |
|
Andrei Betlen
|
1d247e0f35
|
Add StoppingCriteria and LogitsProcessor to generate to match huggingface API
|
2023-05-25 14:04:54 -04:00 |
|
Maximilian-Winter
|
c2585b6889
|
Fixed list elements typing
|
2023-05-25 10:54:08 +02:00 |
|