Commit graph

297 commits

Author SHA1 Message Date
Andrei Betlen
57d8ec3899 Add setting to control request interruption 2023-07-07 03:37:23 -04:00
Andrei Betlen
4c7cdcca00 Add interruptible streaming requests for llama-cpp-python server. Closes #183 2023-07-07 03:04:17 -04:00
Andrei Betlen
98ae4e58a3 Update llama.cpp 2023-07-06 17:57:56 -04:00
Andrei Betlen
b994296c75 Update llama.cpp 2023-07-05 01:00:14 -04:00
Andrei Betlen
c67f786360 Update llama.cpp 2023-06-29 01:08:15 -04:00
Andrei Betlen
e34f4414cf Hotfix: logits_all bug 2023-06-29 00:57:27 -04:00
Andrei Betlen
a2ede37bd5 Load logits directly into scores buffer 2023-06-29 00:45:46 -04:00
Andrei Betlen
b95b0ffbeb Use pre-allocated buffers to store input_ids and scores 2023-06-29 00:40:47 -04:00
Andrei Betlen
a5e059c053 Free model when llama is unloaded. Closes #434 2023-06-28 23:58:55 -04:00
Andrei Betlen
3379dc40a1 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-26 08:50:48 -04:00
Andrei Betlen
952228407e Update llama.cpp 2023-06-26 08:50:38 -04:00
Andrei Betlen
b4a3db3e54 Update type signature 2023-06-26 08:50:30 -04:00
Andrei
5eb4ebb041 Merge branch 'main' into fix-state-pickle 2023-06-26 08:45:02 -04:00
samfundev
d788fb49bf Only concatenate after all batches are done 2023-06-24 15:51:46 -04:00
Andrei
877ca6d016 Merge branch 'main' into fix-state-pickle 2023-06-23 15:13:07 -04:00
Alexey
282698b6d3 server: pass seed param from command line to llama 2023-06-23 00:19:24 +04:00
Andrei Betlen
e37798777e Update llama.cpp 2023-06-20 11:25:10 -04:00
Andrei Betlen
d410f12fae Update docs. Closes #386 2023-06-17 13:38:48 -04:00
Andrei Betlen
9f528f4715 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-17 13:37:17 -04:00
Andrei Betlen
d7153abcf8 Update llama.cpp 2023-06-16 23:11:14 -04:00
imaprogrammer
fd9f294b3a Update llama.py: added the number of input tokens to the ValueError exception message 2023-06-16 14:11:57 +05:30
Andrei Betlen
1e20be6d0c Add low_vram to server settings 2023-06-14 22:13:42 -04:00
Andrei Betlen
44b83cada5 Add low_vram parameter 2023-06-14 22:12:33 -04:00
Andrei Betlen
f7c5cfaf50 Format server options 2023-06-14 22:08:28 -04:00
Andrei Betlen
9c41a3e990 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-14 21:50:43 -04:00
Andrei
f568baeef1 Merge pull request #351 from player1537-forks/th/add-logits-bias-parameter 2023-06-14 21:49:56 -04:00
Add support for `logit_bias` and `logit_bias_type` parameters
Andrei Betlen
f27393ab7e Add additional verbose logs for cache 2023-06-14 21:46:48 -04:00
Andrei Betlen
4cefb70cd0 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-14 21:40:19 -04:00
Andrei Betlen
715f98c591 Update llama.cpp 2023-06-14 21:40:13 -04:00
Okabintaro
10b0cb727b fix: Make LlamaState picklable for disk cache 2023-06-13 12:03:31 +02:00
I fixed the issue by making the saved state a bytes object instead of the ctypes one, which can't be pickled (see the sketch after this list).
Gabor
3129a0e7e5 correction to add back environment variable support <3 docker 2023-06-11 01:11:24 +01:00
Gabor
3ea31930e5 fixes abetlen/llama-cpp-python #358 2023-06-11 00:58:08 +01:00
Andrei Betlen
21acd7901f Re-enable cache 2023-06-10 12:22:31 -04:00
Andrei Betlen
6639371407 Update llama.cpp 2023-06-10 12:17:38 -04:00
Tanner Hobson
eb7645b3ba Add support for logit_bias and logit_bias_type parameters 2023-06-09 13:13:08 -04:00
Andrei Betlen
0da655b3be Temporarily disable cache until save state bug is fixed. 2023-06-09 11:10:24 -04:00
Andrei Betlen
556c7edf47 Truncate max_tokens if it exceeds context length 2023-06-09 10:57:36 -04:00
Andrei Betlen
0c42168508 Fix cache implementation breaking changes 2023-06-08 13:19:23 -04:00
Andrei Betlen
607d217caa Allow both .so and .dylib extensions for macos 2023-06-08 00:27:19 -04:00
Andrei
0f0b447fa4 Merge pull request #289 from Maximilian-Winter/main 2023-06-06 17:03:03 -04:00
Diskcache implementation for llama state.
Andrei
d508573fb4 Merge pull request #328 from spirilis/mirostat 2023-06-06 16:58:23 -04:00
Added mirostat support for completions, chat completions API
Andrei Betlen
aad4b17f52 Update llama.cpp 2023-06-06 16:23:55 -04:00
Andrei Betlen
8b4968ea67 Fix resize issue. Closes #330 2023-06-06 11:37:57 -04:00
Eric B
9b1c9e902c Added mirostat support for completions, chat completions API 2023-06-05 22:37:11 -04:00
Andrei Betlen
7b57420ea9 Update llama.cpp 2023-06-05 18:17:29 -04:00
Maximilian-Winter
29f9c9cca3 Added both LlamaCache classes, Disk and RAM. 2023-05-31 22:33:56 +02:00
Maximilian Winter
9ea7a379d3 Merge branch 'abetlen:main' into main 2023-05-31 12:55:51 +02:00
Andrei
49fe9395a1 Merge pull request #277 from abetlen/add-numpy-support 2023-05-29 20:59:30 -04:00
Use numpy for internal buffers
Maximilian-Winter
719c3eae0a Diskcache implementation for llama state. 2023-05-28 15:56:38 +02:00
Andrei Betlen
80066f0b80 Use async routes 2023-05-27 09:12:58 -04:00
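
Commit 10b0cb727b above describes making the saved llama state picklable for the disk cache by storing a bytes copy instead of the raw ctypes buffer. Below is a minimal, hypothetical sketch of that idea; the class and field names are illustrative only and are not the library's actual API.

```python
import ctypes
import pickle


class PicklableLlamaState:
    """Illustrative stand-in for a saved llama state (not the real LlamaState class)."""

    def __init__(self, state_buffer: ctypes.Array, n_tokens: int):
        # Keep a plain bytes copy of the C state: bytes pickles cleanly,
        # whereas a raw ctypes array does not.
        self.llama_state = bytes(state_buffer)
        self.n_tokens = n_tokens


# Pretend this buffer was filled by the C API (e.g. via llama_copy_state_data).
raw_state = (ctypes.c_uint8 * 8)(*range(8))

state = PicklableLlamaState(raw_state, n_tokens=8)
restored = pickle.loads(pickle.dumps(state))  # works; a ctypes field here would raise
assert restored.llama_state == bytes(raw_state)

# To hand the state back to the C side, copy the bytes into a fresh ctypes buffer.
buffer = (ctypes.c_uint8 * len(restored.llama_state)).from_buffer_copy(restored.llama_state)
```

Because the state is plain bytes, a pickle-based cache such as diskcache can store and restore it without ever touching the ctypes layer.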