Commit graph

273 commits

Author SHA1 Message Date
Andrei Betlen
848c83dfd0 Add FORCE_CMAKE option 2023-04-25 01:36:37 -04:00
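
FORCE_CMAKE is an install-time environment variable for this package that forces a CMake build of the bundled llama.cpp (exact behavior may vary by version). A minimal sketch of setting it when driving pip from Python:

```python
# Sketch: install llama-cpp-python with the CMake-based build forced.
# FORCE_CMAKE is read at build time by the package's setup script.
import os
import subprocess
import sys

env = dict(os.environ, FORCE_CMAKE="1")
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--no-cache-dir", "llama-cpp-python"],
    env=env,
    check=True,
)
```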
Andrei Betlen
9dddb3a607 Bump version 2023-04-25 00:19:44 -04:00
Andrei Betlen
d484c5634e Bugfix: Check cache keys as prefix to prompt tokens 2023-04-24 22:18:54 -04:00
Andrei Betlen
b75fa96bf7 Update docs 2023-04-24 19:56:57 -04:00
Andrei Betlen
cbe95bbb75 Add cache implementation using llama state 2023-04-24 19:54:41 -04:00
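
The cache commits above add an experimental prompt cache backed by saved llama state, with cache keys matched as prefixes of the prompt tokens. A minimal usage sketch, assuming this era's LlamaCache class and Llama.set_cache method; the model path is a placeholder:

```python
# Sketch: enable the prompt cache so a request whose tokens share a prefix
# with a cached entry can restore saved state instead of re-evaluating it.
from llama_cpp import Llama, LlamaCache

llm = Llama(model_path="./models/7B/ggml-model.bin")  # placeholder path
llm.set_cache(LlamaCache())

# The first call evaluates the prompt; a later call with a matching prefix
# can be resumed from the cached state.
out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```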
Andrei Betlen
2c359a28ff Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-04-24 17:51:27 -04:00
Andrei Betlen
197cf80601 Add save/load state api for Llama class 2023-04-24 17:51:25 -04:00
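
Commit 197cf80601 adds explicit state snapshots on the Llama class. A minimal sketch of the pair, assuming this era's save_state/load_state API; the model path is a placeholder:

```python
# Sketch: snapshot the context after evaluating a prompt, then restore it
# later to continue from that point without re-evaluating the prompt.
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model.bin")  # placeholder path
llm.eval(llm.tokenize(b"The quick brown fox"))

state = llm.save_state()  # capture tokens plus llama.cpp state
# ... generate or reset the context here ...
llm.load_state(state)     # rewind to the snapshot
```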
Andrei Betlen
c4c332fc51 Update llama.cpp 2023-04-24 17:42:09 -04:00
Andrei Betlen
280a047dd6 Update llama.cpp 2023-04-24 15:52:24 -04:00
Andrei Betlen
86f8e5ad91 Refactor internal state for Llama class 2023-04-24 15:47:54 -04:00
Andrei
f37456133a Merge pull request #108 from eiery/main: Update n_batch default to 512 to match upstream llama.cpp 2023-04-24 13:48:09 -04:00
Andrei Betlen
02cf881317 Update llama.cpp 2023-04-24 09:30:10 -04:00
eiery
aa12d8a81f Update llama.py: update n_batch default to 512 to match upstream llama.cpp 2023-04-23 20:56:40 -04:00
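
PR #108 raises the default n_batch (the number of prompt tokens evaluated per llama.cpp call) to 512, matching upstream. A sketch of pinning it explicitly instead of relying on the default; the model path is a placeholder:

```python
# Sketch: set n_batch explicitly. Larger batches evaluate long prompts in
# fewer eval calls at the cost of more memory.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/7B/ggml-model.bin",  # placeholder path
    n_batch=512,  # the new default after PR #108
)
```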
Andrei Betlen
7230599593 Disable mmap when applying lora weights. Closes #107 2023-04-23 14:53:17 -04:00
Andrei Betlen
e99caedbbd Update llama.cpp 2023-04-22 19:50:28 -04:00
Andrei Betlen
643b73e155 Bump version 2023-04-21 19:38:54 -04:00
Andrei Betlen
1eb130a6b2 Update llama.cpp 2023-04-21 17:40:27 -04:00
Andrei Betlen
ba3959eafd Update llama.cpp 2023-04-20 05:15:31 -04:00
Andrei Betlen
207adbdf13 Bump version 2023-04-20 01:48:24 -04:00
Andrei Betlen
3d290623f5 Update llama.cpp 2023-04-20 01:08:15 -04:00
Andrei Betlen
e4647c75ec Add use_mmap flag to server 2023-04-19 15:57:46 -04:00
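
At this point in the history the bundled server read its settings from environment variables, so the new flag surfaces as USE_MMAP. A hedged sketch of launching it with mmap disabled; the variable names assume that settings scheme, and the model path is a placeholder:

```python
# Sketch: launch llama_cpp.server with mmap disabled via environment
# variables (USE_MMAP corresponds to the use_mmap setting).
import os
import subprocess
import sys

env = dict(
    os.environ,
    MODEL="./models/7B/ggml-model.bin",  # placeholder path
    USE_MMAP="0",                        # use_mmap=False
)
subprocess.run([sys.executable, "-m", "llama_cpp.server"], env=env, check=True)
```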
Andrei Betlen
207ebbc8dc Update llama.cpp 2023-04-19 14:02:11 -04:00
Andrei Betlen
0df4d69c20 If lora_base is not set, avoid re-loading the model by passing NULL 2023-04-18 23:45:25 -04:00
Andrei Betlen
95c0dc134e Update type signature to allow a null pointer to be passed. 2023-04-18 23:44:46 -04:00
Andrei Betlen
453e517fd5 Add separate lora_base path for applying LoRA to quantized models using original unquantized model weights. 2023-04-18 10:20:46 -04:00
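
This commit, together with the lora_path bindings added below (35abf89552, eb7f278cc6), lets a quantized model be patched against the original unquantized weights. A minimal sketch; all paths are placeholders:

```python
# Sketch: apply a LoRA adapter to a quantized model. lora_base points at the
# original unquantized weights; note that mmap is disabled while LoRA
# weights are applied (commit 7230599593).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/7B/ggml-model-q4_0.bin",  # placeholder quantized model
    lora_base="./models/7B/ggml-model-f16.bin",    # placeholder unquantized base
    lora_path="./models/7B/lora-adapter.bin",      # placeholder adapter
)
```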
Andrei Betlen
32ca803bd8 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-04-18 02:22:39 -04:00
Andrei Betlen
b2d44aa633 Update llama.cpp 2023-04-18 02:22:35 -04:00
Andrei
4ce6670bbd Merge pull request #87 from SagsMug/main: Fix TypeError in low_level chat 2023-04-18 02:11:40 -04:00
Andrei Betlen
eb7f278cc6 Add lora_path parameter to Llama model 2023-04-18 01:43:44 -04:00
Andrei Betlen
35abf89552 Add bindings for LoRA adapters. Closes #88 2023-04-18 01:30:04 -04:00
Andrei Betlen
3f68e95097 Update llama.cpp 2023-04-18 01:29:27 -04:00
Mug
1b73a15e62 Merge branch 'main' of https://github.com/abetlen/llama-cpp-python 2023-04-17 14:45:42 +02:00
Mug
53d17ad003 Fix wrong end-of-text token type, and fix n_predict behaviour 2023-04-17 14:45:28 +02:00
Andrei Betlen
b2a24bddac Update docs 2023-04-15 22:31:14 -04:00
Andrei Betlen
e38485a66d Bump version. 2023-04-15 20:27:55 -04:00
Andrei Betlen
89856ef00d Bugfix: only eval new tokens 2023-04-15 17:32:53 -04:00
Andrei Betlen
887f3b73ac Update llama.cpp 2023-04-15 12:16:05 -04:00
Andrei Betlen
92c077136d Add experimental cache 2023-04-15 12:03:09 -04:00
Andrei Betlen
a6372a7ae5 Update stop sequences for chat 2023-04-15 12:02:48 -04:00
Andrei Betlen
83b2be6dc4 Update chat parameters 2023-04-15 11:58:43 -04:00
Andrei Betlen
62087514c6 Update chat prompt 2023-04-15 11:58:19 -04:00
Andrei Betlen
02f9fb82fb Bugfix 2023-04-15 11:39:52 -04:00
Andrei Betlen
3cd67c7bd7 Add type annotations 2023-04-15 11:39:21 -04:00
Andrei Betlen
d7de0e8014 Bugfix 2023-04-15 00:08:04 -04:00
Andrei Betlen
e90e122f2a Use clear 2023-04-14 23:33:18 -04:00
Andrei Betlen
ac7068a469 Track generated tokens internally 2023-04-14 23:33:00 -04:00
Andrei Betlen
25b646c2fb Update llama.cpp 2023-04-14 23:32:05 -04:00
Andrei Betlen
6e298d8fca Set kv cache size to f16 by default 2023-04-14 22:21:19 -04:00
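
Commit 6e298d8fca makes the half-precision key/value cache the default. A sketch of setting the flag explicitly, assuming the era's f16_kv constructor parameter; the model path is a placeholder:

```python
# Sketch: request the f16 KV cache explicitly rather than relying on the
# new default; f16 roughly halves KV-cache memory versus f32.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/7B/ggml-model.bin",  # placeholder path
    f16_kv=True,  # the default as of 6e298d8fca
)
```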
Andrei Betlen
9c8c2c37dc Update llama.cpp 2023-04-14 10:01:57 -04:00
Andrei Betlen
6c7cec0c65 Fix completion request 2023-04-14 10:01:15 -04:00