Andrei Betlen | 4cefb70cd0 | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-06-14 21:40:19 -04:00
Andrei Betlen | 715f98c591 | Update llama.cpp | 2023-06-14 21:40:13 -04:00
Gabor | 3129a0e7e5 | correction to add back environment variable support <3 docker | 2023-06-11 01:11:24 +01:00
Gabor | 3ea31930e5 | fixes abetlen/llama-cpp-python #358 | 2023-06-11 00:58:08 +01:00
Andrei Betlen | 21acd7901f | Re-enable cache | 2023-06-10 12:22:31 -04:00
Andrei Betlen | 6639371407 | Update llama.cpp | 2023-06-10 12:17:38 -04:00
Andrei Betlen | 0da655b3be | Temporarily disable cache until save state bug is fixed. | 2023-06-09 11:10:24 -04:00
Andrei Betlen | 556c7edf47 | Truncate max_tokens if it exceeds context length | 2023-06-09 10:57:36 -04:00
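
The truncation in 556c7edf47 amounts to clamping the requested completion length to the space left in the context window. A minimal sketch of the idea, with illustrative names (not the library's internals):

```python
def clamp_max_tokens(max_tokens: int, n_prompt_tokens: int, n_ctx: int) -> int:
    """Cap max_tokens so prompt + completion fits inside the n_ctx window."""
    remaining = n_ctx - n_prompt_tokens
    return max(0, min(max_tokens, remaining))

# e.g. a 2048-token context with a 1900-token prompt leaves room for 148 tokens
assert clamp_max_tokens(256, 1900, 2048) == 148
```
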
Andrei Betlen | 0c42168508 | Fix cache implementation breaking changes | 2023-06-08 13:19:23 -04:00
Andrei Betlen | 607d217caa | Allow both .so and .dylib extensions for macos | 2023-06-08 00:27:19 -04:00
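
Commit 607d217caa concerns shared-library discovery: a macOS build may emit either suffix, so the loader should try both. A hedged sketch of that pattern (directory layout and library stem are illustrative):

```python
import ctypes
import pathlib
import sys

def load_shared_lib(base_dir: pathlib.Path, stem: str = "libllama") -> ctypes.CDLL:
    """Try platform-appropriate suffixes; macOS builds may produce .so or .dylib."""
    suffixes = [".so", ".dylib"] if sys.platform == "darwin" else [".so"]
    for suffix in suffixes:
        candidate = base_dir / f"{stem}{suffix}"
        if candidate.exists():
            return ctypes.CDLL(str(candidate))
    raise FileNotFoundError(f"no {stem} library found in {base_dir}")
```
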
Andrei | 0f0b447fa4 | Merge pull request #289 from Maximilian-Winter/main: Diskcache implementation for llama state. | 2023-06-06 17:03:03 -04:00
Andrei | d508573fb4 | Merge pull request #328 from spirilis/mirostat: Added mirostat support for completions, chat completions API | 2023-06-06 16:58:23 -04:00
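
The mirostat work merged in #328 exposes llama.cpp's mirostat sampler through the completion APIs. A usage sketch, assuming the mirostat_mode / mirostat_tau / mirostat_eta parameter names (the log itself does not spell them out; model path is illustrative):

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model.bin")  # illustrative path

# mirostat_mode=2 selects Mirostat 2.0; tau is the target entropy, eta the learning rate
output = llm.create_completion(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    mirostat_mode=2,
    mirostat_tau=5.0,
    mirostat_eta=0.1,
)
print(output["choices"][0]["text"])
```
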
Andrei Betlen | aad4b17f52 | Update llama.cpp | 2023-06-06 16:23:55 -04:00
Andrei Betlen | 8b4968ea67 | Fix resize issue. Closes #330 | 2023-06-06 11:37:57 -04:00
Eric B | 9b1c9e902c | Added mirostat support for completions, chat completions API | 2023-06-05 22:37:11 -04:00
Andrei Betlen | 7b57420ea9 | Update llama.cpp | 2023-06-05 18:17:29 -04:00
Maximilian-Winter | 29f9c9cca3 | Added both LlamaCache classes, Disk and RAM. | 2023-05-31 22:33:56 +02:00
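
Commit 29f9c9cca3 (merged via #289 above) adds two cache backends for llama state, one in RAM and one on disk via diskcache. A hedged usage sketch, assuming the Disk/RAM class names from the commit message and a set_cache hook on the model (the cache_dir keyword is an assumption):

```python
from llama_cpp import Llama, LlamaDiskCache, LlamaRAMCache  # names per the commit message

llm = Llama(model_path="./models/7B/ggml-model.bin")  # illustrative path

# Keep cached llama state on disk so it survives process restarts...
llm.set_cache(LlamaDiskCache(cache_dir=".cache/llama"))
# ...or keep it in memory for the lifetime of the process:
# llm.set_cache(LlamaRAMCache())
```
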
Maximilian Winter | 9ea7a379d3 | Merge branch 'abetlen:main' into main | 2023-05-31 12:55:51 +02:00
Andrei | 49fe9395a1 | Merge pull request #277 from abetlen/add-numpy-support: Use numpy for internal buffers | 2023-05-29 20:59:30 -04:00
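
PR #277 (see fe331ec589 below) replaces the per-token Python lists backing eval_tokens and eval_logits with preallocated numpy buffers, avoiding reallocation on every decoded token. A sketch of the underlying idea, with illustrative names and sizes; the dtypes echo the later "Align dtype to match c structs" commit:

```python
import numpy as np

n_ctx, n_vocab = 2048, 32000  # illustrative model dimensions

# Preallocate once, then fill in place as tokens are evaluated,
# instead of appending to Python lists on every step.
eval_tokens = np.zeros(n_ctx, dtype=np.intc)                # matches the C token type
eval_logits = np.zeros((n_ctx, n_vocab), dtype=np.single)   # matches the C float type

n_past = 0

def store_step(tokens: np.ndarray, logits: np.ndarray) -> None:
    """Write one eval step into the preallocated buffers."""
    global n_past
    n = len(tokens)
    eval_tokens[n_past : n_past + n] = tokens
    eval_logits[n_past : n_past + n] = logits
    n_past += n
```
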
Maximilian-Winter | 719c3eae0a | Diskcache implementation for llama state. | 2023-05-28 15:56:38 +02:00
Andrei Betlen | 80066f0b80 | Use async routes | 2023-05-27 09:12:58 -04:00
Andrei Betlen | c2b59a5f59 | Import unused import | 2023-05-26 22:59:29 -04:00
Andrei Betlen | 8f2b4456ad | Format | 2023-05-26 22:04:31 -04:00
Andrei Betlen | 84e313bd6e | Align dtype to match c structs | 2023-05-26 22:02:16 -04:00
Andrei Betlen | 66bcb8d70d | Merge branch 'main' into add-numpy-support | 2023-05-26 20:25:03 -04:00
Andrei Betlen | 8f35bddd7e | Fix stop sequence performance bug. | 2023-05-26 20:23:49 -04:00
Andrei Betlen | 7fc7bc30e7 | Remove usage of eval_tokens for cache check | 2023-05-26 20:12:05 -04:00
Andrei Betlen | fe331ec589 | Replace eval_logits and eval_tokens with numpy arrays | 2023-05-26 20:03:31 -04:00
Andrei Betlen | 8eb9769f78 | Add support for numpy | 2023-05-26 16:12:45 -04:00
Andrei Betlen | 4c1b7f7a76 | Bugfix for logits_processor and stopping_criteria | 2023-05-26 10:25:28 -04:00
Andrei Betlen | 433a2e3e8a | Add extra logits_processor and stopping_criteria | 2023-05-26 03:13:24 -04:00
Andrei Betlen | f74b90ed67 | Fix streaming hang on last token when cache is on. | 2023-05-26 03:03:01 -04:00
Andrei Betlen | 5be8354e11 | Added tokenizer | 2023-05-26 03:00:51 -04:00
Andrei Betlen | 8fa2ef1959 | Format | 2023-05-26 03:00:35 -04:00
Andrei Betlen | 6bd1075291 | Merge branch 'Maximilian-Winter/main' into main | 2023-05-26 02:56:11 -04:00
Andrei Betlen | ca01f98e09 | Add LlamaTokenizer class | 2023-05-25 14:11:33 -04:00
Andrei Betlen | 1d247e0f35 | Add StoppingCriteria and LogitsProcessor to generate to match huggingface API | 2023-05-25 14:04:54 -04:00
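
Commit 1d247e0f35 mirrors the huggingface transformers interface: a logits processor maps (input_ids, scores) to new scores, and a stopping criterion maps (input_ids, logits) to a bool. A hedged sketch of plugging both into a completion call, assuming the list wrappers are exported at the package top level (token id and length threshold are arbitrary):

```python
import numpy as np
from llama_cpp import Llama, LogitsProcessorList, StoppingCriteriaList

llm = Llama(model_path="./models/7B/ggml-model.bin")  # illustrative path

def ban_token(input_ids: np.ndarray, scores: np.ndarray) -> np.ndarray:
    scores[123] = -np.inf  # suppress an arbitrary token id by zeroing its probability
    return scores

def stop_after_100(input_ids: np.ndarray, logits: np.ndarray) -> bool:
    return len(input_ids) > 100  # stop once the sequence exceeds 100 tokens

output = llm.create_completion(
    "Hello",
    logits_processor=LogitsProcessorList([ban_token]),
    stopping_criteria=StoppingCriteriaList([stop_after_100]),
)
```
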
Maximilian-Winter | c2585b6889 | Fixed list elements typing | 2023-05-25 10:54:08 +02:00
Maximilian-Winter | da463e6c8c | Added types to logit processor list and stop criteria list | 2023-05-25 09:07:16 +02:00
Maximilian-Winter | c05fcdf42f | Fixed None value of logits processors. | 2023-05-24 22:02:06 +02:00
Maximilian-Winter | 5bb780d455 | Implemented logit processors and stop criteria | 2023-05-24 21:55:44 +02:00
Andrei Betlen | fab064ded9 | Remove unnecessary ffi calls | 2023-05-23 17:56:21 -04:00
Andrei Betlen | 0adb9ec37a | Use model_name and index in response | 2023-05-21 21:30:03 -04:00
Andrei Betlen | 922b5b2bfd | Merge branch 'main' into server-embedding | 2023-05-21 21:21:38 -04:00
Andrei Betlen | cd102e9da1 | Cache shared library function calls for static tokens | 2023-05-21 19:18:56 -04:00
Andrei Betlen | b895511cca | Fix penalize_nl | 2023-05-21 18:38:06 -04:00
Andrei Betlen | 03e2947b03 | Fix unnecessary memory allocation while sampling | 2023-05-21 18:36:34 -04:00
Andrei Betlen | fafe47114c | Update llama.cpp | 2023-05-21 17:47:21 -04:00
Andrei Betlen | 76b1d2cd20 | Change properties to functions to match token functions | 2023-05-20 08:24:06 -04:00
Andrei Betlen | a7ba85834f | Add n_ctx, n_vocab, and n_embd properties | 2023-05-20 08:13:41 -04:00
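
The last two entries first add n_ctx, n_vocab, and n_embd as properties (a7ba85834f), then convert them to methods so they call like the existing token functions (76b1d2cd20). A usage sketch after that change, assuming the method-call form (model path illustrative):

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model.bin")  # illustrative path

# After 76b1d2cd20 these are methods rather than properties,
# matching the token_bos()/token_eos() call style.
print(llm.n_ctx(), llm.n_vocab(), llm.n_embd())
```
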