Commit graph

1726 commits

Author SHA1 Message Date
Andrei Betlen  6153baab2d  Clean up logprobs implementation  2023-04-14 09:59:33 -04:00
Andrei Betlen  26cc4ee029  Fix signature for stop parameter  2023-04-14 09:59:08 -04:00
Andrei Betlen  7dc0838fff  Bump version  2023-04-13 00:35:05 -04:00
Andrei Betlen  6595ad84bf  Add field to disable reseting between generations  2023-04-13 00:28:00 -04:00
Andrei Betlen  22fa5a621f  Revert "Deprecate generate method" (reverts commit 6cf5876538)  2023-04-13 00:19:55 -04:00
Andrei Betlen  4f5f99ef2a  Formatting  2023-04-12 22:40:12 -04:00
Andrei Betlen  0daf16defc  Enable logprobs on completion endpoint  2023-04-12 19:08:11 -04:00
Andrei Betlen  19598ac4e8  Fix threading bug. Closes #62  2023-04-12 19:07:53 -04:00
Andrei Betlen  005c78d26c  Update llama.cpp  2023-04-12 14:29:00 -04:00
Andrei Betlen  c854c2564b  Don't serialize stateful parameters  2023-04-12 14:07:14 -04:00
Andrei Betlen  2f9b649005  Style fix  2023-04-12 14:06:22 -04:00
Andrei Betlen  6cf5876538  Deprecate generate method  2023-04-12 14:06:04 -04:00
Andrei Betlen  b3805bb9cc  Implement logprobs parameter for text completion. Closes #2  2023-04-12 14:05:11 -04:00
Niek van der Maas  9ce8146231  More generic model name  2023-04-12 11:56:16 +02:00
Niek van der Maas  c14201dc0f  Add Dockerfile + build workflow  2023-04-12 11:53:39 +02:00
Andrei Betlen  2a60eb820f  Update llama.cpp  2023-04-11 23:53:46 -04:00
Andrei Betlen  9f1e565594  Update llama.cpp  2023-04-11 11:59:03 -04:00
Andrei Betlen  213cc5c340  Remove async from function signature to avoid blocking the server  2023-04-11 11:54:31 -04:00
Andrei Betlen  3727ba4d9e  Bump version  2023-04-10 12:56:48 -04:00
Andrei Betlen  5247e32d9e  Update llama.cpp  2023-04-10 12:56:23 -04:00
jm12138  90e1021154  Add unlimited max_tokens  2023-04-10 15:56:05 +00:00
Andrei Betlen  ffb1e80251  Bump version  2023-04-10 11:37:41 -04:00
Andrei  a5554a2f02  Merge pull request #61 from jm12138/fix_windows_install (Add UTF-8 Encoding in read_text.)  2023-04-10 11:35:04 -04:00
jm12138  adfd9f681c  Matched the other encode calls  2023-04-10 15:33:31 +00:00
Andrei  0460fdb9ce  Merge pull request #28 from SagsMug/local-lib (Allow local llama library usage)  2023-04-10 11:32:19 -04:00
Mug  2559e5af9b  Changed the environment variable name into "LLAMA_CPP_LIB"  2023-04-10 17:27:17 +02:00
Andrei  63d8a3c688  Merge pull request #63 from SagsMug/main (Low level chat: Added iterative search to prevent instructions from being echoed)  2023-04-10 11:23:00 -04:00
Mug  ee71ce8ab7  Make windows users happy (hopefully)  2023-04-10 17:12:25 +02:00
Mug  cf339c9b3c  Better custom library debugging  2023-04-10 17:06:58 +02:00
Mug  4132293d2d  Merge branch 'main' of https://github.com/abetlen/llama-cpp-python into local-lib  2023-04-10 17:00:42 +02:00
Mug  76131d5bb8  Use environment variable for library override  2023-04-10 17:00:35 +02:00
Mug  3bb45f1658  More reasonable defaults  2023-04-10 16:38:45 +02:00
Mug  0cccb41a8f  Added iterative search to prevent instructions from being echoed, add ignore eos, add no-mmap, fixed 1 character echo too much bug  2023-04-10 16:35:38 +02:00
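The local-lib commits above (76131d5bb8, 2559e5af9b) describe letting users point the bindings at their own llama shared library via an environment variable named "LLAMA_CPP_LIB". A minimal sketch of that override pattern, assuming a hypothetical resolver function and default path (not the project's actual loader code):

```python
import os

def resolve_llama_lib(default_path="libllama.so"):
    """Return the shared-library path to load, honoring the
    LLAMA_CPP_LIB environment-variable override if it is set."""
    return os.environ.get("LLAMA_CPP_LIB", default_path)

# With no override set, the default is used; setting the variable
# redirects loading to a user-supplied build of the library.
os.environ.pop("LLAMA_CPP_LIB", None)
print(resolve_llama_lib())  # → libllama.so
os.environ["LLAMA_CPP_LIB"] = "/opt/llama/libllama.so"
print(resolve_llama_lib())  # → /opt/llama/libllama.so
```

In practice the resolved path would be handed to something like `ctypes.CDLL`; the sketch only shows the override lookup itself.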
jm12138  c65a621b6b  Add UTF-8 Encoding in read_text.  2023-04-10 10:28:24 +00:00
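Commit c65a621b6b (merged via PR #61, fix_windows_install) adds an explicit UTF-8 encoding to a `read_text` call. The underlying issue is that `pathlib.Path.read_text()` without an `encoding` argument falls back to the locale's preferred encoding, which on Windows is often not UTF-8, so reading a UTF-8 file with non-ASCII characters can raise a decode error during install. A small illustration (file name is made up for the example):

```python
from pathlib import Path

# Write a UTF-8 file containing a non-ASCII character (an em dash).
p = Path("example_readme.md")
p.write_text("llama-cpp-python \u2014 bindings", encoding="utf-8")

# Passing encoding="utf-8" explicitly makes the read deterministic
# across platforms instead of depending on the locale encoding.
text = p.read_text(encoding="utf-8")
print(text)  # → llama-cpp-python — bindings
```

The companion commit adfd9f681c applies the same explicit encoding to the remaining calls for consistency.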
Andrei Betlen  241d608bbb  Update workflow permissions  2023-04-10 02:35:00 -04:00
Andrei Betlen  3d56c3b706  Run tests for pr's to main  2023-04-10 02:19:22 -04:00
Andrei Betlen  bc02ce353b  Bump version  2023-04-10 02:12:19 -04:00
Andrei Betlen  1f67ad2a0b  Add use_mmap option  2023-04-10 02:11:35 -04:00
Andrei Betlen  d41cb0ecf7  Add create release step to workflow  2023-04-10 01:54:52 -04:00
Andrei Betlen  8594b8388e  Add build and release  2023-04-10 01:29:32 -04:00
Andrei Betlen  a984f55d79  Quickfix: forgot to clone submodules when building and publishing pypi package  2023-04-10 00:51:25 -04:00
Andrei Betlen  196650ccb2  Update model paths to be more clear they should point to file  2023-04-09 22:45:55 -04:00
Andrei Betlen  a79d3eb732  Fix workflow name  2023-04-09 22:38:19 -04:00
Andrei Betlen  fda975e5a9  Rename test publish  2023-04-09 22:34:17 -04:00
Andrei Betlen  baa394491c  Add PyPI publish workflow  2023-04-09 22:32:30 -04:00
Andrei Betlen  8c2bb3042f  Bump version  2023-04-09 22:12:23 -04:00
Andrei Betlen  c3c2623e8b  Update llama.cpp  2023-04-09 22:01:33 -04:00
Andrei Betlen  e636214b4e  Add test publish workflow  2023-04-08 19:57:37 -04:00
Andrei Betlen  314ce7d1cc  Fix cpu count default  2023-04-08 19:54:04 -04:00
Andrei Betlen  3fbc06361f  Formatting  2023-04-08 16:01:45 -04:00