6cf5876538  Deprecate generate method  (Andrei Betlen, 2023-04-12 14:06:04 -04:00)
b3805bb9cc  Implement logprobs parameter for text completion. Closes #2  (Andrei Betlen, 2023-04-12 14:05:11 -04:00)
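As a hedged sketch of what a logprobs feature computes: the natural-log probability of each candidate token, i.e. a log-softmax over the raw logits. The function name and inputs below are illustrative, not the package's actual code.

```python
import math

# Log-softmax over raw logits: each result is the natural-log probability
# of the corresponding token. Subtracting the max first keeps the
# exponentials numerically stable.
def token_logprobs(logits):
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_z for x in logits]

lps = token_logprobs([2.0, 1.0, 0.0])
```

Exponentiating the returned values recovers a proper probability distribution, which is a quick sanity check for any logprobs implementation.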
2a60eb820f  Update llama.cpp  (Andrei Betlen, 2023-04-11 23:53:46 -04:00)
9f1e565594  Update llama.cpp  (Andrei Betlen, 2023-04-11 11:59:03 -04:00)
213cc5c340  Remove async from function signature to avoid blocking the server  (Andrei Betlen, 2023-04-11 11:54:31 -04:00)
3727ba4d9e  Bump version  (Andrei Betlen, 2023-04-10 12:56:48 -04:00)
5247e32d9e  Update llama.cpp  (Andrei Betlen, 2023-04-10 12:56:23 -04:00)
ffb1e80251  Bump version  (Andrei Betlen, 2023-04-10 11:37:41 -04:00)
a5554a2f02  Merge pull request #61 from jm12138/fix_windows_install: Add UTF-8 Encoding in read_text.  (Andrei, 2023-04-10 11:35:04 -04:00)
adfd9f681c  Matched the other encode calls  (jm12138, 2023-04-10 15:33:31 +00:00)
0460fdb9ce  Merge pull request #28 from SagsMug/local-lib: Allow local llama library usage  (Andrei, 2023-04-10 11:32:19 -04:00)
2559e5af9b  Changed the environment variable name into "LLAMA_CPP_LIB"  (Mug, 2023-04-10 17:27:17 +02:00)
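The local-library feature boils down to an environment-variable override: if LLAMA_CPP_LIB is set, the shared library is loaded from that path instead of the bundled one. A minimal sketch of the idea, where `resolve_lib_path` is a hypothetical helper rather than the package's real loader:

```python
import os

# Prefer the LLAMA_CPP_LIB environment variable when present; otherwise
# fall back to the default (bundled) shared-library name.
def resolve_lib_path(default="libllama.so"):
    return os.environ.get("LLAMA_CPP_LIB", default)

os.environ["LLAMA_CPP_LIB"] = "/opt/llama/libllama.so"
override = resolve_lib_path()       # uses the override
del os.environ["LLAMA_CPP_LIB"]
fallback = resolve_lib_path()       # falls back to the default
```

An environment variable keeps the override out of the API surface, so existing callers need no code changes to point at a locally built llama.cpp.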
63d8a3c688  Merge pull request #63 from SagsMug/main: Low level chat: Added iterative search to prevent instructions from being echoed  (Andrei, 2023-04-10 11:23:00 -04:00)
ee71ce8ab7  Make windows users happy (hopefully)  (Mug, 2023-04-10 17:12:25 +02:00)
cf339c9b3c  Better custom library debugging  (Mug, 2023-04-10 17:06:58 +02:00)
4132293d2d  Merge branch 'main' of https://github.com/abetlen/llama-cpp-python into local-lib  (Mug, 2023-04-10 17:00:42 +02:00)
76131d5bb8  Use environment variable for library override  (Mug, 2023-04-10 17:00:35 +02:00)
0cccb41a8f  Added iterative search to prevent instructions from being echoed, add ignore eos, add no-mmap, fixed 1 character echo too much bug  (Mug, 2023-04-10 16:35:38 +02:00)
c65a621b6b  Add UTF-8 Encoding in read_text.  (jm12138, 2023-04-10 10:28:24 +00:00)
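The Windows install fix above addresses a common pitfall: `Path.read_text()` without an explicit encoding uses the locale encoding (often cp1252 on Windows), which can raise `UnicodeDecodeError` on UTF-8 files. A self-contained sketch of the pattern, using a temporary file as a stand-in for the file read during install:

```python
import os
import tempfile
from pathlib import Path

# Write a UTF-8 file, then read it back with an explicit encoding so the
# result is the same on every platform, regardless of the locale default.
fd, name = tempfile.mkstemp()
os.close(fd)
p = Path(name)
p.write_text("naïve café", encoding="utf-8")
text = p.read_text(encoding="utf-8")  # explicit encoding, never the locale's
p.unlink()
```

Passing `encoding="utf-8"` on both the write and the read is the conventional fix whenever file contents are known to be UTF-8.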
241d608bbb  Update workflow permissions  (Andrei Betlen, 2023-04-10 02:35:00 -04:00)
3d56c3b706  Run tests for pr's to main  (Andrei Betlen, 2023-04-10 02:19:22 -04:00)
bc02ce353b  Bump version  (Andrei Betlen, 2023-04-10 02:12:19 -04:00)
1f67ad2a0b  Add use_mmap option  (Andrei Betlen, 2023-04-10 02:11:35 -04:00)
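As an illustrative sketch of what a `use_mmap` option buys: the model file is mapped into the process address space and pages are faulted in lazily, rather than being read into a heap buffer up front. The file name and contents below are stand-ins, not the package's loading code.

```python
import mmap
import os
import tempfile

# Create a throwaway file standing in for a model file, then memory-map it
# read-only; only the pages actually touched are loaded from disk.
fd, name = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"model-weights-bytes")
with open(name, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    header = bytes(mm[:5])  # reading a slice faults in just those pages
    mm.close()
os.unlink(name)
```

For multi-gigabyte model files, mapping instead of reading shortens startup and lets the OS share pages between processes, which is why loaders commonly expose it as a toggle.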
d41cb0ecf7  Add create release step to workflow  (Andrei Betlen, 2023-04-10 01:54:52 -04:00)
8594b8388e  Add build and release  (Andrei Betlen, 2023-04-10 01:29:32 -04:00)
a984f55d79  Quickfix: forgot to clone submodules when building and publishing pypi package  (Andrei Betlen, 2023-04-10 00:51:25 -04:00)
196650ccb2  Update model paths to be more clear they should point to file  (Andrei Betlen, 2023-04-09 22:45:55 -04:00)
a79d3eb732  Fix workflow name  (Andrei Betlen, 2023-04-09 22:38:19 -04:00)
fda975e5a9  Rename test publish  (Andrei Betlen, 2023-04-09 22:34:17 -04:00)
baa394491c  Add PyPI publish workflow  (Andrei Betlen, 2023-04-09 22:32:30 -04:00)
8c2bb3042f  Bump version  (Andrei Betlen, 2023-04-09 22:12:23 -04:00)
c3c2623e8b  Update llama.cpp  (Andrei Betlen, 2023-04-09 22:01:33 -04:00)
e636214b4e  Add test publish workflow  (Andrei Betlen, 2023-04-08 19:57:37 -04:00)
314ce7d1cc  Fix cpu count default  (Andrei Betlen, 2023-04-08 19:54:04 -04:00)
3fbc06361f  Formatting  (Andrei Betlen, 2023-04-08 16:01:45 -04:00)
0067c1a588  Formatting  (Andrei Betlen, 2023-04-08 16:01:18 -04:00)
0a5c551371  Bump version  (Andrei Betlen, 2023-04-08 15:09:48 -04:00)
38f442deb0  Bugfix: Wrong size of embeddings. Closes #47  (Andrei Betlen, 2023-04-08 15:05:33 -04:00)
6d1bda443e  Add clients example. Closes #46  (Andrei Betlen, 2023-04-08 09:35:32 -04:00)
c940193e64  Bump version  (Andrei Betlen, 2023-04-08 03:13:39 -04:00)
edaaa1bd63  Only build wheels on workflow dispatch  (Andrei Betlen, 2023-04-08 03:11:25 -04:00)
ae3e9c3d6f  Update shared library extension for macos  (Andrei Betlen, 2023-04-08 02:45:21 -04:00)
6a143ac0db  Merge branch 'main' of github.com:abetlen/llama_cpp_python into main  (Andrei Betlen, 2023-04-08 02:40:42 -04:00)
e611cfc56d  Build shared library with make on unix platforms  (Andrei Betlen, 2023-04-08 02:39:17 -04:00)
a3f713039f  Update llama.cpp  (Andrei Betlen, 2023-04-08 02:38:42 -04:00)
41365b0456  Merge pull request #15 from SagsMug/main: llama.cpp chat example implementation  (Andrei, 2023-04-07 20:43:33 -04:00)
16fc5b5d23  More interoperability to the original llama.cpp, and arguments now work  (Mug, 2023-04-07 13:32:19 +02:00)
c3b1aa6ab7  Clone submodule  (Andrei Betlen, 2023-04-07 03:19:07 -04:00)
d4912a80da  Install build dependencies  (Andrei Betlen, 2023-04-07 03:18:56 -04:00)
d74800da52  Build wheels  (Andrei Betlen, 2023-04-07 03:14:38 -04:00)