Andrei Betlen
196650ccb2
Update model paths to make it clearer they should point to a file
2023-04-09 22:45:55 -04:00
Andrei Betlen
a79d3eb732
Fix workflow name
2023-04-09 22:38:19 -04:00
Andrei Betlen
fda975e5a9
Rename test publish
2023-04-09 22:34:17 -04:00
Andrei Betlen
baa394491c
Add PyPI publish workflow
2023-04-09 22:32:30 -04:00
Andrei Betlen
8c2bb3042f
Bump version
2023-04-09 22:12:23 -04:00
Andrei Betlen
c3c2623e8b
Update llama.cpp
2023-04-09 22:01:33 -04:00
Andrei Betlen
e636214b4e
Add test publish workflow
2023-04-08 19:57:37 -04:00
Andrei Betlen
314ce7d1cc
Fix cpu count default
2023-04-08 19:54:04 -04:00
Andrei Betlen
3fbc06361f
Formatting
2023-04-08 16:01:45 -04:00
Andrei Betlen
0067c1a588
Formatting
2023-04-08 16:01:18 -04:00
Andrei Betlen
0a5c551371
Bump version
2023-04-08 15:09:48 -04:00
Andrei Betlen
38f442deb0
Bugfix: Wrong size of embeddings. Closes #47
2023-04-08 15:05:33 -04:00
Andrei Betlen
6d1bda443e
Add clients example. Closes #46
2023-04-08 09:35:32 -04:00
Andrei Betlen
c940193e64
Bump version
2023-04-08 03:13:39 -04:00
Andrei Betlen
edaaa1bd63
Only build wheels on workflow dispatch
2023-04-08 03:11:25 -04:00
Andrei Betlen
ae3e9c3d6f
Update shared library extension for macOS
2023-04-08 02:45:21 -04:00
Andrei Betlen
6a143ac0db
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-04-08 02:40:42 -04:00
Andrei Betlen
e611cfc56d
Build shared library with make on unix platforms
2023-04-08 02:39:17 -04:00
Andrei Betlen
a3f713039f
Update llama.cpp
2023-04-08 02:38:42 -04:00
Andrei
41365b0456
Merge pull request #15 from SagsMug/main
llama.cpp chat example implementation
2023-04-07 20:43:33 -04:00
Mug
16fc5b5d23
More interoperability with the original llama.cpp, and arguments now work
2023-04-07 13:32:19 +02:00
Andrei Betlen
c3b1aa6ab7
Clone submodule
2023-04-07 03:19:07 -04:00
Andrei Betlen
d4912a80da
Install build dependencies
2023-04-07 03:18:56 -04:00
Andrei Betlen
d74800da52
Build wheels
2023-04-07 03:14:38 -04:00
Andrei Betlen
0fd32046cb
Bump version
2023-04-06 22:48:54 -04:00
Andrei Betlen
88c23d04a8
Fix Windows DLL location issue
2023-04-06 22:44:31 -04:00
Andrei Betlen
241722c981
Quote destination
2023-04-06 22:38:53 -04:00
Andrei Betlen
d75196d7a1
Install with pip during build step
Use setup.py install
Upgrade version of setuptools
Revert to develop
Use setup.py build and pip install
Just use pip install
Use correct name in pyproject.toml
Make pip install verbose
2023-04-06 22:21:45 -04:00
Andrei Betlen
dd1c298620
Fix typo
2023-04-06 21:28:03 -04:00
Andrei Betlen
baa825dacb
Add Windows and macOS runners
2023-04-06 21:27:01 -04:00
Andrei Betlen
da539cc2ee
Safer calculation of default n_threads
2023-04-06 21:22:19 -04:00
Andrei Betlen
9b7526895d
Bump version
2023-04-06 21:19:08 -04:00
Andrei Betlen
7851cc1e3c
Don't install pydantic by default
2023-04-06 21:10:34 -04:00
Andrei Betlen
09707f5b2a
Remove console script
2023-04-06 21:08:32 -04:00
Andrei Betlen
930db37dd2
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-04-06 21:07:38 -04:00
Andrei Betlen
55279b679d
Handle prompt list
2023-04-06 21:07:35 -04:00
Andrei
c2e690b326
Merge pull request #29 from MillionthOdin16/main
Fixes and Tweaks to Defaults
2023-04-06 21:06:31 -04:00
Mug
10c7571117
Fixed too many newlines; now onto args.
Still needs shipping work so you could do "python -m llama_cpp.examples." etc.
2023-04-06 15:33:22 +02:00
Mug
085cc92b1f
Better llama.cpp interoperability
Still has some issues with too many newlines, so WIP
2023-04-06 15:30:57 +02:00
MillionthOdin16
2e91affea2
Ignore .idea folder
2023-04-05 18:23:17 -04:00
MillionthOdin16
c283edd7f2
Set n_batch to default value and reduce thread count:
Change batch size to the llama.cpp default of 8. I've seen issues in llama.cpp where batch size affects the quality of generations. (It shouldn't.) But in case that's still an issue, I changed it to the default.
Set the auto-determined number of threads to 1/2 the system count. ggml will sometimes lock cores at 100% while doing nothing. This is being addressed, but it can cause a bad experience for the user if cores are pegged at 100%.
2023-04-05 18:17:29 -04:00
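(A minimal sketch of the half-the-cores heuristic described in this commit, in Python; the variable name n_threads and the max(..., 1) floor are assumptions for illustration, not necessarily the exact code committed:)

    import multiprocessing

    # Default to half the logical cores, but never fewer than one,
    # so ggml doesn't peg every core at 100% while idling.
    n_threads = max(multiprocessing.cpu_count() // 2, 1)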
MillionthOdin16
b9b6dfd23f
Merge remote-tracking branch 'origin/main'
2023-04-05 17:51:43 -04:00
MillionthOdin16
76a82babef
Set n_batch to the default value of 8. I think this is leftover from when n_ctx was missing and n_batch was 2048.
2023-04-05 17:44:53 -04:00
Andrei Betlen
38f7dea6ca
Update README and docs
2023-04-05 17:44:25 -04:00
MillionthOdin16
1e90597983
Add pydantic dep. Errors if pydantic isn't present. Also throws errors relating to TypedDict or subclass() if the version is too old or too new...
2023-04-05 17:37:06 -04:00
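(One common way to sidestep that kind of TypedDict version sensitivity is to fall back to the typing_extensions backport; a hedged sketch, not necessarily what this commit implemented, with a hypothetical example type:)

    # Prefer the stdlib TypedDict (Python 3.8+); otherwise use the
    # typing_extensions backport.
    try:
        from typing import TypedDict
    except ImportError:
        from typing_extensions import TypedDict

    class Usage(TypedDict):  # hypothetical example type
        prompt_tokens: int
        completion_tokens: int
        total_tokens: int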
Andrei Betlen
267d3648fc
Bump version
2023-04-05 16:26:22 -04:00
Andrei Betlen
74bf043ddd
Update llama.cpp
2023-04-05 16:25:54 -04:00
Andrei Betlen
44448fb3a8
Add server as a subpackage
2023-04-05 16:23:25 -04:00
Andrei Betlen
e1b5b9bb04
Update fastapi server example
2023-04-05 14:44:26 -04:00
Mug
283e59c5e9
Fix bug where init_break was not being set when exiting via antiprompt, among others.
2023-04-05 14:47:24 +02:00