Andrei Betlen
6a143ac0db
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-04-08 02:40:42 -04:00
Andrei Betlen
e611cfc56d
Build shared library with make on unix platforms
2023-04-08 02:39:17 -04:00
Andrei Betlen
a3f713039f
Update llama.cpp
2023-04-08 02:38:42 -04:00
Andrei
41365b0456
Merge pull request #15 from SagsMug/main
llama.cpp chat example implementation
2023-04-07 20:43:33 -04:00
Mug
16fc5b5d23
More interoperability with the original llama.cpp; arguments now work
2023-04-07 13:32:19 +02:00
Andrei Betlen
c3b1aa6ab7
Clone submodule
2023-04-07 03:19:07 -04:00
Andrei Betlen
d4912a80da
Install build dependencies
2023-04-07 03:18:56 -04:00
Andrei Betlen
d74800da52
Build wheels
2023-04-07 03:14:38 -04:00
Andrei Betlen
0fd32046cb
Bump version
2023-04-06 22:48:54 -04:00
Andrei Betlen
88c23d04a8
Fix Windows DLL location issue
2023-04-06 22:44:31 -04:00
Andrei Betlen
241722c981
Quote destination
2023-04-06 22:38:53 -04:00
Andrei Betlen
d75196d7a1
Install with pip during build step
Use setup.py install
Upgrade version of setuptools
Revert to develop
Use setup.py build and pip install
Just use pip install
Use correct name in pyproject.toml
Make pip install verbose
2023-04-06 22:21:45 -04:00
Andrei Betlen
dd1c298620
Fix typo
2023-04-06 21:28:03 -04:00
Andrei Betlen
baa825dacb
Add Windows and macOS runners
2023-04-06 21:27:01 -04:00
Andrei Betlen
da539cc2ee
Safer calculation of default n_threads
2023-04-06 21:22:19 -04:00
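The "safer calculation of default n_threads" above could look like the following minimal sketch, assuming it guards against platforms where the core count cannot be determined (the function name is hypothetical, not the project's actual code):

```python
import os

def default_n_threads() -> int:
    # os.cpu_count() can return None on some platforms; fall back to 1.
    cpus = os.cpu_count() or 1
    # Use half the logical cores, but never fewer than one thread.
    return max(cpus // 2, 1)
```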
Andrei Betlen
9b7526895d
Bump version
2023-04-06 21:19:08 -04:00
Andrei Betlen
7851cc1e3c
Don't install pydantic by default
2023-04-06 21:10:34 -04:00
Andrei Betlen
09707f5b2a
Remove console script
2023-04-06 21:08:32 -04:00
Andrei Betlen
930db37dd2
Merge branch 'main' of github.com:abetlen/llama_cpp_python into main
2023-04-06 21:07:38 -04:00
Andrei Betlen
55279b679d
Handle prompt list
2023-04-06 21:07:35 -04:00
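"Handle prompt list" suggests accepting either a single prompt string or a list of prompts. A minimal sketch of that normalization, assuming the helper name is hypothetical and not the project's actual API:

```python
from typing import List, Union

def normalize_prompts(prompt: Union[str, List[str]]) -> List[str]:
    # Accept a single prompt string or a list of prompts and always
    # return a list, so downstream code handles one shape only.
    if isinstance(prompt, str):
        return [prompt]
    return list(prompt)
```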
Andrei
c2e690b326
Merge pull request #29 from MillionthOdin16/main
Fixes and Tweaks to Defaults
2023-04-06 21:06:31 -04:00
Mug
10c7571117
Fixed the excess newlines; now on to args.
Still needs packaging work so that you can run "python -m llama_cpp.examples." etc.
2023-04-06 15:33:22 +02:00
Mug
085cc92b1f
Better llama.cpp interoperability
Still has some excess-newline issues, so WIP
2023-04-06 15:30:57 +02:00
MillionthOdin16
2e91affea2
Ignore ./idea folder
2023-04-05 18:23:17 -04:00
MillionthOdin16
c283edd7f2
Set n_batch to the default value and reduce thread count:
Change batch size to the llama.cpp default of 8. I've seen issues in llama.cpp where batch size affects generation quality (it shouldn't), so I changed it back to the default in case that's still a factor.
Set the auto-determined number of threads to half the system core count. ggml will sometimes peg cores at 100% while doing nothing; this is being addressed, but in the meantime cores stuck at 100% make for a bad user experience.
2023-04-05 18:17:29 -04:00
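The batch-size default described above could be sketched as follows (the `make_params` helper is hypothetical, for illustration only; n_batch follows llama.cpp's default of 8 rather than a large value tied to the context size):

```python
def make_params(n_ctx: int = 512, n_batch: int = 8) -> dict:
    # n_batch=8 mirrors the llama.cpp default mentioned in the commit.
    return {"n_ctx": n_ctx, "n_batch": n_batch}

params = make_params()
```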
MillionthOdin16
b9b6dfd23f
Merge remote-tracking branch 'origin/main'
2023-04-05 17:51:43 -04:00
MillionthOdin16
76a82babef
Set n_batch to the default value of 8. I think this is leftover from when n_ctx was missing and n_batch was 2048.
2023-04-05 17:44:53 -04:00
Andrei Betlen
38f7dea6ca
Update README and docs
2023-04-05 17:44:25 -04:00
MillionthOdin16
1e90597983
Add pydantic dep. Errors occur if pydantic isn't present. It also throws errors relating to TypedDict or subclass() if the version is too old or too new...
2023-04-05 17:37:06 -04:00
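The TypedDict version issues mentioned above are commonly worked around with a compatibility import. A hedged sketch of that pattern (an assumption about the approach, not the project's actual code; the field names are invented for illustration):

```python
import sys

# TypedDict lives in typing from Python 3.8 on; older interpreters
# need the typing_extensions backport.
if sys.version_info >= (3, 8):
    from typing import TypedDict
else:
    from typing_extensions import TypedDict

class CompletionChoice(TypedDict):
    # Hypothetical fields, for illustration only.
    text: str
    index: int

choice = CompletionChoice(text="hi", index=0)
```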
Andrei Betlen
267d3648fc
Bump version
2023-04-05 16:26:22 -04:00
Andrei Betlen
74bf043ddd
Update llama.cpp
2023-04-05 16:25:54 -04:00
Andrei Betlen
44448fb3a8
Add server as a subpackage
2023-04-05 16:23:25 -04:00
Andrei Betlen
e1b5b9bb04
Update fastapi server example
2023-04-05 14:44:26 -04:00
Mug
283e59c5e9
Fix bug where init_break was not set when exiting via antiprompt, among other cases.
2023-04-05 14:47:24 +02:00
Mug
99ceecfccd
Move to new examples directory
2023-04-05 14:28:02 +02:00
Mug
e3ea354547
Allow local llama library usage
2023-04-05 14:23:01 +02:00
Mug
e4c6f34d95
Merge branch 'main' of https://github.com/abetlen/llama-cpp-python
2023-04-05 14:18:27 +02:00
Andrei Betlen
6de2f24aca
Bump version
2023-04-05 06:53:43 -04:00
Andrei Betlen
e96a5c5722
Make Llama instance pickleable. Closes #27
2023-04-05 06:52:17 -04:00
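Making the Llama instance pickleable (for #27 above) typically means excluding the native llama.cpp context handle from the pickled state and rebuilding it on load, since C-level handles cannot be serialized. A hypothetical sketch of that pattern, not the project's actual implementation:

```python
import pickle

class Model:
    def __init__(self, model_path: str, n_ctx: int = 512):
        self.model_path = model_path
        self.n_ctx = n_ctx
        # Stand-in for the unpicklable native context handle.
        self._ctx = object()

    def __getstate__(self):
        # Pickle only the constructor arguments, not the native handle.
        return {"model_path": self.model_path, "n_ctx": self.n_ctx}

    def __setstate__(self, state):
        # Rebuild the native context from the saved arguments.
        self.__init__(**state)

m = pickle.loads(pickle.dumps(Model("model.bin", n_ctx=1024)))
```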
Andrei Betlen
152e4695c3
Bump version
2023-04-05 04:43:51 -04:00
Andrei Betlen
c177c807e5
Add supported python versions
2023-04-05 04:43:19 -04:00
Andrei Betlen
17fdd1547c
Update workflow name and add badge to README
2023-04-05 04:41:24 -04:00
Andrei Betlen
7643f6677d
Bugfix for Python 3.7
2023-04-05 04:37:33 -04:00
Andrei Betlen
4d015c33bd
Fix syntax error
2023-04-05 04:35:15 -04:00
Andrei Betlen
47570df17b
Checkout submodules
2023-04-05 04:34:19 -04:00
Andrei Betlen
e3f999e732
Add missing scikit-build install
2023-04-05 04:31:38 -04:00
Andrei Betlen
43c20d3282
Add initial github action to run automated tests
2023-04-05 04:30:32 -04:00
Andrei Betlen
b1babcf56c
Add quantize example
2023-04-05 04:17:26 -04:00
Andrei Betlen
c8e13a78d0
Re-organize examples folder
2023-04-05 04:10:13 -04:00
Andrei Betlen
c16bda5fb9
Add performance tuning notebook
2023-04-05 04:09:19 -04:00