Andrei Betlen | 99f064e681 | docker: Add libopenblas to simple image | 2023-07-09 01:36:39 -04:00
Andrei Betlen | 00da643929 | Update llama.cpp | 2023-07-08 20:30:34 -04:00
Andrei Betlen | 3c85c41573 | docker: update path to dockerfile | 2023-07-08 04:04:11 -04:00
Andrei Betlen | 1f5e748a7e | docker: fix docker build action args | 2023-07-08 04:00:43 -04:00
Andrei Betlen | 9e153fd11d | docker: update context path | 2023-07-08 03:44:51 -04:00
Andrei Betlen | 5b7d76608d | docker: add checkout action to dockerfile | 2023-07-08 03:43:17 -04:00
Andrei Betlen | 3a2635b9e1 | Update docker workflow for new simple image | 2023-07-08 03:37:28 -04:00
Andrei Betlen | 670fe4b701 | Update changelog | 2023-07-08 03:37:12 -04:00
Andrei | 24724202ee | Merge pull request #64 from jm12138/add_unlimited_max_tokens | 2023-07-08 02:38:06 -04:00
    Add unlimited max_tokens
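The merge above adds "unlimited" max_tokens support. As a hedged illustration of those semantics (a sketch with hypothetical names, not the actual llama-cpp-python implementation), a missing or non-positive max_tokens can be treated as "generate until EOS or until the context window is full":

```python
# Illustrative sketch of "unlimited max_tokens" semantics: a None or
# non-positive cap means generation is bounded only by the remaining
# context window. Function name and signature are hypothetical.

def effective_token_budget(max_tokens, n_ctx, n_prompt):
    """Return how many tokens generation may still produce.

    max_tokens: requested cap; None or <= 0 means unlimited.
    n_ctx: total context window size.
    n_prompt: tokens already consumed by the prompt.
    """
    remaining_ctx = max(n_ctx - n_prompt, 0)
    if max_tokens is None or max_tokens <= 0:
        return remaining_ctx          # unlimited: bounded only by context
    return min(max_tokens, remaining_ctx)

print(effective_token_budget(16, 2048, 10))    # capped by max_tokens: 16
print(effective_token_budget(None, 2048, 10))  # unlimited: 2038 left
print(effective_token_budget(-1, 2048, 2048))  # context already full: 0
```

Either way the generation loop still stops early when the model emits an EOS token; the budget only bounds the worst case.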
Andrei | 5d756de314 | Merge branch 'main' into add_unlimited_max_tokens | 2023-07-08 02:37:38 -04:00
Andrei | 236c4cf442 | Merge pull request #456 from AgentJ-WR/patch-1 | 2023-07-08 02:32:20 -04:00
    Show how to adjust context window in README.md
Andrei | 7952ca50c9 | Merge pull request #452 from audreyfeldroy/update-macos-metal-gpu-step-4 | 2023-07-08 02:32:09 -04:00
    Update macOS Metal GPU step 4
Andrei | b8e0bed295 | Merge pull request #453 from wu-qing-157/main | 2023-07-08 02:31:52 -04:00
    Fix incorrect token_logprobs (due to indexing after sorting)
Andrei Betlen | d6e6aad927 | bugfix: fix compatibility bug with openai api on last token | 2023-07-08 00:06:11 -04:00
Andrei Betlen | 4f2b5d0b53 | Format | 2023-07-08 00:05:10 -04:00
AgentJ-WR | ea4fbadab3 | Show how to adjust context window in README.md | 2023-07-07 23:24:57 -04:00
Andrei Betlen | 34c505edf2 | perf: convert pointer to byref | 2023-07-07 22:54:07 -04:00
Andrei Betlen | 52753b77f5 | Upgrade fastapi to 0.100.0 and pydantic v2 | 2023-07-07 21:38:46 -04:00
Andrei Betlen | 11eae75211 | perf: avoid allocating new buffers during sampling | 2023-07-07 19:28:53 -04:00
Andrei Betlen | 7887376bff | Update llama.cpp | 2023-07-07 19:06:54 -04:00
Andrei Betlen | a14d8a9b3f | perf: assign to candidates data structure instead | 2023-07-07 18:58:43 -04:00
wu-qing-157 | 9e61661518 | fix indexing token_logprobs after sorting | 2023-07-07 10:18:49 +00:00
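The token_logprobs fix above addresses a classic pitfall: once candidates are sorted by logprob, an index computed before the sort no longer points at the same token's entry. A minimal sketch of the pattern (hypothetical names, not the actual llama-cpp-python code):

```python
# Sketch of the indexing-after-sorting pitfall: after sorting candidate
# logprobs, a token's original position refers to a rank, not to that
# token. The safe lookup keeps (token_id, logprob) pairs together.

def token_logprob(logprobs, token_id):
    """Look up a token's logprob by token id, robust to sorting."""
    sorted_pairs = sorted(enumerate(logprobs), key=lambda p: p[1], reverse=True)
    # Wrong: sorted_pairs[token_id][1] reads whatever landed at that rank.
    # Right: search for the token id itself in the sorted pairs.
    return next(lp for tid, lp in sorted_pairs if tid == token_id)

logprobs = [-2.0, -0.1, -5.0]  # token 1 is the most likely
print(token_logprob(logprobs, 0))  # -2.0, regardless of sort order
print(token_logprob(logprobs, 2))  # -5.0
```

Sorting `enumerate(logprobs)` rather than the bare list is the whole fix: the token id travels with its value through the reordering.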
Audrey Roy Greenfeld | d270ec231a | Update macOS Metal GPU step 4 | 2023-07-07 11:15:04 +01:00
    * Update "today" to version 0.1.62
    * Fix numbering (there were 2 step 4's)
Andrei Betlen | ca11673061 | Add universal docker image | 2023-07-07 03:38:51 -04:00
Andrei Betlen | 57d8ec3899 | Add setting to control request interruption | 2023-07-07 03:37:23 -04:00
Andrei Betlen | cc542b4452 | Update llama.cpp | 2023-07-07 03:04:54 -04:00
Andrei Betlen | 4c7cdcca00 | Add interruptible streaming requests for llama-cpp-python server. Closes #183 | 2023-07-07 03:04:17 -04:00
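The interruptible-streaming commits above boil down to one idea: the token-producing loop must check a cancellation flag between tokens, so a disconnect or a newer request can end generation early. A hedged sketch of that shape (hypothetical helper, not the server's actual code):

```python
# Minimal sketch of an interruptible streaming loop: the generator
# checks a stop flag on every iteration instead of running to
# completion. Names are illustrative only.

import threading

def stream_tokens(tokens, stop_event):
    """Yield tokens until exhausted or stop_event is set."""
    for tok in tokens:
        if stop_event.is_set():
            break  # client disconnected or was preempted: stop generating
        yield tok

stop = threading.Event()
out = []
for i, tok in enumerate(stream_tokens(["a", "b", "c", "d"], stop)):
    out.append(tok)
    if i == 1:
        stop.set()  # simulate an interruption after two tokens
print(out)  # ['a', 'b']
```

In a real server the flag would be set from another thread or task when the connection drops; because generation dominates the loop's cost, a per-token check adds effectively no overhead.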
Andrei Betlen | 98ae4e58a3 | Update llama.cpp | 2023-07-06 17:57:56 -04:00
Andrei Betlen | a1b2d5c09b | Bump version | 2023-07-05 01:06:46 -04:00
Andrei Betlen | b994296c75 | Update llama.cpp | 2023-07-05 01:00:14 -04:00
Andrei | 058b134ab6 | Merge pull request #443 from abetlen/dependabot/pip/mkdocs-material-9.1.18 | 2023-07-05 00:40:46 -04:00
    Bump mkdocs-material from 9.1.17 to 9.1.18
dependabot[bot] | 9261a52916 | Bump mkdocs-material from 9.1.17 to 9.1.18 | 2023-07-04 21:21:09 +00:00
    Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.1.17 to 9.1.18.
    - [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
    - [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
    - [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.1.17...9.1.18)
    ---
    updated-dependencies:
    - dependency-name: mkdocs-material
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    Signed-off-by: dependabot[bot] <support@github.com>
Andrei | 5e0a6b664d | Merge pull request #442 from abetlen/dependabot/pip/typing-extensions-4.7.1 | 2023-07-04 17:19:36 -04:00
    Bump typing-extensions from 4.6.3 to 4.7.1
dependabot[bot] | f1b442337d | Bump typing-extensions from 4.6.3 to 4.7.1 | 2023-07-04 18:22:53 +00:00
    Bumps [typing-extensions](https://github.com/python/typing_extensions) from 4.6.3 to 4.7.1.
    - [Release notes](https://github.com/python/typing_extensions/releases)
    - [Changelog](https://github.com/python/typing_extensions/blob/main/CHANGELOG.md)
    - [Commits](https://github.com/python/typing_extensions/compare/4.6.3...4.7.1)
    ---
    updated-dependencies:
    - dependency-name: typing-extensions
      dependency-type: direct:production
      update-type: version-update:semver-minor
    ...
    Signed-off-by: dependabot[bot] <support@github.com>
Andrei | 2379ed5809 | Merge pull request #444 from abetlen/dependabot/pip/fastapi-0.99.1 | 2023-07-04 14:21:44 -04:00
    Bump fastapi from 0.98.0 to 0.99.1
Andrei | bf1dc6693b | Merge pull request #436 from mikeyang01/main | 2023-07-04 14:21:30 -04:00
    Update README.md
dependabot[bot] | fb02077e3f | Bump fastapi from 0.98.0 to 0.99.1 | 2023-07-03 20:55:32 +00:00
    Bumps [fastapi](https://github.com/tiangolo/fastapi) from 0.98.0 to 0.99.1.
    - [Release notes](https://github.com/tiangolo/fastapi/releases)
    - [Commits](https://github.com/tiangolo/fastapi/compare/0.98.0...0.99.1)
    ---
    updated-dependencies:
    - dependency-name: fastapi
      dependency-type: direct:production
      update-type: version-update:semver-minor
    ...
    Signed-off-by: dependabot[bot] <support@github.com>
Mike | c8d0647caa | Update README.md | 2023-06-30 16:42:13 +08:00
    prevent not found errors
Andrei | 28ec88cfb4 | Merge pull request #435 from vladkens/patch-1 | 2023-06-29 21:49:39 -04:00
    Update README.md
vladkens | 485eee7bef | Update README.md | 2023-06-30 00:48:21 +03:00
    Fix installation link in readme
Andrei Betlen | c67f786360 | Update llama.cpp | 2023-06-29 01:08:15 -04:00
Andrei Betlen | e34f4414cf | Hotfix: logits_all bug | 2023-06-29 00:57:27 -04:00
Andrei Betlen | 4d1eb88b13 | Bump version | 2023-06-29 00:46:15 -04:00
Andrei Betlen | a2ede37bd5 | Load logits directly into scores buffer | 2023-06-29 00:45:46 -04:00
Andrei Betlen | b95b0ffbeb | Use pre-allocated buffers to store input_ids and scores | 2023-06-29 00:40:47 -04:00
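The buffer commits above (b95b0ffbeb, a2ede37bd5, and the earlier "avoid allocating new buffers during sampling") share one technique: reserve storage once, sized to the context window, and write into it in place on every token instead of allocating fresh buffers. A plain-Python sketch of that idea (hypothetical class, not the library's actual implementation, which uses NumPy arrays):

```python
# Illustrative pre-allocation sketch: a fixed-size input_ids buffer is
# allocated once and filled in place, with a cursor tracking how much
# of it is valid. No per-token allocation occurs on the hot path.

class TokenBuffer:
    def __init__(self, n_ctx):
        self.input_ids = [0] * n_ctx  # allocated once, reused across calls
        self.n_tokens = 0             # how much of the buffer is valid

    def append(self, token_id):
        if self.n_tokens >= len(self.input_ids):
            raise IndexError("context window full")
        self.input_ids[self.n_tokens] = token_id  # in-place write, no realloc
        self.n_tokens += 1

    def tokens(self):
        return self.input_ids[: self.n_tokens]

buf = TokenBuffer(n_ctx=4)
for t in (11, 22, 33):
    buf.append(t)
print(buf.tokens())  # [11, 22, 33]
```

The payoff is on the sampling hot path: with one write per token into stable storage, the allocator and garbage collector stay out of the inner loop.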
Andrei Betlen | a5e059c053 | Free model when llama is unloaded. Closes #434 | 2023-06-28 23:58:55 -04:00
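Freeing the model on unload, as in the commit above, typically follows a standard wrapper pattern: release the native handle explicitly, keep the release idempotent, and add a destructor as a safety net. A sketch with hypothetical names (the real code releases llama.cpp's C-side model and context):

```python
# Sketch of releasing a native model handle on unload. ModelHandle
# stands in for the C-side resource; names are illustrative only.

class ModelHandle:
    def __init__(self):
        self.freed = False

    def free(self):
        self.freed = True  # stands in for freeing the C-side model

class ModelWrapper:
    def __init__(self, handle):
        self.handle = handle

    def close(self):
        if self.handle is not None:
            self.handle.free()  # release native memory exactly once
            self.handle = None  # make repeated close() calls harmless

    def __del__(self):
        self.close()  # safety net if the caller never closed explicitly

h = ModelHandle()
w = ModelWrapper(h)
w.close()
print(h.freed)  # True
```

Guarding on `self.handle is not None` matters: `__del__` may run after an explicit close, and the native free must never happen twice.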
Andrei Betlen | 442213b070 | Add stopping criteria and logits processor to docs | 2023-06-28 21:07:58 -04:00
Andrei Betlen | a3766591bb | Update docs | 2023-06-27 13:02:30 -04:00
Andrei Betlen | 530599a467 | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-06-27 12:45:33 -04:00
Andrei Betlen | dae983342a | Update docs | 2023-06-27 12:45:31 -04:00