Andrei Betlen | 895f84f8fa | Add ROCm / AMD instructions to docs | 2023-08-25 17:19:23 -04:00
Andrei Betlen | ac47d55577 | Merge branch 'main' into v0.2-wip | 2023-08-25 15:45:22 -04:00
Andrei Betlen | 3f8bc417d7 | Bump version | 2023-08-25 15:18:15 -04:00
Andrei | 915bbeacc5 | Merge pull request #633 from abetlen/gguf | 2023-08-25 15:13:12 -04:00
    GGUF (Breaking Change to Model Files)
Andrei Betlen | ac37ea562b | Add temporary docs for GGUF model conversion | 2023-08-25 15:11:08 -04:00
Andrei Betlen | ef23d1e545 | Update llama.cpp | 2023-08-25 14:35:53 -04:00
Andrei Betlen | c8a7637978 | Ignore vendor directory for tests | 2023-08-25 14:35:27 -04:00
Andrei Betlen | 48cf43b427 | Use _with_model variants for tokenization | 2023-08-25 13:43:16 -04:00
Andrei Betlen | 80389f71da | Update README | 2023-08-25 05:02:48 -04:00
Andrei Betlen | 8ac59465b9 | Strip leading space when de-tokenizing. | 2023-08-25 04:56:48 -04:00
Andrei Betlen | c2d1deaa8a | Update llama.cpp | 2023-08-24 18:01:42 -04:00
Andrei Betlen | 3674e5ed4e | Update model path | 2023-08-24 01:01:20 -04:00
Andrei Betlen | db982a861f | Fix | 2023-08-24 01:01:12 -04:00
Andrei Betlen | 4ed632c4b3 | Remove deprecated params | 2023-08-24 01:01:05 -04:00
Andrei Betlen | cf405f6764 | Merge branch 'main' into v0.2-wip | 2023-08-24 00:30:51 -04:00
Andrei Betlen | bbbf0f4fc4 | Update llama.cpp | 2023-08-24 00:17:00 -04:00
Andrei | d644199fe8 | Merge pull request #625 from abetlen/dependabot/pip/mkdocs-material-9.2.0 | 2023-08-22 15:53:57 -04:00
    Bump mkdocs-material from 9.1.21 to 9.2.0
dependabot[bot] | abca3d81c8 | Bump mkdocs-material from 9.1.21 to 9.2.0 | 2023-08-21 20:53:56 +00:00
    Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.1.21 to 9.2.0.
    - [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
    - [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
    - [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.1.21...9.2.0)
    ---
    updated-dependencies:
    - dependency-name: mkdocs-material
      dependency-type: direct:production
      update-type: version-update:semver-minor
    ...
    Signed-off-by: dependabot[bot] <support@github.com>
Andrei | 2a0844b190 | Merge pull request #556 from Huge/patch-1 | 2023-08-17 23:21:45 -04:00
    Fix dev setup in README.md so that everyone can run it
Andrei | 876f39d751 | Merge pull request #564 from Isydmr/main | 2023-08-17 23:21:27 -04:00
    Docker improvements
Andrei | 4cf0461f97 | Merge pull request #598 from pradhyumna85/main | 2023-08-17 23:20:43 -04:00
    Fixed Cuda Dockerfile
Andrei Betlen | 8fc3fa9f1c | Bump version | 2023-08-17 23:17:56 -04:00
Andrei Betlen | da1ef72c51 | Update llama.cpp | 2023-08-17 23:02:20 -04:00
Andrei Betlen | e632c59fa0 | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-08-17 20:53:04 -04:00
Andrei | 7ac73b8d94 | Merge pull request #621 from c0sogi/main | 2023-08-17 20:52:48 -04:00
    Fix typos in llama_grammar
c0sogi | a240aa6b25 | Fix typos in llama_grammar | 2023-08-17 21:00:44 +09:00
Andrei Betlen | 620cd2fd69 | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-08-14 22:41:47 -04:00
Andrei Betlen | 5788f1f2b2 | Remove unnused import | 2023-08-14 22:41:37 -04:00
Andrei | 6dfb98117e | Merge pull request #600 from Vuizur/main | 2023-08-14 22:40:41 -04:00
    Add py.typed to conform with PEP 561
Andrei | b99e758045 | Merge pull request #604 from aliencaocao/main-1 | 2023-08-14 22:40:10 -04:00
    Add doc string for n_gpu_layers argument and make -1 offload all layers
Andrei Betlen | b345d60987 | Update llama.cpp | 2023-08-14 22:33:30 -04:00
Andrei | 91e86e5d71 | Merge pull request #610 from abetlen/dependabot/pip/fastapi-0.101.1 | 2023-08-14 22:22:56 -04:00
    Bump fastapi from 0.101.0 to 0.101.1
Andrei | c48b18b364 | Merge pull request #611 from abetlen/dependabot/pip/sse-starlette-1.6.5 | 2023-08-14 22:22:51 -04:00
    Bump sse-starlette from 1.6.1 to 1.6.5
Andrei | 07e23f46a5 | Merge pull request #612 from abetlen/dependabot/pip/pydantic-settings-2.0.3 | 2023-08-14 22:22:44 -04:00
    Bump pydantic-settings from 2.0.2 to 2.0.3
dependabot[bot] | 485ad97909 | Bump pydantic-settings from 2.0.2 to 2.0.3 | 2023-08-14 20:40:16 +00:00
    Bumps [pydantic-settings](https://github.com/pydantic/pydantic-settings) from 2.0.2 to 2.0.3.
    - [Release notes](https://github.com/pydantic/pydantic-settings/releases)
    - [Commits](https://github.com/pydantic/pydantic-settings/compare/v2.0.2...v2.0.3)
    ---
    updated-dependencies:
    - dependency-name: pydantic-settings
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] | e91969c888 | Bump sse-starlette from 1.6.1 to 1.6.5 | 2023-08-14 20:38:42 +00:00
    Bumps [sse-starlette](https://github.com/sysid/sse-starlette) from 1.6.1 to 1.6.5.
    - [Release notes](https://github.com/sysid/sse-starlette/releases)
    - [Changelog](https://github.com/sysid/sse-starlette/blob/main/CHANGELOG.md)
    - [Commits](https://github.com/sysid/sse-starlette/compare/v1.6.1...v1.6.5)
    ---
    updated-dependencies:
    - dependency-name: sse-starlette
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] | 077f8ed23e | Bump fastapi from 0.101.0 to 0.101.1 | 2023-08-14 20:36:56 +00:00
    Bumps [fastapi](https://github.com/tiangolo/fastapi) from 0.101.0 to 0.101.1.
    - [Release notes](https://github.com/tiangolo/fastapi/releases)
    - [Commits](https://github.com/tiangolo/fastapi/compare/0.101.0...0.101.1)
    ---
    updated-dependencies:
    - dependency-name: fastapi
      dependency-type: direct:production
      update-type: version-update:semver-patch
    ...
    Signed-off-by: dependabot[bot] <support@github.com>
Billy Cao | c471871d0b | make n_gpu_layers=-1 offload all layers | 2023-08-13 11:21:28 +08:00
Billy Cao | d018c7b01d | Add doc string for n_gpu_layers argument | 2023-08-12 18:41:47 +08:00
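The two commits above change the semantics of `n_gpu_layers` so that `-1` offloads every layer to the GPU instead of being clamped to zero. The idea can be sketched as follows (a hypothetical `resolve_gpu_layers` helper for illustration only, not the library's actual code):

```python
def resolve_gpu_layers(n_gpu_layers: int, model_layer_count: int) -> int:
    """Map an n_gpu_layers setting to a concrete layer count.

    Any negative value (the -1 sentinel) means "offload every layer";
    otherwise the request is clamped to the number of layers the model
    actually has.
    """
    if n_gpu_layers < 0:
        return model_layer_count
    return min(n_gpu_layers, model_layer_count)


print(resolve_gpu_layers(-1, 32))  # -1 sentinel: all 32 layers offloaded
print(resolve_gpu_layers(20, 32))  # explicit request honored: 20
print(resolve_gpu_layers(99, 32))  # over-request clamped to 32
```

The sentinel spares callers from having to know the model's layer count up front, which varies between model sizes.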
Hannes Krumbiegel | 17dd7fa8e0 | Add py.typed | 2023-08-11 09:58:48 +02:00
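The `py.typed` commit implements PEP 561, which marks a package as shipping inline type information: an empty `py.typed` marker file is placed next to the package's modules and must be included in the built distribution so type checkers pick it up after install. A minimal packaging sketch (the package name is real; the exact packaging layout here is assumed, not taken from this repo):

```python
# setup.py fragment (sketch): ship the empty py.typed marker so type
# checkers such as mypy treat the installed llama_cpp package as typed
from setuptools import setup

setup(
    name="llama_cpp_python",
    packages=["llama_cpp"],
    # package_data ensures the marker file is copied into the wheel/sdist
    package_data={"llama_cpp": ["py.typed"]},
)
```

Without the `package_data` entry the marker would exist in the source tree but be silently dropped from built distributions, defeating the purpose of the change.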
Pradyumna Singh Rathore | d010ea70d2 | Fixed Cuda Dockerfile | 2023-08-10 20:41:34 +05:30
    Previously models produced garbage output when running on GPU with layers offloaded.
    Similar to related fix on another repo: 331326a0e3
MeouSker77 | 88184ed217 | fix CJK output again | 2023-08-09 22:04:35 +08:00
Andrei Betlen | 66fb0345e8 | Move grammar to function call argument | 2023-08-08 15:08:54 -04:00
Andrei Betlen | 1e844d3238 | fix | 2023-08-08 15:07:28 -04:00
Andrei Betlen | 843b7ccd90 | Merge branch 'main' into c0sogi/main | 2023-08-08 14:43:02 -04:00
Andrei Betlen | bf0c603c51 | Merge branch 'main' into fix-on-m1 | 2023-08-08 14:38:35 -04:00
Andrei Betlen | 36041c8bec | Merge branch 'main' of github.com:abetlen/llama_cpp_python into main | 2023-08-08 14:35:10 -04:00
Andrei Betlen | d015bdb4f8 | Add mul_mat_q option | 2023-08-08 14:35:06 -04:00
Andrei | dcc26f7f78 | Merge pull request #573 from mzen17/spelling-error-patch | 2023-08-08 14:32:34 -04:00
    Fix typo "lowe-level API" to "low-level API" in the README
Andrei Betlen | f6a7850e1a | Update llama.cpp | 2023-08-08 14:30:58 -04:00