Commit graph

1668 commits

Author SHA1 Message Date
Andrei Betlen
cf405f6764 Merge branch 'main' into v0.2-wip 2023-08-24 00:30:51 -04:00
Andrei Betlen
bbbf0f4fc4 Update llama.cpp 2023-08-24 00:17:00 -04:00
Andrei
d644199fe8 Merge pull request #625 from abetlen/dependabot/pip/mkdocs-material-9.2.0 (Bump mkdocs-material from 9.1.21 to 9.2.0) 2023-08-22 15:53:57 -04:00
dependabot[bot]
abca3d81c8 Bump mkdocs-material from 9.1.21 to 9.2.0
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.1.21 to 9.2.0.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.1.21...9.2.0)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-21 20:53:56 +00:00
Andrei
2a0844b190 Merge pull request #556 from Huge/patch-1 (Fix dev setup in README.md so that everyone can run it) 2023-08-17 23:21:45 -04:00
Andrei
876f39d751 Merge pull request #564 from Isydmr/main (Docker improvements) 2023-08-17 23:21:27 -04:00
Andrei
4cf0461f97 Merge pull request #598 from pradhyumna85/main (Fixed Cuda Dockerfile) 2023-08-17 23:20:43 -04:00
Andrei Betlen
8fc3fa9f1c Bump version 2023-08-17 23:17:56 -04:00
Andrei Betlen
da1ef72c51 Update llama.cpp 2023-08-17 23:02:20 -04:00
Andrei Betlen
e632c59fa0 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-08-17 20:53:04 -04:00
Andrei
7ac73b8d94 Merge pull request #621 from c0sogi/main (Fix typos in llama_grammar) 2023-08-17 20:52:48 -04:00
c0sogi
a240aa6b25 Fix typos in llama_grammar 2023-08-17 21:00:44 +09:00
Andrei Betlen
620cd2fd69 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-08-14 22:41:47 -04:00
Andrei Betlen
5788f1f2b2 Remove unused import 2023-08-14 22:41:37 -04:00
Andrei
6dfb98117e Merge pull request #600 from Vuizur/main (Add py.typed to conform with PEP 561) 2023-08-14 22:40:41 -04:00
Andrei
b99e758045 Merge pull request #604 from aliencaocao/main-1 (Add doc string for n_gpu_layers argument and make -1 offload all layers) 2023-08-14 22:40:10 -04:00
Andrei Betlen
b345d60987 Update llama.cpp 2023-08-14 22:33:30 -04:00
Andrei
91e86e5d71 Merge pull request #610 from abetlen/dependabot/pip/fastapi-0.101.1 (Bump fastapi from 0.101.0 to 0.101.1) 2023-08-14 22:22:56 -04:00
Andrei
c48b18b364 Merge pull request #611 from abetlen/dependabot/pip/sse-starlette-1.6.5 (Bump sse-starlette from 1.6.1 to 1.6.5) 2023-08-14 22:22:51 -04:00
Andrei
07e23f46a5 Merge pull request #612 from abetlen/dependabot/pip/pydantic-settings-2.0.3 (Bump pydantic-settings from 2.0.2 to 2.0.3) 2023-08-14 22:22:44 -04:00
dependabot[bot]
485ad97909 Bump pydantic-settings from 2.0.2 to 2.0.3
Bumps [pydantic-settings](https://github.com/pydantic/pydantic-settings) from 2.0.2 to 2.0.3.
- [Release notes](https://github.com/pydantic/pydantic-settings/releases)
- [Commits](https://github.com/pydantic/pydantic-settings/compare/v2.0.2...v2.0.3)

---
updated-dependencies:
- dependency-name: pydantic-settings
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-14 20:40:16 +00:00
dependabot[bot]
e91969c888 Bump sse-starlette from 1.6.1 to 1.6.5
Bumps [sse-starlette](https://github.com/sysid/sse-starlette) from 1.6.1 to 1.6.5.
- [Release notes](https://github.com/sysid/sse-starlette/releases)
- [Changelog](https://github.com/sysid/sse-starlette/blob/main/CHANGELOG.md)
- [Commits](https://github.com/sysid/sse-starlette/compare/v1.6.1...v1.6.5)

---
updated-dependencies:
- dependency-name: sse-starlette
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-14 20:38:42 +00:00
dependabot[bot]
077f8ed23e Bump fastapi from 0.101.0 to 0.101.1
Bumps [fastapi](https://github.com/tiangolo/fastapi) from 0.101.0 to 0.101.1.
- [Release notes](https://github.com/tiangolo/fastapi/releases)
- [Commits](https://github.com/tiangolo/fastapi/compare/0.101.0...0.101.1)

---
updated-dependencies:
- dependency-name: fastapi
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-14 20:36:56 +00:00
Billy Cao
c471871d0b make n_gpu_layers=-1 offload all layers 2023-08-13 11:21:28 +08:00
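The `-1` convention this commit introduces (a negative value offloads every layer) can be sketched in isolation. `resolve_n_gpu_layers` is a hypothetical helper illustrating the behavior, not code from the repository:

```python
def resolve_n_gpu_layers(n_gpu_layers: int, model_layer_count: int) -> int:
    """Resolve the user-facing n_gpu_layers value to a concrete layer count.

    A negative value (the commit uses -1) means "offload all layers";
    otherwise the request is clamped to the number of layers the model has.
    """
    if n_gpu_layers < 0:
        return model_layer_count
    return min(n_gpu_layers, model_layer_count)
```

This keeps the public parameter a single integer while still letting callers say "everything on the GPU" without knowing the model's layer count.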
Billy Cao
d018c7b01d Add doc string for n_gpu_layers argument 2023-08-12 18:41:47 +08:00
Hannes Krumbiegel
17dd7fa8e0 Add py.typed 2023-08-11 09:58:48 +02:00
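PEP 561 only requires an empty `py.typed` marker file shipped inside the package so that type checkers read the package's inline annotations. A minimal setuptools sketch of how such a marker is typically included (paths assumed for illustration, not taken from this repository's build config):

```toml
# pyproject.toml fragment: include the PEP 561 marker in wheels so
# type checkers such as mypy pick up the package's inline type hints.
[tool.setuptools.package-data]
llama_cpp = ["py.typed"]
```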
Pradyumna Singh Rathore
d010ea70d2 Fixed Cuda Dockerfile
Previously models produced garbage output when running on GPU with layers offloaded.

Similar to related fix on another repo: 331326a0e3
2023-08-10 20:41:34 +05:30
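The garbage-output symptom is characteristic of a build without working CUDA kernels. A hypothetical Dockerfile sketch of a CUDA-enabled build from that era (base image and `LLAMA_CUBLAS` flag are assumptions for illustration, not this repository's actual Dockerfile):

```dockerfile
# Sketch: build llama-cpp-python against cuBLAS inside a CUDA devel image
# so that layers offloaded with n_gpu_layers compute correctly on the GPU.
FROM nvidia/cuda:12.1.1-devel-ubuntu22.04
RUN apt-get update && apt-get install -y python3 python3-pip
ENV CMAKE_ARGS="-DLLAMA_CUBLAS=on"
RUN pip3 install --no-cache-dir llama-cpp-python
```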
MeouSker77
88184ed217 fix CJK output again 2023-08-09 22:04:35 +08:00
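The recurring CJK bug comes from multi-byte UTF-8 characters being split across streamed tokens: decoding each chunk independently corrupts them, while an incremental decoder buffers the partial bytes until the character completes. A standalone illustration of the idea (not the repository's code):

```python
import codecs

# "你" and "好" are three UTF-8 bytes each; splitting after four bytes
# cuts the second character in half, as a byte-level tokenizer can.
data = "你好".encode("utf-8")
chunks = [data[:4], data[4:]]

# Naive per-chunk decoding mangles the split character.
naive = "".join(chunk.decode("utf-8", errors="replace") for chunk in chunks)

# An incremental decoder holds the dangling bytes until they complete.
decoder = codecs.getincrementaldecoder("utf-8")()
streamed = "".join(decoder.decode(chunk) for chunk in chunks)
```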
Andrei Betlen
66fb0345e8 Move grammar to function call argument 2023-08-08 15:08:54 -04:00
Andrei Betlen
1e844d3238 fix 2023-08-08 15:07:28 -04:00
Andrei Betlen
843b7ccd90 Merge branch 'main' into c0sogi/main 2023-08-08 14:43:02 -04:00
Andrei Betlen
bf0c603c51 Merge branch 'main' into fix-on-m1 2023-08-08 14:38:35 -04:00
Andrei Betlen
36041c8bec Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-08-08 14:35:10 -04:00
Andrei Betlen
d015bdb4f8 Add mul_mat_q option 2023-08-08 14:35:06 -04:00
Andrei
dcc26f7f78 Merge pull request #573 from mzen17/spelling-error-patch (Fix typo "lowe-level API" to "low-level API" in the README) 2023-08-08 14:32:34 -04:00
Andrei Betlen
f6a7850e1a Update llama.cpp 2023-08-08 14:30:58 -04:00
Andrei
1dd4774ca9 Merge pull request #583 from abetlen/dependabot/pip/mkdocs-1.5.2 (Bump mkdocs from 1.5.1 to 1.5.2) 2023-08-08 14:24:40 -04:00
Andrei
03e575f6a5 Merge pull request #584 from abetlen/dependabot/pip/fastapi-0.101.0 (Bump fastapi from 0.100.1 to 0.101.0) 2023-08-08 14:24:31 -04:00
dependabot[bot]
83f8438c4f Bump fastapi from 0.100.1 to 0.101.0
Bumps [fastapi](https://github.com/tiangolo/fastapi) from 0.100.1 to 0.101.0.
- [Release notes](https://github.com/tiangolo/fastapi/releases)
- [Commits](https://github.com/tiangolo/fastapi/compare/0.100.1...0.101.0)

---
updated-dependencies:
- dependency-name: fastapi
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-07 20:10:12 +00:00
dependabot[bot]
4cf2fc7d3d Bump mkdocs from 1.5.1 to 1.5.2
Bumps [mkdocs](https://github.com/mkdocs/mkdocs) from 1.5.1 to 1.5.2.
- [Release notes](https://github.com/mkdocs/mkdocs/releases)
- [Commits](https://github.com/mkdocs/mkdocs/compare/1.5.1...1.5.2)

---
updated-dependencies:
- dependency-name: mkdocs
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-07 20:09:55 +00:00
c0sogi
0d7d2031a9 prevent memory access error by llama_grammar_free 2023-08-07 17:02:33 +09:00
c0sogi
b07713cb9f reset grammar for every generation 2023-08-07 15:16:25 +09:00
c0sogi
418aa83b01 Added grammar based sampling 2023-08-07 02:21:37 +09:00
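Grammar-based sampling constrains generation by masking out tokens the grammar cannot accept before one is chosen. A toy greedy sketch of the principle (names hypothetical; the real implementation tracks grammar state in C++ via llama.cpp):

```python
def grammar_constrained_pick(logits: dict, allowed: set) -> str:
    """Greedy grammar-constrained choice: discard every candidate token
    the grammar would reject, then take the best remaining score."""
    candidates = {tok: s for tok, s in logits.items() if tok in allowed}
    if not candidates:
        raise ValueError("grammar rejects all candidate tokens")
    return max(candidates, key=candidates.get)
```

The later commits in this series (resetting the grammar for every generation, guarding `llama_grammar_free`) deal with the stateful part: the set of allowed tokens changes after each accepted token, so the grammar state must be rewound between generations.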
Mike Zeng
097fba25e5 Fixed spelling error "lowe-level API" to "low-level API" 2023-08-05 02:00:04 -05:00
c0sogi
ac188a21f3 Added low level grammar API 2023-08-05 14:43:35 +09:00
bretello
9f499af6b0 Update llama.cpp 2023-08-03 18:25:28 +02:00
bretello
39978ccaf5 add mul_mat_q parameter
This also fixes a crash when loading the 70B Llama 2 model on macOS with
Metal and `n_gpu_layers=1`
2023-08-03 18:24:50 +02:00
Ihsan Soydemir
a5bc57e279 Update README.md 2023-08-03 16:49:45 +02:00
Ihsan Soydemir
d4844b93ae Merge pull request #2 from Isydmr/isydmr/docker-improvements (Docker improvements) 2023-08-03 16:42:27 +02:00
Ihsan Soydemir
cdab73536b Docker improvements 2023-08-03 16:36:50 +02:00