Commit graph

1041 commits

Author SHA1 Message Date
Andrei Betlen
e0dcbc28a1 Update llama.cpp 2023-08-28 10:33:45 -04:00
Andrei Betlen
3d083f32a6 Bump version 2023-08-27 13:03:09 -04:00
Andrei
324af60731
Merge pull request #644 from manasmagdum/patch-1
Fix issue of missing words due to buffer overflow for Issue #642
2023-08-27 13:02:41 -04:00
Andrei
6e1a73b3f6
Merge branch 'main' into patch-1 2023-08-27 13:02:29 -04:00
Andrei Betlen
4887973c22 Update llama.cpp 2023-08-27 12:59:20 -04:00
manasmagdum
4100bdec31
Fix issue of missing words due to buffer overflow 2023-08-27 16:39:46 +05:30
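The commit above fixes words going missing during detokenization because a fixed-size buffer silently truncated long token strings. The actual patch in #644 may differ in detail; the sketch below only illustrates the general hazard the commit title describes, using a toy stand-in (`fake_c_call`, hypothetical) for a C-style API that writes into a caller-supplied buffer and reports the length it needed:

```python
# Hedged sketch of the truncation hazard behind this kind of fix.
# A C-style API writes into a caller-supplied buffer and returns the
# length it actually needed; truncating silently when the text outgrows
# the buffer drops words. The robust pattern checks the reported length
# and retries with a larger buffer.
def token_to_text(write_fn, initial_size=8):
    """write_fn(buf_size) -> (needed_len, text_truncated_to_buf_size)."""
    size = initial_size
    needed, text = write_fn(size)
    if needed > size:            # buffer was too small: retry, don't truncate
        needed, text = write_fn(needed)
    return text

# Toy stand-in for the C call (hypothetical, for illustration only):
full_piece = "supercalifragilistic"

def fake_c_call(buf_size):
    return len(full_piece), full_piece[:buf_size]

print(token_to_text(fake_c_call))  # supercalifragilistic
```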
Andrei Betlen
9ab49bc1d4 Bump version 2023-08-26 23:38:00 -04:00
Andrei Betlen
3a29d65f45 Update llama.cpp 2023-08-26 23:36:24 -04:00
Andrei Betlen
5de8009706 Add copilot-codex completions endpoint for drop-in copilot usage 2023-08-25 17:49:14 -04:00
Andrei Betlen
895f84f8fa Add ROCm / AMD instructions to docs 2023-08-25 17:19:23 -04:00
Andrei Betlen
ac47d55577 Merge branch 'main' into v0.2-wip 2023-08-25 15:45:22 -04:00
Andrei Betlen
3f8bc417d7 Bump version 2023-08-25 15:18:15 -04:00
Andrei
915bbeacc5
Merge pull request #633 from abetlen/gguf
GGUF (Breaking Change to Model Files)
2023-08-25 15:13:12 -04:00
Andrei Betlen
ac37ea562b Add temporary docs for GGUF model conversion 2023-08-25 15:11:08 -04:00
Andrei Betlen
ef23d1e545 Update llama.cpp 2023-08-25 14:35:53 -04:00
Andrei Betlen
c8a7637978 Ignore vendor directory for tests 2023-08-25 14:35:27 -04:00
Andrei Betlen
48cf43b427 Use _with_model variants for tokenization 2023-08-25 13:43:16 -04:00
Andrei Betlen
80389f71da Update README 2023-08-25 05:02:48 -04:00
Andrei Betlen
8ac59465b9 Strip leading space when de-tokenizing. 2023-08-25 04:56:48 -04:00
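The commit above addresses a quirk of SentencePiece-style tokenizers (as used by LLaMA models): word boundaries are marked with U+2581 ("▁"), including one prepended to the very first token, so naive detokenization yields a spurious leading space. This is an illustrative pure-Python sketch of that behavior, not the library's actual code:

```python
# Illustrative sketch: SentencePiece-style pieces mark word starts with
# U+2581 ("▁"), so joining them and mapping "▁" to spaces leaves an
# artificial space before the first word, which must be stripped.
def detokenize(pieces):
    """Join SentencePiece-style pieces, restoring spaces and stripping
    the artificial leading space before the first word."""
    text = "".join(pieces).replace("\u2581", " ")
    return text[1:] if text.startswith(" ") else text

pieces = ["\u2581Hello", "\u2581world", "!"]
print(detokenize(pieces))  # Hello world!
```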
Andrei Betlen
c2d1deaa8a Update llama.cpp 2023-08-24 18:01:42 -04:00
Andrei Betlen
3674e5ed4e Update model path 2023-08-24 01:01:20 -04:00
Andrei Betlen
db982a861f Fix 2023-08-24 01:01:12 -04:00
Andrei Betlen
4ed632c4b3 Remove deprecated params 2023-08-24 01:01:05 -04:00
Andrei Betlen
cf405f6764 Merge branch 'main' into v0.2-wip 2023-08-24 00:30:51 -04:00
Andrei Betlen
bbbf0f4fc4 Update llama.cpp 2023-08-24 00:17:00 -04:00
Andrei
d644199fe8
Merge pull request #625 from abetlen/dependabot/pip/mkdocs-material-9.2.0
Bump mkdocs-material from 9.1.21 to 9.2.0
2023-08-22 15:53:57 -04:00
dependabot[bot]
abca3d81c8
Bump mkdocs-material from 9.1.21 to 9.2.0
Bumps [mkdocs-material](https://github.com/squidfunk/mkdocs-material) from 9.1.21 to 9.2.0.
- [Release notes](https://github.com/squidfunk/mkdocs-material/releases)
- [Changelog](https://github.com/squidfunk/mkdocs-material/blob/master/CHANGELOG)
- [Commits](https://github.com/squidfunk/mkdocs-material/compare/9.1.21...9.2.0)

---
updated-dependencies:
- dependency-name: mkdocs-material
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-21 20:53:56 +00:00
Andrei
2a0844b190
Merge pull request #556 from Huge/patch-1
Fix dev setup in README.md so that everyone can run it
2023-08-17 23:21:45 -04:00
Andrei
876f39d751
Merge pull request #564 from Isydmr/main
Docker improvements
2023-08-17 23:21:27 -04:00
Andrei
4cf0461f97
Merge pull request #598 from pradhyumna85/main
Fixed Cuda Dockerfile
2023-08-17 23:20:43 -04:00
Andrei Betlen
8fc3fa9f1c Bump version 2023-08-17 23:17:56 -04:00
Andrei Betlen
da1ef72c51 Update llama.cpp 2023-08-17 23:02:20 -04:00
Andrei Betlen
e632c59fa0 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-08-17 20:53:04 -04:00
Andrei
7ac73b8d94
Merge pull request #621 from c0sogi/main
Fix typos in llama_grammar
2023-08-17 20:52:48 -04:00
c0sogi
a240aa6b25 Fix typos in llama_grammar 2023-08-17 21:00:44 +09:00
Andrei Betlen
620cd2fd69 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-08-14 22:41:47 -04:00
Andrei Betlen
5788f1f2b2 Remove unused import 2023-08-14 22:41:37 -04:00
Andrei
6dfb98117e
Merge pull request #600 from Vuizur/main
Add py.typed to conform with PEP 561
2023-08-14 22:40:41 -04:00
Andrei
b99e758045
Merge pull request #604 from aliencaocao/main-1
Add doc string for n_gpu_layers argument and make -1 offload all layers
2023-08-14 22:40:10 -04:00
Andrei Betlen
b345d60987 Update llama.cpp 2023-08-14 22:33:30 -04:00
Andrei
91e86e5d71
Merge pull request #610 from abetlen/dependabot/pip/fastapi-0.101.1
Bump fastapi from 0.101.0 to 0.101.1
2023-08-14 22:22:56 -04:00
Andrei
c48b18b364
Merge pull request #611 from abetlen/dependabot/pip/sse-starlette-1.6.5
Bump sse-starlette from 1.6.1 to 1.6.5
2023-08-14 22:22:51 -04:00
Andrei
07e23f46a5
Merge pull request #612 from abetlen/dependabot/pip/pydantic-settings-2.0.3
Bump pydantic-settings from 2.0.2 to 2.0.3
2023-08-14 22:22:44 -04:00
dependabot[bot]
485ad97909
Bump pydantic-settings from 2.0.2 to 2.0.3
Bumps [pydantic-settings](https://github.com/pydantic/pydantic-settings) from 2.0.2 to 2.0.3.
- [Release notes](https://github.com/pydantic/pydantic-settings/releases)
- [Commits](https://github.com/pydantic/pydantic-settings/compare/v2.0.2...v2.0.3)

---
updated-dependencies:
- dependency-name: pydantic-settings
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-14 20:40:16 +00:00
dependabot[bot]
e91969c888
Bump sse-starlette from 1.6.1 to 1.6.5
Bumps [sse-starlette](https://github.com/sysid/sse-starlette) from 1.6.1 to 1.6.5.
- [Release notes](https://github.com/sysid/sse-starlette/releases)
- [Changelog](https://github.com/sysid/sse-starlette/blob/main/CHANGELOG.md)
- [Commits](https://github.com/sysid/sse-starlette/compare/v1.6.1...v1.6.5)

---
updated-dependencies:
- dependency-name: sse-starlette
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-14 20:38:42 +00:00
dependabot[bot]
077f8ed23e
Bump fastapi from 0.101.0 to 0.101.1
Bumps [fastapi](https://github.com/tiangolo/fastapi) from 0.101.0 to 0.101.1.
- [Release notes](https://github.com/tiangolo/fastapi/releases)
- [Commits](https://github.com/tiangolo/fastapi/compare/0.101.0...0.101.1)

---
updated-dependencies:
- dependency-name: fastapi
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-14 20:36:56 +00:00
Billy Cao
c471871d0b
make n_gpu_layers=-1 offload all layers 2023-08-13 11:21:28 +08:00
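The commit above makes `-1` a sentinel for "offload all layers to the GPU". A minimal sketch of the assumed resolution logic (names hypothetical; the library's internal handling may differ):

```python
# Sketch of the sentinel convention this commit introduces: a negative
# n_gpu_layers means "offload every layer"; otherwise the requested
# count is clamped to the model's actual layer count.
def resolve_gpu_layers(n_gpu_layers, model_layer_count):
    if n_gpu_layers < 0:
        return model_layer_count           # -1 (or any negative) => all layers
    return min(n_gpu_layers, model_layer_count)

print(resolve_gpu_layers(-1, 32))  # 32
print(resolve_gpu_layers(20, 32))  # 20
```

In llama-cpp-python this surfaces as the `n_gpu_layers` argument to the `Llama` constructor, e.g. `Llama(model_path=..., n_gpu_layers=-1)`.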
Billy Cao
d018c7b01d
Add doc string for n_gpu_layers argument 2023-08-12 18:41:47 +08:00
Hannes Krumbiegel
17dd7fa8e0 Add py.typed 2023-08-11 09:58:48 +02:00
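Per PEP 561, a package opts into downstream type checking by shipping an empty `py.typed` marker file inside the package; type checkers such as mypy then consume its inline annotations. One common way to ship the marker with setuptools is sketched below (hedged; the project's actual build configuration may differ):

```toml
# pyproject.toml fragment (illustrative): ensure the empty py.typed
# marker is included in the built distribution.
[tool.setuptools.package-data]
llama_cpp = ["py.typed"]
```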
Pradyumna Singh Rathore
d010ea70d2
Fixed Cuda Dockerfile
Previously models produced garbage output when running on GPU with layers offloaded.

Similar to related fix on another repo: 331326a0e3
2023-08-10 20:41:34 +05:30