Andrei Betlen
eb9c7d4ed8
Update llama.cpp
2024-01-03 22:04:04 -05:00
Andrei Betlen
92284f32cb
Add HIP_PATH to DLL search directories for Windows users.
2023-12-22 15:29:56 -05:00
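A minimal sketch of the kind of change this describes, assuming the common pattern of registering the directory named by HIP_PATH via os.add_dll_directory; the helper name and the "bin" subdirectory are assumptions, not the actual code:

    import os
    import sys

    def _add_hip_dll_directory() -> None:
        # Hypothetical helper: make the ROCm/HIP runtime DLLs resolvable when
        # loading the llama shared library with ctypes on Windows (Python 3.8+).
        hip_path = os.environ.get("HIP_PATH")
        if sys.platform == "win32" and hip_path is not None:
            # The "bin" subdirectory is an assumption about the HIP SDK layout.
            os.add_dll_directory(os.path.join(hip_path, "bin"))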
Andrei Betlen
2b0d3f36fa
set llama_max_devices using library function
2023-12-22 15:19:28 -05:00
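A sketch of querying the device limit from the loaded library instead of hard-coding it in Python; it assumes the library exports llama_max_devices (as llama.cpp does) and that the return type is size_t:

    import ctypes

    def get_llama_max_devices(lib: ctypes.CDLL) -> int:
        # Ask the library for its device limit rather than duplicating the
        # constant in the bindings; restype/argtypes here are assumptions.
        lib.llama_max_devices.restype = ctypes.c_size_t
        lib.llama_max_devices.argtypes = []
        return int(lib.llama_max_devices())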
Andrei Betlen
6d8bc090f9
fix: incorrect bindings for kv override. Based on #1011
2023-12-22 14:52:20 -05:00
Andrei Betlen
6473796343
Update llama.cpp
2023-12-22 14:10:34 -05:00
Andrei Betlen
4a85442c35
Update llama.cpp
2023-12-22 00:12:37 -05:00
Andrei Betlen
7df6c32544
Fix type annotations
2023-12-18 18:14:53 -05:00
Andrei Betlen
b703aad79e
Fix type annotation
2023-12-18 18:13:37 -05:00
Andrei Betlen
d0aedfcff6
Fix type annotation
2023-12-18 18:12:49 -05:00
Eduard Christian Dumitrescu
2993936b10
Fix ctypes definitions of llama_kv_cache_view_update and llama_kv_cache_view_free (#1028)
2023-12-18 18:11:26 -05:00
Brandon Roberts
62944df142
Bugfix: Remove f16_kv, add offload_kqv field (#1019)
F16_KV appears to have been removed here: af99c6fbfc
This addresses two issues:
- #995 which just requests to add the KV cache offloading param
- #1006 a NULL ptr exception when using the embeddings (introduced by leaving f16_kv in the fields struct)
2023-12-18 14:27:11 -05:00
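The body above hints at why the stale field mattered: a ctypes Structure must mirror the C struct member-for-member, so keeping a field the C side removed shifts every later member's offset. A simplified illustration (these are not the real llama_context_params fields):

    import ctypes

    class ContextParamsStale(ctypes.Structure):
        # Keeping f16_kv after upstream removed it pushes offload_kqv (and any
        # later pointer fields) to the wrong byte offsets, hence the NULL-ptr crash.
        _fields_ = [
            ("n_ctx", ctypes.c_uint32),
            ("f16_kv", ctypes.c_bool),       # removed upstream; must be removed here too
            ("offload_kqv", ctypes.c_bool),
        ]

    class ContextParamsFixed(ctypes.Structure):
        # Matching the current C layout keeps all offsets valid.
        _fields_ = [
            ("n_ctx", ctypes.c_uint32),
            ("offload_kqv", ctypes.c_bool),
        ]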
Andrei Betlen
534b1ea9b5
Update llama.cpp
2023-12-16 18:57:43 -05:00
Andrei Betlen
c0fc0a1e82
Update llama.cpp
2023-12-13 21:43:16 -05:00
Andrei Betlen
f1edc66b21
Update llama.cpp
2023-12-11 10:21:35 -05:00
Andrei Betlen
396dbf0b2b
docs: Improve low-level docstrings
2023-11-27 19:03:02 -05:00
Andrei Betlen
f03a38e62a
Update llama.cpp
2023-11-26 15:38:22 -05:00
Andrei Betlen
36048d46af
Update llama.cpp
2023-11-23 16:26:00 -05:00
Andrei Betlen
be1f64d569
docs: Add docstrings from llama.cpp
2023-11-23 00:26:26 -05:00
Andrei Betlen
2c2afa320f
Update llama.cpp
2023-11-20 14:11:33 -05:00
Andrei Betlen
f0b30ef7dc
Update llama.cpp
2023-11-05 16:57:10 -05:00
Andrei Betlen
df9362eeea
Update llama.cpp
2023-11-03 11:34:50 -04:00
Andrei Betlen
fa83cc5f9c
Update llama.cpp
Fix build examples
Exclude examples directory
Revert cmake changes
Try actions/checkout@v4
Try to update submodules
Revert
2023-11-02 14:28:15 -04:00
Sujeendran Menon
7b136bb5b1
Fix for shared library not found and compile issues in Windows (#848)
* fix windows library dll name issue
* Updated README.md Windows instructions
* Update llama_cpp.py to handle different windows dll file versions
2023-11-01 18:55:57 -04:00
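A hedged sketch of coping with differently named Windows DLLs; the candidate names, glob pattern, and helper are assumptions rather than the PR's exact logic:

    import ctypes
    import pathlib

    def load_llama_dll(base_path: pathlib.Path) -> ctypes.CDLL:
        # Try the usual names first, then any llama*.dll a given toolchain
        # may have produced (e.g. versioned or differently prefixed builds).
        candidates = ["llama.dll", "libllama.dll"]
        candidates += sorted(p.name for p in base_path.glob("llama*.dll"))
        for name in candidates:
            dll_path = base_path / name
            if dll_path.exists():
                return ctypes.CDLL(str(dll_path))
        raise FileNotFoundError(f"No llama shared library found in {base_path}")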
Andrei Betlen
d808fd436c
Update llama.cpp
2023-10-31 21:29:35 -04:00
Andrei Betlen
53861c9e53
Update llama.cpp
2023-10-24 03:13:32 -04:00
Andrei Betlen
ff580031d2
Update llama.cpp
2023-10-19 02:55:08 -04:00
Andrei Betlen
43dfe1e2ab
Update llama.cpp
2023-10-05 16:07:49 -04:00
Andrei Betlen
a7d17b8ac9
Update llama.cpp
2023-10-03 15:23:35 -04:00
Andrei Betlen
3720c739d4
Update llama.cpp
2023-09-29 19:58:21 -04:00
Andrei Betlen
1a1c3dc418
Update llama.cpp
2023-09-28 22:42:03 -04:00
Andrei Betlen
38e34c97f0
Update llama.cpp
2023-09-18 16:11:27 -04:00
Andrei Betlen
8d75016549
Install required runtime DLLs to package directory on Windows
2023-09-16 14:57:49 -04:00
Andrei Betlen
8474665625
Update base_path to fix issue resolving DLL in Windows isolation container.
2023-09-14 14:51:43 -04:00
Andrei Betlen
f4090a0bb2
Add numa support; low-level API users must now explicitly call llama_backend_init at the start of their programs.
2023-09-13 23:00:43 -04:00
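A usage sketch for low-level API callers after this change, assuming the binding mirrors llama.cpp's `llama_backend_init(bool numa)` signature of that period and that `llama_backend_free` is bound as well:

    import llama_cpp

    # Low-level API users must now initialize the backend themselves, once,
    # before any other llama_* call.
    llama_cpp.llama_backend_init(False)  # pass True to enable NUMA optimizations

    # ... use the low-level API (load a model, create a context, sample, ...) ...

    # Release backend resources at shutdown.
    llama_cpp.llama_backend_free()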
Andrei Betlen
517f9ed80b
Convert missed llama.cpp constants into standard Python types
2023-09-13 21:11:52 -04:00
Andrei Betlen
1910793f56
Merge branch 'main' into v0.2-wip
2023-09-12 16:43:32 -04:00
Andrei Betlen
d3f63211ef
Update llama.cpp
2023-09-09 12:12:32 -04:00
Andrei Betlen
186626d58e
Update llama.cpp
2023-09-01 14:26:13 -04:00
Andrei Betlen
47de3ab104
Update llama.cpp
2023-08-29 07:36:20 -04:00
Andrei Betlen
e0dcbc28a1
Update llama.cpp
2023-08-28 10:33:45 -04:00
Andrei Betlen
4887973c22
Update llama.cpp
2023-08-27 12:59:20 -04:00
Andrei Betlen
ac47d55577
Merge branch 'main' into v0.2-wip
2023-08-25 15:45:22 -04:00
Andrei Betlen
ef23d1e545
Update llama.cpp
2023-08-25 14:35:53 -04:00
Andrei Betlen
c2d1deaa8a
Update llama.cpp
2023-08-24 18:01:42 -04:00
Andrei Betlen
db982a861f
Fix
2023-08-24 01:01:12 -04:00
Andrei Betlen
cf405f6764
Merge branch 'main' into v0.2-wip
2023-08-24 00:30:51 -04:00
Andrei Betlen
bbbf0f4fc4
Update llama.cpp
2023-08-24 00:17:00 -04:00
Andrei Betlen
b345d60987
Update llama.cpp
2023-08-14 22:33:30 -04:00
Andrei Betlen
843b7ccd90
Merge branch 'main' into c0sogi/main
2023-08-08 14:43:02 -04:00
c0sogi
ac188a21f3
Added low level grammar API
2023-08-05 14:43:35 +09:00
bretello
39978ccaf5
add mul_mat_q parameter
This also fixes a crash when loading the 70b llama2 model on macOS with Metal and `n_gpu_layers=1`.
2023-08-03 18:24:50 +02:00
Andrei Betlen
078902a6fe
Add llama_grammar_accept_token
2023-07-24 15:55:26 -04:00
Andrei Betlen
bf901773b0
Add llama_sample_grammar
2023-07-24 15:42:31 -04:00
Andrei Betlen
1b6997d69f
Convert constants to Python types and allow Python types in low-level API
2023-07-24 15:42:07 -04:00
Andrei Betlen
401309d11c
Revert "Merge pull request #521 from bretello/main"
This reverts commit 07f0f3a386, reversing changes made to d8a3ddbb1c.
2023-07-24 13:11:10 -04:00
Andrei
07f0f3a386
Merge pull request #521 from bretello/main
raise exception when `llama_load_model_from_file` fails
2023-07-24 13:09:28 -04:00
Andrei Betlen
d8a3ddbb1c
Update llama.cpp
2023-07-24 13:08:06 -04:00
Andrei Betlen
985d559971
Update llama.cpp
2023-07-24 13:04:34 -04:00
bretello
8be7d67f7e
raise exception when llama_load_model_from_file fails
2023-07-24 14:42:37 +02:00
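A sketch of the pattern this commit describes: treat a NULL return from llama_load_model_from_file as an error and raise, instead of handing callers a null handle. The wrapper name and message are illustrative:

    import ctypes

    def load_model_or_raise(lib: ctypes.CDLL, path: str, params) -> int:
        # llama_load_model_from_file returns NULL on failure; surface that as
        # a Python exception rather than a silent null pointer.
        lib.llama_load_model_from_file.restype = ctypes.c_void_p
        model = lib.llama_load_model_from_file(path.encode("utf-8"), params)
        if not model:
            raise RuntimeError(f"Failed to load model from file: {path}")
        return model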
Andrei Betlen
b83728ad1e
Update llama.cpp
2023-07-21 12:33:27 -04:00
Andrei Betlen
01435da740
Update llama.cpp
2023-07-20 18:54:25 -04:00
Andrei Betlen
d10ce62714
Revert ctypes argtype change
2023-07-20 18:51:53 -04:00
Vinicius
a8551477f5
Update llama_cpp.py - Fix c_char_p to Array[c_char_p] and c_float to Array[c_float]
2023-07-20 17:29:11 -03:00
Andrei Betlen
e4f9db37db
Fix context_params struct layout
2023-07-15 15:34:55 -04:00
Andrei Betlen
f0797a6054
Merge branch 'main' into custom_rope
2023-07-15 15:11:01 -04:00
randoentity
3f8f276f9f
Add bindings for custom_rope
2023-07-10 17:37:46 +02:00
Andrei Betlen
98ae4e58a3
Update llama.cpp
2023-07-06 17:57:56 -04:00
Andrei Betlen
b994296c75
Update llama.cpp
2023-07-05 01:00:14 -04:00
Andrei Betlen
c67f786360
Update llama.cpp
2023-06-29 01:08:15 -04:00
Andrei Betlen
952228407e
Update llama.cpp
2023-06-26 08:50:38 -04:00
Andrei Betlen
e37798777e
Update llama.cpp
2023-06-20 11:25:10 -04:00
Andrei Betlen
d7153abcf8
Update llama.cpp
2023-06-16 23:11:14 -04:00
Andrei Betlen
715f98c591
Update llama.cpp
2023-06-14 21:40:13 -04:00
Andrei Betlen
6639371407
Update llama.cpp
2023-06-10 12:17:38 -04:00
Andrei Betlen
607d217caa
Allow both .so and .dylib extensions for macOS
2023-06-08 00:27:19 -04:00
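A sketch of the idea behind this commit: a macOS build may produce either a .so or a .dylib, so both suffixes are considered when locating the library. The libllama base name is an assumption:

    import pathlib
    import sys

    def candidate_library_paths(lib_dir: pathlib.Path) -> list:
        # macOS toolchains may emit .so or .dylib, so accept both there;
        # keep the usual single suffix on other platforms.
        if sys.platform == "darwin":
            suffixes = [".so", ".dylib"]
        elif sys.platform == "win32":
            suffixes = [".dll"]
        else:
            suffixes = [".so"]
        return [lib_dir / f"libllama{suffix}" for suffix in suffixes]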
Andrei Betlen
aad4b17f52
Update llama.cpp
2023-06-06 16:23:55 -04:00
Andrei Betlen
7b57420ea9
Update llama.cpp
2023-06-05 18:17:29 -04:00
Andrei Betlen
fafe47114c
Update llama.cpp
2023-05-21 17:47:21 -04:00
Andrei Betlen
01a010be52
Fix llama_cpp and Llama type signatures. Closes #221
2023-05-19 11:59:33 -04:00
Andrei Betlen
61d58e7b35
Check for CUDA_PATH before adding
2023-05-17 15:26:38 -04:00
Aneesh Joy
e9794f91f2
Fixed CUBLAS DLL load issue in Windows
2023-05-17 18:04:58 +01:00
Andrei Betlen
cbac19bf24
Add winmode arg only on Windows if Python version supports it
2023-05-15 09:15:01 -04:00
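A sketch of the gate described here; ctypes.CDLL only accepts the winmode keyword from Python 3.8 onward, so it has to be passed conditionally. The winmode value shown is illustrative:

    import ctypes
    import sys

    def load_shared_library(lib_path: str) -> ctypes.CDLL:
        kwargs = {}
        if sys.platform == "win32" and sys.version_info >= (3, 8):
            # winmode=0 requests the standard Windows DLL search order; only
            # Python 3.8+ knows this keyword, hence the version check.
            kwargs["winmode"] = 0
        return ctypes.CDLL(lib_path, **kwargs)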
Andrei Betlen
c804efe3f0
Fix obscure Windows DLL issue. Closes #208
2023-05-14 22:08:11 -04:00
Andrei Betlen
cdf59768f5
Update llama.cpp
2023-05-14 00:04:22 -04:00
Andrei Betlen
7a536e86c2
Allow model to tokenize strings longer than context length and set add_bos. Closes #92
2023-05-12 14:28:22 -04:00
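A sketch of what this enables: size the token buffer from the input text instead of the context length, and expose add_bos. It assumes the llama_tokenize signature of that era (ctx, text, tokens, n_max_tokens, add_bos) and its convention of returning a negative count when the buffer is too small:

    import ctypes

    def tokenize(lib: ctypes.CDLL, ctx, text: bytes, add_bos: bool = True):
        # Size the buffer from the text itself so inputs longer than n_ctx can
        # still be tokenized; context-length limits are enforced later, at eval.
        n_max_tokens = len(text) + int(add_bos)
        tokens = (ctypes.c_int32 * n_max_tokens)()
        n = lib.llama_tokenize(ctx, text, tokens, n_max_tokens, ctypes.c_bool(add_bos))
        if n < 0:
            raise ValueError(f"Tokenization needed {-n} tokens, buffer had {n_max_tokens}")
        return list(tokens[:n])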
Andrei Betlen
8dfde63255
Fix return type
2023-05-07 19:30:14 -04:00
Andrei Betlen
3fbda71790
Fix mlock_supported and mmap_supported return type
2023-05-07 03:04:22 -04:00
Andrei Betlen
7c3743fe5f
Update llama.cpp
2023-05-07 00:12:47 -04:00
Andrei Betlen
b5f3e74627
Add return type annotations for embeddings and logits
2023-05-05 14:22:55 -04:00
Andrei Betlen
3e28e0e50c
Fix: runtime type errors
2023-05-05 14:12:26 -04:00
Andrei Betlen
e24c3d7447
Prefer explicit imports
2023-05-05 14:05:31 -04:00
Andrei Betlen
40501435c1
Fix: types
2023-05-05 14:04:12 -04:00
Andrei Betlen
6702d2abfd
Fix candidates type
2023-05-05 14:00:30 -04:00
Andrei Betlen
5e7ddfc3d6
Fix llama_cpp types
2023-05-05 13:54:22 -04:00
Andrei Betlen
b6a9a0b6ba
Add types for all low-level api functions
2023-05-05 12:22:27 -04:00
Andrei Betlen
1d47cce222
Update llama.cpp
2023-05-03 09:33:30 -04:00
Matt Hoffner
f97ff3c5bb
Update llama_cpp.py
2023-05-01 20:40:06 -07:00
Andrei Betlen
350a1769e1
Update sampling api
2023-05-01 14:47:55 -04:00
Andrei Betlen
7837c3fdc7
Fix return types and import comments
2023-05-01 14:02:06 -04:00
Andrei Betlen
80184a286c
Update llama.cpp
2023-05-01 10:44:28 -04:00