Andrei Betlen | f03a38e62a | Update llama.cpp | 2023-11-26 15:38:22 -05:00
Andrei Betlen | 36048d46af | Update llama.cpp | 2023-11-23 16:26:00 -05:00
Andrei Betlen | be1f64d569 | docs: Add docstrings from llama.cpp | 2023-11-23 00:26:26 -05:00
Andrei Betlen | 2c2afa320f | Update llama.cpp | 2023-11-20 14:11:33 -05:00
Andrei Betlen | f0b30ef7dc | Update llama.cpp | 2023-11-05 16:57:10 -05:00
Andrei Betlen | df9362eeea | Update llama.cpp | 2023-11-03 11:34:50 -04:00
Andrei Betlen | fa83cc5f9c | Update llama.cpp | 2023-11-02 14:28:15 -04:00
  Fix build examples
  Exclude examples directory
  Revert cmake changes
  Try actions/checkout@v4
  Try to update submodules
  Revert
Sujeendran Menon | 7b136bb5b1 | Fix for shared library not found and compile issues in Windows (#848) | 2023-11-01 18:55:57 -04:00
  * fix windows library dll name issue
  * Updated README.md Windows instructions
  * Update llama_cpp.py to handle different windows dll file versions
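The shared-library lookup behind this fix amounts to probing a few platform-specific file names next to the package and making their dependent runtime DLLs resolvable first. A minimal sketch of that pattern; the helper name and candidate list are illustrative, not the binding's actual code:

```python
import ctypes
import os
import pathlib
import sys

def _load_shared_lib(base_path: pathlib.Path) -> ctypes.CDLL:
    # Candidate file names differ by platform and build toolchain
    # (illustrative list, not the binding's actual search order).
    if sys.platform == "win32":
        candidates = ["llama.dll", "libllama.dll"]
    elif sys.platform == "darwin":
        candidates = ["libllama.dylib"]
    else:
        candidates = ["libllama.so"]

    # On Windows, make dependent runtime DLLs shipped next to the
    # library resolvable before loading it.
    if sys.platform == "win32" and hasattr(os, "add_dll_directory"):
        os.add_dll_directory(str(base_path))

    for name in candidates:
        path = base_path / name
        if path.exists():
            return ctypes.CDLL(str(path))
    raise FileNotFoundError(f"Shared library not found in {base_path}")
```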
Andrei Betlen | d808fd436c | Update llama.cpp | 2023-10-31 21:29:35 -04:00
Andrei Betlen | 53861c9e53 | Update llama.cpp | 2023-10-24 03:13:32 -04:00
Andrei Betlen | ff580031d2 | Update llama.cpp | 2023-10-19 02:55:08 -04:00
Andrei Betlen | 43dfe1e2ab | Update llama.cpp | 2023-10-05 16:07:49 -04:00
Andrei Betlen | a7d17b8ac9 | Update llama.cpp | 2023-10-03 15:23:35 -04:00
Andrei Betlen | 3720c739d4 | Update llama.cpp | 2023-09-29 19:58:21 -04:00
Andrei Betlen | 1a1c3dc418 | Update llama.cpp | 2023-09-28 22:42:03 -04:00
Andrei Betlen | 38e34c97f0 | Update llama.cpp | 2023-09-18 16:11:27 -04:00
Andrei Betlen | 8d75016549 | Install required runtime DLLs to package directory on Windows | 2023-09-16 14:57:49 -04:00
Andrei Betlen | 8474665625 | Update base_path to fix issue resolving DLL in Windows isolation container | 2023-09-14 14:51:43 -04:00
Andrei Betlen | f4090a0bb2 | Add NUMA support; low-level API users must now explicitly call llama_backend_init at the start of their programs | 2023-09-13 23:00:43 -04:00
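A minimal sketch of the initialization this commit requires, assuming the binding mirrored llama.cpp's `llama_backend_init(bool numa)` signature of that era (later releases changed it):

```python
import llama_cpp

# Assumption: llama_backend_init takes a single `numa` flag here;
# pass True to enable NUMA optimizations.
llama_cpp.llama_backend_init(False)
try:
    pass  # ... use the low-level API: load model, create context, sample ...
finally:
    # Release backend resources on shutdown.
    llama_cpp.llama_backend_free()
```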
Andrei Betlen | 517f9ed80b | Convert missed llama.cpp constants into standard python types | 2023-09-13 21:11:52 -04:00
Andrei Betlen | 1910793f56 | Merge branch 'main' into v0.2-wip | 2023-09-12 16:43:32 -04:00
Andrei Betlen | d3f63211ef | Update llama.cpp | 2023-09-09 12:12:32 -04:00
Andrei Betlen | 186626d58e | Update llama.cpp | 2023-09-01 14:26:13 -04:00
Andrei Betlen | 47de3ab104 | Update llama.cpp | 2023-08-29 07:36:20 -04:00
Andrei Betlen | e0dcbc28a1 | Update llama.cpp | 2023-08-28 10:33:45 -04:00
Andrei Betlen | 4887973c22 | Update llama.cpp | 2023-08-27 12:59:20 -04:00
Andrei Betlen | ac47d55577 | Merge branch 'main' into v0.2-wip | 2023-08-25 15:45:22 -04:00
Andrei Betlen | ef23d1e545 | Update llama.cpp | 2023-08-25 14:35:53 -04:00
Andrei Betlen | c2d1deaa8a | Update llama.cpp | 2023-08-24 18:01:42 -04:00
Andrei Betlen | db982a861f | Fix | 2023-08-24 01:01:12 -04:00
Andrei Betlen | cf405f6764 | Merge branch 'main' into v0.2-wip | 2023-08-24 00:30:51 -04:00
Andrei Betlen | bbbf0f4fc4 | Update llama.cpp | 2023-08-24 00:17:00 -04:00
Andrei Betlen | b345d60987 | Update llama.cpp | 2023-08-14 22:33:30 -04:00
Andrei Betlen | 843b7ccd90 | Merge branch 'main' into c0sogi/main | 2023-08-08 14:43:02 -04:00
c0sogi | ac188a21f3 | Added low level grammar API | 2023-08-05 14:43:35 +09:00
bretello | 39978ccaf5 | add mul_mat_q parameter | 2023-08-03 18:24:50 +02:00
  This also fixes a crash when loading the 70B LLaMA 2 model on macOS with Metal and `n_gpu_layers=1`.
Andrei Betlen | 078902a6fe | Add llama_grammar_accept_token | 2023-07-24 15:55:26 -04:00
Andrei Betlen | bf901773b0 | Add llama_sample_grammar | 2023-07-24 15:42:31 -04:00
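Together with c0sogi's low-level grammar API above, these two calls enable grammar-constrained sampling. A sketch of the call order only; building the `llama_grammar` and the `llama_token_data_array` is omitted, and the names follow llama.cpp's C API of the time, which the low-level binding is assumed to mirror:

```python
import llama_cpp

def sample_with_grammar(ctx, candidates, grammar):
    # `candidates` is a pointer to a llama_token_data_array holding the
    # current logits; `grammar` comes from llama_grammar_init.
    # 1. Mask out tokens the grammar does not allow at this position.
    llama_cpp.llama_sample_grammar(ctx, candidates, grammar)
    # 2. Pick a token from the constrained candidates (greedy here).
    token = llama_cpp.llama_sample_token_greedy(ctx, candidates)
    # 3. Advance the grammar state so the next step knows what is legal.
    llama_cpp.llama_grammar_accept_token(ctx, grammar, token)
    return token
```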
Andrei Betlen | 1b6997d69f | Convert constants to python types and allow python types in low-level api | 2023-07-24 15:42:07 -04:00
Andrei Betlen | 401309d11c | Revert "Merge pull request #521 from bretello/main" | 2023-07-24 13:11:10 -04:00
  This reverts commit 07f0f3a386, reversing changes made to d8a3ddbb1c.
Andrei | 07f0f3a386 | Merge pull request #521 from bretello/main | 2023-07-24 13:09:28 -04:00
  raise exception when `llama_load_model_from_file` fails
Andrei Betlen | d8a3ddbb1c | Update llama.cpp | 2023-07-24 13:08:06 -04:00
Andrei Betlen | 985d559971 | Update llama.cpp | 2023-07-24 13:04:34 -04:00
bretello | 8be7d67f7e | raise exception when llama_load_model_from_file fails | 2023-07-24 14:42:37 +02:00
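The pattern this commit describes is checking the NULL return of the C call instead of letting a later dereference crash. A minimal sketch, with the helper name and simplified parameter handling as illustrative assumptions:

```python
import llama_cpp

def load_model_or_raise(path: str, params):
    # llama_load_model_from_file returns NULL on failure; ctypes surfaces
    # that as a falsy pointer, so fail loudly here instead of later.
    model = llama_cpp.llama_load_model_from_file(path.encode("utf-8"), params)
    if not model:
        raise RuntimeError(f"Failed to load model from {path}")
    return model
```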
Andrei Betlen | b83728ad1e | Update llama.cpp | 2023-07-21 12:33:27 -04:00
Andrei Betlen | 01435da740 | Update llama.cpp | 2023-07-20 18:54:25 -04:00
Andrei Betlen | d10ce62714 | Revert ctypes argtype change | 2023-07-20 18:51:53 -04:00
Vinicius | a8551477f5 | Update llama_cpp.py - Fix c_char_p to Array[c_char_p] and c_float to Array[c_float] | 2023-07-20 17:29:11 -03:00
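The underlying issue here is a general ctypes one: when the C parameter is an array (`float *`, `const char **`), the Python side must pass an array or pointer type, not the scalar `c_float` or a single `c_char_p`. An illustrative, self-contained example of the difference; the parameter names are hypothetical and not the binding's actual declarations:

```python
import ctypes

# A C parameter `float *embd` needs an array/pointer of c_float,
# not a scalar c_float:
FloatArray = ctypes.c_float * 4
embd = FloatArray(0.1, 0.2, 0.3, 0.4)

# A C parameter `const char **paths` likewise needs an array of c_char_p:
CharPArray = ctypes.c_char_p * 2
paths = CharPArray(b"a.gguf", b"b.gguf")

# When declaring argtypes for such a function, use the pointer form, e.g.:
# some_func.argtypes = [ctypes.POINTER(ctypes.c_float), ctypes.POINTER(ctypes.c_char_p)]
print(list(embd), list(paths))
```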
Andrei Betlen | e4f9db37db | Fix context_params struct layout | 2023-07-15 15:34:55 -04:00
Andrei Betlen | f0797a6054 | Merge branch main into custom_rope | 2023-07-15 15:11:01 -04:00