Hash | Author | Date | Message
---- | ------ | ---- | -------
aad4b17f52 | Andrei Betlen | 2023-06-06 16:23:55 -04:00 | Update llama.cpp
7b57420ea9 | Andrei Betlen | 2023-06-05 18:17:29 -04:00 | Update llama.cpp
fafe47114c | Andrei Betlen | 2023-05-21 17:47:21 -04:00 | Update llama.cpp
01a010be52 | Andrei Betlen | 2023-05-19 11:59:33 -04:00 | Fix llama_cpp and Llama type signatures. Closes #221
61d58e7b35 | Andrei Betlen | 2023-05-17 15:26:38 -04:00 | Check for CUDA_PATH before adding
e9794f91f2 | Aneesh Joy | 2023-05-17 18:04:58 +01:00 | Fixd CUBLAS dll load issue in Windows
cbac19bf24 | Andrei Betlen | 2023-05-15 09:15:01 -04:00 | Add winmode arg only on windows if python version supports it
c804efe3f0 | Andrei Betlen | 2023-05-14 22:08:11 -04:00 | Fix obscure Wndows DLL issue. Closes #208
cdf59768f5 | Andrei Betlen | 2023-05-14 00:04:22 -04:00 | Update llama.cpp
7a536e86c2 | Andrei Betlen | 2023-05-12 14:28:22 -04:00 | Allow model to tokenize strings longer than context length and set add_bos. Closes #92
8dfde63255 | Andrei Betlen | 2023-05-07 19:30:14 -04:00 | Fix return type
3fbda71790 | Andrei Betlen | 2023-05-07 03:04:22 -04:00 | Fix mlock_supported and mmap_supported return type
7c3743fe5f | Andrei Betlen | 2023-05-07 00:12:47 -04:00 | Update llama.cpp
b5f3e74627 | Andrei Betlen | 2023-05-05 14:22:55 -04:00 | Add return type annotations for embeddings and logits
3e28e0e50c | Andrei Betlen | 2023-05-05 14:12:26 -04:00 | Fix: runtime type errors
e24c3d7447 | Andrei Betlen | 2023-05-05 14:05:31 -04:00 | Prefer explicit imports
40501435c1 | Andrei Betlen | 2023-05-05 14:04:12 -04:00 | Fix: types
6702d2abfd | Andrei Betlen | 2023-05-05 14:00:30 -04:00 | Fix candidates type
5e7ddfc3d6 | Andrei Betlen | 2023-05-05 13:54:22 -04:00 | Fix llama_cpp types
b6a9a0b6ba | Andrei Betlen | 2023-05-05 12:22:27 -04:00 | Add types for all low-level api functions
1d47cce222 | Andrei Betlen | 2023-05-03 09:33:30 -04:00 | Update llama.cpp
f97ff3c5bb | Matt Hoffner | 2023-05-01 20:40:06 -07:00 | Update llama_cpp.py
350a1769e1 | Andrei Betlen | 2023-05-01 14:47:55 -04:00 | Update sampling api
7837c3fdc7 | Andrei Betlen | 2023-05-01 14:02:06 -04:00 | Fix return types and import comments
80184a286c | Andrei Betlen | 2023-05-01 10:44:28 -04:00 | Update llama.cpp
ea0faabae1 | Andrei Betlen | 2023-04-28 15:32:43 -04:00 | Update llama.cpp
9339929f56 | Andrei Betlen | 2023-04-26 20:00:54 -04:00 | Update llama.cpp
cbd26fdcc1 | Andrei Betlen | 2023-04-25 19:03:41 -04:00 | Update llama.cpp
02cf881317 | Andrei Betlen | 2023-04-24 09:30:10 -04:00 | Update llama.cpp
e99caedbbd | Andrei Betlen | 2023-04-22 19:50:28 -04:00 | Update llama.cpp
1eb130a6b2 | Andrei Betlen | 2023-04-21 17:40:27 -04:00 | Update llama.cpp
95c0dc134e | Andrei Betlen | 2023-04-18 23:44:46 -04:00 | Update type signature to allow for null pointer to be passed.
35abf89552 | Andrei Betlen | 2023-04-18 01:30:04 -04:00 | Add bindings for LoRA adapters. Closes #88
005c78d26c | Andrei Betlen | 2023-04-12 14:29:00 -04:00 | Update llama.cpp
9f1e565594 | Andrei Betlen | 2023-04-11 11:59:03 -04:00 | Update llama.cpp
2559e5af9b | Mug | 2023-04-10 17:27:17 +02:00 | Changed the environment variable name into "LLAMA_CPP_LIB"
ee71ce8ab7 | Mug | 2023-04-10 17:12:25 +02:00 | Make windows users happy (hopefully)
cf339c9b3c | Mug | 2023-04-10 17:06:58 +02:00 | Better custom library debugging
4132293d2d | Mug | 2023-04-10 17:00:42 +02:00 | Merge branch 'main' of https://github.com/abetlen/llama-cpp-python into local-lib
76131d5bb8 | Mug | 2023-04-10 17:00:35 +02:00 | Use environment variable for library override
c3c2623e8b | Andrei Betlen | 2023-04-09 22:01:33 -04:00 | Update llama.cpp
38f442deb0 | Andrei Betlen | 2023-04-08 15:05:33 -04:00 | Bugfix: Wrong size of embeddings. Closes #47
ae3e9c3d6f | Andrei Betlen | 2023-04-08 02:45:21 -04:00 | Update shared library extension for macos
e3ea354547 | Mug | 2023-04-05 14:23:01 +02:00 | Allow local llama library usage
51dbcf2693 | Andrei Betlen | 2023-04-04 22:36:59 -04:00 | Bugfix: wrong signature for quantize function
a0758f0077 | MillionthOdin16 | 2023-04-03 13:06:50 -04:00 | Update llama_cpp.py with PR requests: lib_base_name and load_shared_library to _lib_base_name and _load_shared_library
a40476e299 | MillionthOdin16 | 2023-04-02 21:50:13 -04:00 | Update llama_cpp.py: make shared library code more robust with some platform specific functionality and more descriptive errors when failures occur
1ed8cd023d | Andrei Betlen | 2023-04-02 13:33:49 -04:00 | Update llama_cpp and add kv_cache api support
49c8df369a | Andrei Betlen | 2023-03-31 03:25:12 -04:00 | Fix type signature of token_to_str
670d390001 | Andrei Betlen | 2023-03-31 03:20:15 -04:00 | Fix ctypes typing issue for Arrays