iyubondyrev
e6bbfb863c
examples: fix quantize example (#1387)
...
@iyubondyrev thank you!
2024-04-27 20:48:47 -04:00
anil
1eaace8ea3
Fix low_level_api_chat_cpp example to match current API (#1086)
...
* Fix low_level_api_chat_cpp to match current API
* Use None instead of an empty string so that the default prompt template can be used if no prompt is provided
---------
Co-authored-by: Anil Pathak <anil@heyday.com>
2024-01-15 10:46:35 -05:00
Jonathan Soma
cfd698c75c
Update low_level_api_llama_cpp.py to match current API (#1023)
2023-12-18 15:59:11 -05:00
zocainViken
6dde6bd09c
Bug fixing (#925)
2023-11-20 12:31:52 -05:00
Andrei Betlen
f4090a0bb2
Add NUMA support; low-level API users must now explicitly call llama_backend_init at the start of their programs.
2023-09-13 23:00:43 -04:00
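This commit changed the low-level API contract, so the requirement is worth spelling out. A minimal sketch, assuming the bindings of this era in which llama_backend_init took a single NUMA flag (later releases moved NUMA configuration into a separate call):

    import llama_cpp

    # Must run once, before any other llama_* call; pass True to enable NUMA optimizations.
    llama_cpp.llama_backend_init(False)

    # ... load a model and run inference as before ...

    # Optional cleanup when the program exits.
    llama_cpp.llama_backend_free()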
Juarez Bochi
20ac434d0f
Fix low-level API examples
2023-09-07 17:50:47 -04:00
Andrei
2adf6f3f9a
Merge pull request #265 from dmahurin/fix-from-bytes-byteorder
...
fix "from_bytes() missing required argument 'byteorder'"
2023-05-26 12:53:06 -04:00
Andrei
34ad71f448
Merge pull request #274 from dmahurin/fix-missing-antiprompt
...
low_level_api_chat_cpp.py: Fix missing antiprompt output in chat.
2023-05-26 12:52:34 -04:00
Don Mahurin
0fa2ec4903
low_level_api_chat_cpp.py: Fix missing antiprompt output in chat.
2023-05-26 06:54:28 -07:00
Don Mahurin
d6a7adb17a
fix "missing 1 required positional argument: 'min_keep'"
2023-05-23 06:42:22 -07:00
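This error is the usual symptom of an upstream signature change: llama.cpp's sampling functions gained a min_keep parameter (the minimum number of candidate tokens a sampler must keep), so older example calls stopped matching the wrappers. A hedged sketch of the kind of call the fix updates; ctx, candidates_p, and top_k are illustrative names, not necessarily those used in the example:

    import llama_cpp

    # Before the fix (raises "missing 1 required positional argument: 'min_keep'"):
    #   llama_cpp.llama_sample_top_k(ctx, candidates_p, top_k)
    # After: pass min_keep explicitly; 1 keeps at least one candidate token.
    llama_cpp.llama_sample_top_k(ctx, candidates_p, top_k, min_keep=1)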
Don Mahurin
327eedbfe1
fix "from_bytes() missing required argument 'byteorder'"
2023-05-23 00:20:34 -07:00
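The underlying issue is plain Python: before 3.11, int.from_bytes() has no default byteorder, so calls that omit it raise a TypeError. A minimal illustration of the failure and the fix; the four-byte value here is made up for the example:

    data = b"\x01\x00\x00\x00"

    # value = int.from_bytes(data)                    # TypeError on Python < 3.11
    value = int.from_bytes(data, byteorder="little")  # explicit byteorder works on every version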
Mug
eaf9f19aa9
Fix LoRA
2023-05-08 15:27:42 +02:00
Mug
2c0d9b182c
Fix session loading and saving in the low-level chat example
2023-05-08 15:27:03 +02:00
Mug
fd80ddf703
Fix a bug caused by a wrong type
2023-05-06 22:22:28 +02:00
Mug
996f63e9e1
Add UTF-8 handling to the chat example
2023-05-06 15:16:58 +02:00
Mug
3ceb47b597
Fix mirostat requiring c_float
2023-05-06 13:35:50 +02:00
Mug
9797394c81
Fix wrong parsed type for logit_bias
2023-05-06 13:27:52 +02:00
Mug
1895c11033
Rename postfix to suffix to match upstream
2023-05-06 13:18:25 +02:00
Mug
0e9f227afd
Update low-level examples
2023-05-04 18:33:08 +02:00
Mug
c39547a986
Detect multi-byte responses and wait
2023-04-28 12:50:30 +02:00
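This commit and c4a8491d42 below address the same UTF-8 hazard: a token boundary can split a multi-byte character, so decoding each token's bytes eagerly fails. A hedged sketch of the buffer-and-wait approach the subject describes; token_bytes_stream is an illustrative stand-in for the model's byte output, not a name from the example:

    buf = b""
    for token_bytes in token_bytes_stream:
        buf += token_bytes
        try:
            print(buf.decode("utf-8"), end="", flush=True)
            buf = b""
        except UnicodeDecodeError:
            pass  # incomplete multi-byte sequence; wait for more bytes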
Mug
5f81400fcb
Also ignore errors on input prompts
2023-04-26 14:45:51 +02:00
Mug
3c130f00ca
Remove try/except from chat
2023-04-26 14:38:53 +02:00
Mug
c4a8491d42
Fix decode errors permanently
2023-04-26 14:37:06 +02:00
Mug
53d17ad003
Fix wrong type for the end-of-text token, and fix n_predict behaviour
2023-04-17 14:45:28 +02:00
Mug
3bb45f1658
More reasonable defaults
2023-04-10 16:38:45 +02:00
Mug
0cccb41a8f
Add iterative search to prevent instructions from being echoed, add ignore-eos and no-mmap options, and fix a bug that echoed one character too many
2023-04-10 16:35:38 +02:00
Andrei Betlen
196650ccb2
Update model paths to make it clearer that they should point to a file
2023-04-09 22:45:55 -04:00
Mug
16fc5b5d23
Improve interoperability with the original llama.cpp; arguments now work
2023-04-07 13:32:19 +02:00
Mug
10c7571117
Fix excess newlines; arguments are next.
...
Still needs packaging work so you could run "python -m llama_cpp.examples." etc.
2023-04-06 15:33:22 +02:00
Mug
085cc92b1f
Better llama.cpp interoperability
...
Still has some excess-newline issues, so WIP
2023-04-06 15:30:57 +02:00
Mug
283e59c5e9
Fix a bug where init_break was not set when exiting via antiprompt, among other fixes.
2023-04-05 14:47:24 +02:00
Mug
99ceecfccd
Move to new examples directory
2023-04-05 14:28:02 +02:00
Andrei Betlen
b1babcf56c
Add quantize example
2023-04-05 04:17:26 -04:00
Andrei Betlen
c8e13a78d0
Re-organize examples folder
2023-04-05 04:10:13 -04:00