baalajimaestro / llama.cpp
llama.cpp / llama_cpp (at commit b9b6dfd23f)
Latest commit: 76a82babef by MillionthOdin16, 2023-04-05 17:44:53 -04:00
    Set n_batch to the default value of 8. I think this is left over from when n_ctx was missing and n_batch was 2048.
server/
    Set n_batch to the default value of 8. I think this is left over from when n_ctx was missing and n_batch was 2048.  (2023-04-05 17:44:53 -04:00)
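This commit drops the server's default n_batch back to 8 now that n_ctx is set explicitly. A minimal sketch of how the two parameters reach the Llama wrapper, assuming the llama-cpp-python constructor of this era; the model path and prompt are illustrative, not from the repository:

```python
from llama_cpp import Llama

# Sketch of the two settings the commit touches: n_ctx (context window)
# and n_batch (tokens evaluated per batch during prompt processing).
llm = Llama(
    model_path="./models/7B/ggml-model.bin",  # illustrative path
    n_ctx=512,   # context size; previously missing from the server config
    n_batch=8,   # restored default; 2048 was the stop-gap value
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```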
__init__.py
    Black formatting.  (2023-03-24 14:59:29 -04:00)
llama.py
    Make Llama instance pickleable. Closes #27.  (2023-04-05 06:52:17 -04:00)
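A native llama context held through ctypes cannot be pickled directly, so pickling support is typically added by serializing only the constructor arguments and rebuilding the context on load. A hedged sketch of that pattern; the class and field names here are illustrative, not the commit's exact code:

```python
import pickle

# Minimal sketch: persist only plain constructor kwargs, drop the
# unpicklable native handle, and re-create it when unpickling.
class PickleableWrapper:
    def __init__(self, model_path: str, n_ctx: int = 512, n_batch: int = 8):
        self.model_path = model_path
        self.n_ctx = n_ctx
        self.n_batch = n_batch
        self._ctx = object()  # stand-in for the native llama context

    def __getstate__(self):
        # Exclude self._ctx; keep only what __init__ needs.
        return {"model_path": self.model_path,
                "n_ctx": self.n_ctx,
                "n_batch": self.n_batch}

    def __setstate__(self, state):
        self.__init__(**state)  # rebuild the native context

w2 = pickle.loads(pickle.dumps(PickleableWrapper("model.bin")))
print(w2.n_ctx)  # 512
```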
llama_cpp.py
    Bugfix: wrong signature for quantize function.  (2023-04-04 22:36:59 -04:00)
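llama_cpp.py holds the raw ctypes bindings, where a signature that drifts from the C header corrupts arguments silently rather than failing loudly. A sketch of how such a binding is declared, assuming the llama.h prototype of this period, int llama_model_quantize(const char *fname_inp, const char *fname_out, int itype); the library path is illustrative:

```python
import ctypes

# argtypes/restype must match the C declaration exactly; a mismatched
# signature is the kind of bug this commit fixes.
lib = ctypes.CDLL("./libllama.so")  # illustrative library path

lib.llama_model_quantize.argtypes = [
    ctypes.c_char_p,  # fname_inp
    ctypes.c_char_p,  # fname_out
    ctypes.c_int,     # itype
]
lib.llama_model_quantize.restype = ctypes.c_int

def llama_model_quantize(fname_inp: bytes, fname_out: bytes, itype: int) -> int:
    return lib.llama_model_quantize(fname_inp, fname_out, itype)
```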
llama_types.py
    Bugfix for Python 3.7.  (2023-04-05 04:37:33 -04:00)
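llama_types.py defines the TypedDict-style response types, and TypedDict and Literal only joined the standard typing module in Python 3.8, so 3.7 support is usually restored with a guarded fallback to the typing_extensions backport. A minimal sketch of that pattern; whether the commit used exactly this guard is an assumption:

```python
import sys

# TypedDict and Literal are absent from `typing` before 3.8; fall back
# to the typing_extensions backport on older interpreters.
if sys.version_info >= (3, 8):
    from typing import TypedDict, Literal
else:
    from typing_extensions import TypedDict, Literal

class CompletionChoice(TypedDict):
    text: str
    finish_reason: Literal["stop", "length"]
```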