llama.cpp/llama_cpp
Latest commit f37456133a by Andrei
Merge pull request #108 from eiery/main: Update n_batch default to 512 to match upstream llama.cpp
2023-04-24 13:48:09 -04:00
server          Add use_mmap flag to server                2023-04-19 15:57:46 -04:00
__init__.py     Black formatting                           2023-03-24 14:59:29 -04:00
llama.py        Merge pull request #108 from eiery/main    2023-04-24 13:48:09 -04:00
llama_cpp.py    Update llama.cpp                           2023-04-24 09:30:10 -04:00
llama_types.py  Bugfix for Python3.7                       2023-04-05 04:37:33 -04:00
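The merged change raises the default `n_batch` (the number of prompt tokens evaluated per batch) in `llama.py` to 512, matching upstream llama.cpp. Below is a minimal sketch of setting the parameter explicitly through the high-level `Llama` class; the model path and prompt are placeholders for illustration, not files or examples from this repository:

```python
# Minimal sketch: overriding n_batch when constructing a Llama instance.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/ggml-model.bin",  # placeholder path to a GGML model file
    n_ctx=512,    # context window size
    n_batch=512,  # prompt tokens processed per batch; the new default matches upstream llama.cpp
)

# Simple completion call to confirm the instance works.
output = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(output["choices"][0]["text"])
```

A larger `n_batch` generally speeds up prompt ingestion at the cost of more memory per evaluation step; leaving it at the default now gives the same behavior as upstream llama.cpp.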