llama.cpp / llama_cpp / server (at commit 1d177aaaef)

Latest commit: 22d77eefd2 by Andrei Betlen, "feat: Add option to enable flash_attn to Llama params and ModelSettings" (2024-04-30 09:29:16 -04:00)

__init__.py   llama_cpp server: app is now importable, still runnable as a module  (2023-04-29 11:41:25 -07:00)
__main__.py   feat: Add support for yaml based configs  (2024-04-10 02:47:01 -04:00)
app.py        feat: add disable_ping_events flag (#1257)  (2024-04-17 10:08:19 -04:00)
cli.py        Fix python3.8 support  (2024-01-19 08:17:49 -05:00)
errors.py     misc: Format  (2024-02-28 14:27:40 -05:00)
model.py      feat: Generic Chat Formats, Tool Calling, and Huggingface Pull Support for Multimodal Models (Obsidian, LLaVA1.6, Moondream) (#1147)  (2024-04-30 01:35:38 -04:00)
settings.py   feat: Add option to enable flash_attn to Llama params and ModelSettings  (2024-04-30 09:29:16 -04:00)
types.py      feat: Add logprobs support to chat completions (#1311)  (2024-03-31 13:30:13 -04:00)
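
The most recent changes in this directory (flash_attn in ModelSettings, and YAML-based configs for __main__.py) suggest one way the server might be launched programmatically. The following is a minimal sketch based only on the commit messages and file names above and the general llama_cpp.server layout; the model path, field values, and the exact create_app signature are assumptions rather than details taken from this listing.

```python
# Minimal sketch (not taken from this listing): start the OpenAI-compatible server
# with flash attention enabled for the loaded model. The `flash_attn` field name is
# assumed from commit 22d77eefd2; the model path below is hypothetical.
import uvicorn

from llama_cpp.server.app import create_app
from llama_cpp.server.settings import ModelSettings, ServerSettings

server_settings = ServerSettings(host="0.0.0.0", port=8000)
model_settings = [
    ModelSettings(
        model="./models/example.gguf",  # hypothetical model path
        n_gpu_layers=-1,                # offload all layers to the GPU
        flash_attn=True,                # option added in commit 22d77eefd2
    )
]

app = create_app(server_settings=server_settings, model_settings=model_settings)
uvicorn.run(app, host=server_settings.host, port=server_settings.port)
```

Per the __main__.py entry, the same settings can presumably also be supplied as a YAML config file and passed to `python -m llama_cpp.server --config_file config.yaml`, though the config schema itself is not shown in this listing.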