ollama/.gitignore
Daniel Hiltgen 58d95cc9bd Switch back to subprocessing for llama.cpp
This should resolve a number of memory leak and stability defects by allowing
us to isolate llama.cpp in a separate process, shut it down when idle, and
gracefully restart it if it has problems.  This also serves as a first step
toward running multiple copies to support multiple models concurrently.
2024-04-01 16:48:18 -07:00
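The pattern the commit describes (run llama.cpp as a child process, restart it after a crash, and stop it when idle) might look roughly like the Go sketch below. Everything here, including the llamaBinary path, the runner type, and the timeouts, is an illustrative assumption and not taken from the ollama codebase.

package main

import (
	"log"
	"os/exec"
	"sync"
	"time"
)

const (
	llamaBinary = "./llama-server" // assumed path to a llama.cpp server build
	idleTimeout = 5 * time.Minute  // assumed idle window before shutdown
)

type runner struct {
	mu       sync.Mutex
	cmd      *exec.Cmd // nil whenever the subprocess is not running
	lastUsed time.Time
}

// ensureRunning starts the subprocess if needed; because a crashed child
// is reset to nil by the reaper goroutine below, a later call here
// transparently restarts it.
func (r *runner) ensureRunning() error {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.lastUsed = time.Now()
	if r.cmd != nil {
		return nil // already running
	}
	cmd := exec.Command(llamaBinary)
	if err := cmd.Start(); err != nil {
		return err
	}
	r.cmd = cmd
	go func() {
		_ = cmd.Wait() // reap on exit, whether crash or idle kill
		r.mu.Lock()
		if r.cmd == cmd {
			r.cmd = nil // mark stopped so the next request restarts it
		}
		r.mu.Unlock()
	}()
	return nil
}

// reapIdle periodically kills the subprocess once it has sat idle past
// the timeout, releasing the memory llama.cpp holds between requests.
func (r *runner) reapIdle() {
	for range time.Tick(30 * time.Second) {
		r.mu.Lock()
		if r.cmd != nil && time.Since(r.lastUsed) > idleTimeout {
			log.Println("idle, stopping llama.cpp subprocess")
			_ = r.cmd.Process.Kill()
		}
		r.mu.Unlock()
	}
}

func main() {
	r := &runner{}
	go r.reapIdle()
	if err := r.ensureRunning(); err != nil {
		log.Fatal(err)
	}
	select {} // a real server would handle inference requests here
}

Keeping the model runner in its own process means a leak or crash inside llama.cpp costs only a restart of the child, and the idle kill returns the memory holding the model weights to the operating system.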

.DS_Store
.vscode
.env
.venv
.swp
dist
ollama
ggml-metal.metal
.cache
*.exe
.idea
test_data
*.crt
llm/build