ollama/docs
Latest commit 42998d797d by Bruce MacDonald (2023-08-30 16:35:03 -04:00):

subprocess llama.cpp server (#401)

* remove C code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop the llama runner when the app stops
* remove sample count and duration metrics
* use go generate to get the libraries
* use a tmp dir for running the llm
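The commit above replaces in-process C bindings with a llama.cpp server binary run as a child process. Below is a minimal Go sketch of that pattern, not ollama's actual implementation: the binary name, port, and flags are assumptions for illustration. It ties the subprocess lifetime to a context, so the runner stops when the app stops, and it runs out of a temporary directory.

```go
// Minimal sketch of running a llama.cpp server as a subprocess whose
// lifetime is tied to a context. The binary name, port, and flags are
// hypothetical; this is not ollama's actual code.
package main

import (
	"context"
	"os"
	"os/exec"
	"os/signal"
	"path/filepath"
)

func main() {
	// Cancel the context when the app receives an interrupt, so the
	// llama runner stops when the app stops.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop()

	// Run the server out of a temporary directory ("tmp dir for
	// running llm"), cleaned up on exit.
	dir, err := os.MkdirTemp("", "llm")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// exec.CommandContext kills the child process when ctx is
	// cancelled, tying the runner's lifetime to the app.
	cmd := exec.CommandContext(ctx, filepath.Join(dir, "server"), "--port", "8080")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil && ctx.Err() == nil {
		panic(err)
	}
}
```

Per the commit bullets, the actual change fetches the llama.cpp libraries at build time via go generate; this sketch simply assumes a prebuilt server binary has already been unpacked into the temporary directory.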
Name            Last commit message                                     Last commit date
tutorials/      Update langchainpy.md                                   2023-08-14 12:12:56 +03:00
api.md          update orca to orca-mini                                2023-08-27 13:26:30 -04:00
development.md  subprocess llama.cpp server (#401)                      2023-08-30 16:35:03 -04:00
faq.md          cmd: use environment variables for server options       2023-08-10 14:17:53 -07:00
modelfile.md    treat stop as stop sequences, not exact tokens (#442)   2023-08-30 11:53:42 -04:00
README.md       Add tutorials for using Langchain with ollama           2023-08-10 21:27:37 -07:00
tutorials.md    resolving bmacd comment                                 2023-08-11 13:51:44 -07:00