baalajimaestro/ollama
llm/llama.cpp/generate_linux.go (at commit d966b730ac)
Go · 4 lines · 47 B
History:
2023-09-12 15:04:35 +00:00 · first pass at linux gpu support (#454)
  * linux gpu support
  * handle multiple gpus
  * add cuda docker image (#488)
  Co-authored-by: Michael Yang <mxyng@pm.me>
2023-11-29 19:00:37 +00:00 · Adapted rocm support to cgo based llama.cpp

File contents:

package llm

//go:generate bash ./gen_linux.sh