Jeffrey Morgan
ab0668293c
llm: fix build on amd64
2023-10-06 14:39:54 -07:00
Bruce MacDonald
5d22319a2c
rename server subprocess (#700)
- this makes it easier to see that the subprocess is associated with ollama
2023-10-06 10:15:42 -04:00
Michael Yang
058d0cd04b
silence warm up log
2023-09-21 14:53:33 -07:00
Michael Yang
ee1c994d15
update submodule (#567)
2023-09-21 16:22:23 -04:00
Michael Yang
a9ed7cc6aa
rename generate.go
2023-09-20 14:42:17 -07:00
Michael Yang
6c6a31a1e8
embed libraries using cmake
2023-09-20 14:41:57 -07:00
Bruce MacDonald
fc6ec356fc
remove libcuda.so
2023-09-20 20:36:14 +01:00
Bruce MacDonald
1255bc9b45
only package 11.8 runner
2023-09-20 20:00:41 +01:00
Bruce MacDonald
b9bb5ca288
use cuda_version
2023-09-20 17:58:16 +01:00
Bruce MacDonald
4e8be787c7
pack in cuda libs
2023-09-20 17:40:42 +01:00
Bruce MacDonald
66003e1d05
subprocess improvements (#524)
* subprocess improvements
- increase start-up timeout
- when a runner fails to start, fail rather than timing out
- try runners in order rather than choosing one runner
- embed metal runner in metal dir rather than gpu
- refactor logging and error messages
* Update llama.go
* Update llama.go
* simplify by using glob
2023-09-18 15:16:32 -04:00
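
The subprocess-improvements commit above describes finding runner binaries with a glob and trying them in order, failing a candidate immediately instead of waiting out a timeout. A minimal Go sketch of that idea; the directory layout, binary name, and helper functions here are assumptions for illustration, not the actual ollama code:

// runner_sketch.go: hypothetical "try runners in order" + "simplify by using glob".
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// startRunner launches one candidate runner binary as a subprocess.
func startRunner(bin string) (*exec.Cmd, error) {
	cmd := exec.Command(bin)
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return cmd, nil
}

// startFirstWorkingRunner globs for runner binaries (metal, cuda, cpu builds)
// and tries each in order, moving on as soon as one fails to start.
func startFirstWorkingRunner(baseDir string) (*exec.Cmd, error) {
	candidates, err := filepath.Glob(filepath.Join(baseDir, "*", "server"))
	if err != nil {
		return nil, err
	}
	lastErr := fmt.Errorf("no runner binaries found in %s", baseDir)
	for _, bin := range candidates {
		cmd, err := startRunner(bin)
		if err != nil {
			lastErr = fmt.Errorf("runner %s failed to start: %w", bin, err)
			continue
		}
		return cmd, nil
	}
	return nil, lastErr
}

func main() {
	if _, err := startFirstWorkingRunner("llm/llama.cpp/build"); err != nil {
		fmt.Println(err)
	}
}
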
Bruce MacDonald
2540c9181c
support for packaging in multiple cuda runners (#509)
* enable packaging multiple cuda versions
* use nvcc cuda version if available
---------
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-09-14 15:08:13 -04:00
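
The commit above mentions using the nvcc-reported CUDA version when available. A rough Go sketch of that check, assuming nvcc's usual "release X.Y" output line and an 11.8 fallback; both the regexp and the fallback are assumptions, not ollama's actual logic:

// nvcc_version_sketch.go: hypothetical "use nvcc cuda version if available".
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

// cudaVersion returns the toolkit version reported by nvcc, or a default
// when nvcc is not on the PATH.
func cudaVersion() string {
	out, err := exec.Command("nvcc", "--version").Output()
	if err != nil {
		return "11.8"
	}
	// nvcc prints a line like: "Cuda compilation tools, release 11.8, V11.8.89"
	re := regexp.MustCompile(`release (\d+\.\d+)`)
	if m := re.FindSubmatch(out); m != nil {
		return string(m[1])
	}
	return "11.8"
}

func main() {
	fmt.Println("cuda version:", cudaVersion())
}
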
Bruce MacDonald
f59c4d03f7
fix ggml arm64 cuda build (#520)
2023-09-12 17:06:48 -04:00
Bruce MacDonald
f221637053
first pass at linux gpu support (#454)
* linux gpu support
* handle multiple gpus
* add cuda docker image (#488)
---------
Co-authored-by: Michael Yang <mxyng@pm.me>
2023-09-12 11:04:35 -04:00
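
For "handle multiple gpus", one plausible approach is to enumerate devices by shelling out to nvidia-smi. The sketch below is illustrative only and not ollama's actual GPU-detection code:

// gpu_detect_sketch.go: hypothetical multi-GPU enumeration on Linux.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listGPUs returns one line per visible GPU, e.g. "GPU 0: NVIDIA A10G (...)".
func listGPUs() ([]string, error) {
	out, err := exec.Command("nvidia-smi", "-L").Output()
	if err != nil {
		return nil, fmt.Errorf("no NVIDIA GPU detected: %w", err)
	}
	return strings.Split(strings.TrimSpace(string(out)), "\n"), nil
}

func main() {
	gpus, err := listGPUs()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("found %d gpu(s)\n", len(gpus))
}
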
Bruce MacDonald
09dd2aeff9
GGUF support (#441)
2023-09-07 13:55:37 -04:00
Jeffrey Morgan
61dda6a5e0
set minimum CMAKE_OSX_DEPLOYMENT_TARGET to 11.0
2023-09-06 19:56:50 -04:00
Jeffrey Morgan
213ffdb548
macos amd64 compatibility fixes
2023-09-05 21:33:31 -04:00
Bruce MacDonald
d18282bfda
metal: add missing barriers for mul-mat (#469)
2023-09-05 19:37:13 -04:00
Jeffrey Morgan
7fa6e51686
generate binary dependencies based on GOARCH on macos (#459)
2023-09-05 12:53:57 -04:00
Bruce MacDonald
42998d797d
subprocess llama.cpp server (#401)
* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries
* tmp dir for running llm
2023-08-30 16:35:03 -04:00
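
The bullets in the commit above (request context, stopping the runner with the app, running from a tmp dir) map naturally onto exec.CommandContext and os.MkdirTemp. A hedged Go sketch, with the binary path and flags assumed rather than taken from the repository:

// subprocess_sketch.go: hypothetical llama.cpp server subprocess management.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func runLlamaServer(ctx context.Context, serverBin, modelPath string) error {
	// Run the server out of a temporary directory so it can execute from
	// anywhere the app is installed (mirrors "tmp dir for running llm").
	workDir, err := os.MkdirTemp("", "ollama-llm")
	if err != nil {
		return err
	}
	defer os.RemoveAll(workDir)

	// CommandContext kills the subprocess when ctx is cancelled, so the
	// runner stops when the request finishes or the app shuts down.
	cmd := exec.CommandContext(ctx, serverBin, "--model", modelPath)
	cmd.Dir = workDir
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	if err := runLlamaServer(ctx, filepath.Join("llm", "llama.cpp", "server"), "model.gguf"); err != nil {
		fmt.Println(err)
	}
}
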
Jeffrey Morgan
177b69a211
add missing entries for 34B
2023-08-25 18:35:35 -07:00
Michael Yang
7a378f8b66
patch llama.cpp for 34B
2023-08-25 10:06:55 -07:00
Michael Yang
f7b613332c
update llama.cpp
2023-08-14 15:47:00 -07:00
Jeffrey Morgan
22885aeaee
update llama.cpp to f64d44a
2023-08-12 22:47:15 -04:00
Michael Yang
fccf8d179f
partial decode ggml bin for more info
2023-08-10 09:23:10 -07:00
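
"Partial decode" here means reading only the file header rather than loading the whole model. A minimal, hypothetical Go sketch that inspects just the 4-byte magic of a model file; the magic values are the common ggml-family markers, and ollama's real decoder reads more of the header than this:

// ggml_sketch.go: hypothetical partial decode — identify the container format
// from the leading magic without reading the weights.
package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

func fileFormat(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var magic uint32
	if err := binary.Read(f, binary.LittleEndian, &magic); err != nil {
		return "", err
	}

	switch magic {
	case 0x67676d6c: // ggml (unversioned)
		return "ggml", nil
	case 0x67676d66: // ggmf
		return "ggmf", nil
	case 0x67676a74: // ggjt
		return "ggjt", nil
	case 0x46554747: // "GGUF" read as little-endian uint32
		return "gguf", nil
	default:
		return "", fmt.Errorf("unknown magic 0x%x", magic)
	}
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: ggml_sketch <model file>")
		return
	}
	format, err := fileFormat(os.Args[1])
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("model format:", format)
}
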