Michael Yang | 9685c34509 | 2024-05-06 15:24:01 -07:00
quantize any fp16/fp32 model
- FROM /path/to/{safetensors,pytorch}
- FROM /path/to/fp{16,32}.bin
- FROM model:fp{16,32}
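
The FROM sources listed in this commit message are Modelfile directives; a minimal sketch of the quantize-on-create flow might look like the following (the model name, the q4_K_M type, and the --quantize flag are illustrative assumptions, not part of the commit):

    # A minimal sketch, assuming ollama create supports --quantize and the
    # q4_K_M quantization type; the path and "my-model" name are placeholders.
    printf 'FROM /path/to/safetensors\n' > Modelfile
    # Per the commit message, FROM may also point at an fp16/fp32 GGUF file
    # (FROM /path/to/fp16.bin) or an existing fp16/fp32 model (FROM model:fp16).
    ollama create my-model -f Modelfile --quantize q4_K_M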

Patrick Devine | ce8ce82567 | 2024-04-23 20:17:04 -07:00
add mixtral 8x7b model conversion (#3859)

Patrick Devine | 9f8691c6c8 | 2024-04-15 11:26:42 -07:00
Add llama2 / torch models for ollama create (#3607)

Michael Yang | be517e491c | 2024-04-05 18:05:27 -07:00
no rope parameters

Patrick Devine | 3b6a9154dd | 2024-04-01 16:14:53 -07:00
Simplify model conversion (#3422)

Patrick Devine | 5a5efee46b | 2024-03-28 18:54:01 -07:00
Add gemma safetensors conversion (#3250)
Co-authored-by: Michael Yang <mxyng@pm.me>

Patrick Devine | 1b272d5bcd | 2024-03-26 13:04:17 -07:00
change github.com/jmorganca/ollama to github.com/ollama/ollama (#3347)

Michael Yang | 9ea492f1ce | 2024-03-11 09:41:01 -07:00
convert: fix shape

Michael Yang | 76bdebbadf | 2024-03-08 15:46:25 -08:00
decode ggla

Michael Yang | 18979ad4a1 | 2024-03-08 15:42:48 -08:00
convert: fix default shape

Patrick Devine | 2c017ca441 | 2024-03-06 21:01:51 -08:00
Convert Safetensors to an Ollama model (#2824)