ollama/convert

Latest commit: 9685c34509 by Michael Yang, 2024-05-06 15:24:01 -07:00
quantize any fp16/fp32 model

- FROM /path/to/{safetensors,pytorch}
- FROM /path/to/fp{16,32}.bin
- FROM model:fp{16,32}
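Per the commit message above, each supported FROM form can appear in a Modelfile and be quantized at create time. A minimal sketch, assuming the paths and model name below are illustrative placeholders (not from this repo):

```
# Modelfile sketch: the path below is a hypothetical local checkpoint
# directory containing safetensors or pytorch weights.
FROM /models/example-7b-safetensors

# Alternatively, point at an fp16/fp32 GGUF binary:
#   FROM /models/example-7b.fp16.bin
# or at an existing fp16/fp32 model tag:
#   FROM example:fp16
```

Quantization would then be requested when creating the model, e.g. with something like `ollama create example-q4 -f Modelfile --quantize q4_0` (flag name and quantization type shown here are assumptions; check the CLI help for the exact spelling in your version).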
File                       Last commit                                      Date
sentencepiece              Convert Safetensors to an Ollama model (#2824)   2024-03-06 21:01:51 -08:00
convert.go                 quantize any fp16/fp32 model                     2024-05-06 15:24:01 -07:00
gemma.go                   quantize any fp16/fp32 model                     2024-05-06 15:24:01 -07:00
llama.go                   quantize any fp16/fp32 model                     2024-05-06 15:24:01 -07:00
mistral.go                 quantize any fp16/fp32 model                     2024-05-06 15:24:01 -07:00
mixtral.go                 add mixtral 8x7b model conversion (#3859)        2024-04-23 20:17:04 -07:00
safetensors.go             Fix lint warnings                                2024-05-03 16:44:19 -07:00
sentencepiece_model.proto  Convert Safetensors to an Ollama model (#2824)   2024-03-06 21:01:51 -08:00
torch.go                   Fix lint warnings                                2024-05-03 16:44:19 -07:00