ollama/cmd
Jeffrey Morgan 423862042a
treat ollama run model < file as entire prompt, not prompt-per-line (#1126)
Previously, `ollama run` treated a non-terminal stdin (such as `ollama run model < file`) as containing one prompt per line. To run inference on a multi-line prompt, the only non-API workaround was to run `ollama run` interactively and wrap the prompt in `"""..."""`.

Now, `ollama run` treats a non-terminal stdin as containing a single prompt. For example, if `myprompt.txt` is a multi-line file, then `ollama run model < myprompt.txt` would treat `myprompt.txt`'s entire contents as the prompt.

Co-authored-by: Quinn Slack <quinn@slack.org>
2023-11-14 16:42:21 -05:00
cmd.go: treat ollama run model < file as entire prompt, not prompt-per-line (#1126) (2023-11-14 16:42:21 -05:00)
spinner.go: vendor in progress bar and change to bytes instead of bibytes (#130) (2023-07-19 17:24:03 -07:00)