Update as per bmacd

Signed-off-by: Matt Williams <m@technovangelist.com>
Matt Williams 2023-11-08 18:09:05 -06:00
commit 13086363bd
4 changed files with 8 additions and 7 deletions

@@ -1,5 +1,10 @@
 # Bash Shell examples
-When you review the examples on this site, it is possible to think that making use of AI with Ollama will be hard. You need an orchestrator, and vector database, complicated infrastructure, and more. But that is not always the case. Ollama is designed to be easy to use, and to be used in any environment.
-The two examples here show how to list the models and query them from a simple bash script.
+When calling `ollama`, you can pass it a file to run all the prompts in the file, one after the other:
+
+`ollama run llama2 < sourcequestions.txt`
+
+This concept is used in the following example.
+
+## Compare Models
+`comparemodels.sh` is a script that runs all the questions in `sourcequestions.txt` using any 4 models you choose that you have already pulled from the Ollama library or have created locally.
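
The new README's point is that no extra infrastructure is needed: a plain loop over `ollama run` covers the compare-models case. Here is a minimal sketch of that idea; the model names are assumptions, so substitute any models you have already pulled:

```bash
#!/usr/bin/env bash
# Minimal sketch of the compare-models idea: run the same questions
# file through several models in turn. The model names below are
# assumptions -- replace them with models you have pulled locally.
MODELS=("llama2" "mistral")

for MODEL in "${MODELS[@]}"; do
  echo "===== $MODEL ====="
  ollama run "$MODEL" < sourcequestions.txt
done
```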

@@ -1,4 +0,0 @@
-#!/usr/bin/env bash
-# Run a model and pass in a text file of questions
-ollama run llama2 < sourcequestions

@@ -30,7 +30,7 @@ for ITEM in "${SELECTIONS[@]}"; do
     ollama run "$ITEM" ""
     echo "--------------------------------------------------------------"
     echo "Running the questions through the model $ITEM"
-    COMMAND_OUTPUT=$(ollama run "$ITEM" --verbose < sourcequestions 2>&1| tee /dev/stderr)
+    COMMAND_OUTPUT=$(ollama run "$ITEM" --verbose < sourcequestions.txt 2>&1| tee /dev/stderr)
     # eval duration is sometimes listed in seconds and sometimes in milliseconds.
     # Add up the values for each model
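
The comment in this hunk notes that eval duration can be reported in either seconds or milliseconds. A hedged sketch of one way to normalize such values to seconds before summing; the sample durations are illustrative, not real `ollama --verbose` output:

```bash
#!/usr/bin/env bash
# Sketch: normalize durations like "850ms" or "2.34s" to seconds and sum
# them. The input values here are made up for illustration.
total=0
for d in "850ms" "2.34s"; do
  if [[ "$d" == *ms ]]; then
    # Strip the "ms" suffix and convert milliseconds to seconds.
    secs=$(echo "scale=4; ${d%ms} / 1000" | bc)
  else
    # Strip the "s" suffix; the value is already in seconds.
    secs=${d%s}
  fi
  total=$(echo "$total + $secs" | bc)
done
echo "total eval time: ${total}s"
```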