better readme

Signed-off-by: Matt Williams <m@technovangelist.com>
parent 5757925060
commit 80362fedce
3 changed files with 12 additions and 9 deletions

@@ -1,11 +1,5 @@

# Bash Shell examples
![Example Gif](https://uce9aa94fbc06b088ca05a92fe37.previews.dropboxusercontent.com/p/thumb/ACF0g3yrdu5-tvuw59Wil8B5bwuLkvWFFQNrYJzEkqvJnv4WfyuqcTGfXhXDfqfbemi5jGr9-bccO8r5VZxXrAeU1l_Plq99HCqV6b10thwwlaQCNbkXkw4YSF0YlYu-wu5A6Vn2SlrdcfiwTl6et-m7CPYx8ad2jSZXcPEozDUqXqB-f_zZNskASYzWwQko9n6UjMKx6qt54FYvIiW6n3ZiNVlM0GGt91FAA2Y0zD23aBlOlIAN8wH7qLznS2rZsn1n_7ukJMwegcEVud_XNPbG8Hn_13NtwkVsf4uWThknUpslNRmxWisqlRCaxZY71Me9wz3puH3nlpxtNlwoNAvQcXf0S4u_r1WLx22KwWqmvYFU41X2j_1Kum8amUrAv_5WVnOL6ctWnrbV4fauYfT9ClwgmLAtLoHwaQSXo2R2Kut_QIAkFIDAyMj9Fe9Ifj0/p.gif)

When you review the examples on this site, it is possible to think that making use of AI with Ollama will be hard. You need an orchestrator, a vector database, complicated infrastructure, and more. But that is not always the case. Ollama is designed to be easy to use in any environment.

The two examples here show how to list the models and query them from a simple bash script.
## Bulk Questions
`bulkquestions.sh` is a script that runs all the questions in `sourcequestions` using the llama2 model and outputs the answers.
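`sourcequestions` is just a plain text file with one question per line. A minimal sketch of creating such a file (these questions are invented, not the repository's actual contents) and of the redirection the script wraps:

```bash
#!/usr/bin/env bash
# Create a hypothetical sourcequestions file: one prompt per line
cat > sourcequestions <<'EOF'
Why is the sky blue?
How do tides work?
What is a black hole?
EOF

# Each line becomes one prompt when the file is redirected into the model:
#   ollama run llama2 < sourcequestions
grep -c '' sourcequestions   # counts the lines; prints 3
```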
## Compare Models
`comparemodels.sh` is a script that runs all the questions in `sourcequestions` using any four models of your choice that you have already pulled from the Ollama library or created locally.

@@ -1,3 +1,4 @@

```bash
#!/usr/bin/env bash

# Run a model and pass in a text file of questions
ollama run llama2 < sourcequestions
```

@@ -1,9 +1,14 @@

```bash
#! /usr/bin/env bash

# Compare multiple models by running them with the same questions

NUMBEROFCHOICES=4
SELECTIONS=()
declare -a SUMS=()

# Get the list of models
CHOICES=$(ollama list | awk '{print $1}')

# Select which models to run as a comparison
echo "Select $NUMBEROFCHOICES models to compare:"
select ITEM in $CHOICES; do
    if [[ -n $ITEM ]]; then
```

@@ -18,6 +23,7 @@ select ITEM in $CHOICES; do

```bash
    fi
done

# Loop through each of the selected models
for ITEM in "${SELECTIONS[@]}"; do
    echo "--------------------------------------------------------------"
    echo "Loading the model $ITEM into memory"
```
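The selection menu in this script is plain bash `select`, which builds its numbered menu from the whitespace-separated words in `CHOICES`. A minimal non-interactive sketch of the same pattern, with a hard-coded model list standing in for `ollama list` and the menu answers fed on stdin instead of typed:

```bash
#!/usr/bin/env bash
# Stand-in for CHOICES=$(ollama list | awk '{print $1}')
CHOICES=$'llama2\nmistral\ncodellama'

SELECTIONS=()
NUMBEROFCHOICES=2

# The here-string supplies the menu answers: "1" then "3"
select ITEM in $CHOICES; do
    if [[ -n $ITEM ]]; then
        SELECTIONS+=("$ITEM")
    else
        echo "Invalid selection" >&2
    fi
    (( ${#SELECTIONS[@]} >= NUMBEROFCHOICES )) && break
done <<< $'1\n3\n'

echo "${SELECTIONS[@]}"   # prints: llama2 codellama
```

Invalid menu input leaves `ITEM` empty rather than erroring, which is why the real script guards with `[[ -n $ITEM ]]` before recording a selection.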

@@ -26,6 +32,8 @@ for ITEM in "${SELECTIONS[@]}"; do

```bash
    echo "Running the questions through the model $ITEM"
    # Capture the combined stdout+stderr while still streaming it live via tee
    COMMAND_OUTPUT=$(ollama run "$ITEM" --verbose < sourcequestions 2>&1 | tee /dev/stderr)

    # eval duration is sometimes listed in seconds and sometimes in milliseconds.
    # Add up the values for each model
    SUM=$(echo "$COMMAND_OUTPUT" | awk '
        /eval duration:/ {
            value = $3
```
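The awk program is cut off in the diff above, but its comment states the job: `eval duration:` values may be reported in seconds or milliseconds, so they must be normalized before summing. A hedged sketch of that normalization (the sample timing lines and the exact unit-conversion logic are assumptions, not the script's actual code):

```bash
#!/usr/bin/env bash
# Invented sample of ollama's --verbose timing lines
SAMPLE=$'eval duration: 1.5s\neval duration: 500ms'

SUM=$(echo "$SAMPLE" | awk '
    /eval duration:/ {
        value = $3
        if (value ~ /ms$/) {          # milliseconds -> seconds
            sub(/ms$/, "", value)
            value = value / 1000
        } else {                      # already in seconds
            sub(/s$/, "", value)
        }
        total += value
    }
    END { printf "%.2f", total }')

echo "$SUM"   # prints: 2.00
```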