Merge pull request #128 from jmorganca/mchiang0610-patch-1
Update modelfile.md
This commit is contained in commit d53988f619.
8 changed files with 130 additions and 56 deletions
@@ -1,79 +1,95 @@
# Ollama Model File

A model file is the blueprint to create and share models with Ollama. Ollama builds models automatically by reading the instructions in a Modelfile, a text document that represents the complete configuration of the model, much as Docker builds images from a Dockerfile.

## Format

The format of the Modelfile:
```modelfile
# comment
INSTRUCTION arguments
```

Nothing in the file is case-sensitive; however, the convention is for instructions to be uppercase to make them easier to distinguish from arguments.
| Instruction | Description |
|------------------------- |--------------------------------------------------------- |
| FROM<br>(required) | Defines the base model to be used when creating a model |
| PARAMETER<br>(optional) | Sets the parameters for how the model will be run |
| PROMPT<br>(optional) | Sets the prompt to use when the model is run |
| LICENSE<br>(optional) | Specifies the license of the model |

A Modelfile can include instructions in any order, but the convention is to start the Modelfile with the FROM instruction.
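The instruction-plus-arguments format above is simple enough to sketch a parser for. The following is an illustrative sketch only, under assumed semantics (case-insensitive keywords, `#` comments, single-line arguments); it is not Ollama's actual parser, and it does not handle multiline `PROMPT """ … """` blocks:

```python
def parse_modelfile(text):
    """Parse Modelfile text into (INSTRUCTION, arguments) pairs.

    Illustrative sketch only: real Modelfile parsing (e.g. multiline
    PROMPT blocks delimited by triple quotes) is more involved.
    """
    known = {"FROM", "PARAMETER", "PROMPT", "LICENSE"}
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        keyword, _, args = line.partition(" ")
        keyword = keyword.upper()  # instructions are case-insensitive
        if keyword in known:
            entries.append((keyword, args.strip()))
        # unrecognized instructions are treated as comments
    return entries

pairs = parse_modelfile("""\
# comment
FROM llama2
PARAMETER temperature 1
""")
```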
## Examples

Although the Format example above shows a comment starting with a hash character, any instruction that is not recognized is treated as a comment.

An example of a Modelfile creating a Mario blueprint:
```modelfile
FROM llama2
PARAMETER temperature 1
PROMPT """
System: You are Mario from super mario bros, acting as an assistant.
User: {{ .Prompt }}
Assistant:
"""
```

To use this:

1. Save it as a file (e.g. `Modelfile`)
2. `ollama create NAME -f <location of the file, e.g. ./Modelfile>`
3. `ollama run NAME`
4. Start using the model!

## LICENSE

The LICENSE instruction specifies the license of the model. Some models need to be distributed with a license agreement; for example, the distribution clause of the Llama 2 license requires including the license with the model.

```modelfile
LICENSE """
<license text>
"""
```
## FROM (Required)

The FROM instruction defines the base model to be used when creating a model.

```modelfile
FROM <model name>:<tag>
```

### Build from llama2

```modelfile
FROM llama2:latest
```

A list of available base models:
<https://github.com/jmorganca/ollama#model-library>

### Build from a bin file

```modelfile
FROM ./ollama-model.bin
```
## PARAMETER (Optional)

The PARAMETER instruction defines a parameter that can be set when the model is run.

```modelfile
PARAMETER <parameter> <parametervalue>
```

### Valid Parameters and Values
| Parameter | Description | Value Type | Example Usage |
|---------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|-------------------|
| NumCtx | Sets the size of the prompt context window. The model uses up to this many tokens of context when generating the next token. (Default: 2048) | int | NumCtx 4096 |
| temperature | The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8) | float | temperature 0.7 |
| TopK | Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40) | int | TopK 40 |
| TopP | Works together with top-k. A higher value (e.g. 0.95) will lead to more diverse text, while a lower value (e.g. 0.5) will generate more focused and conservative text. (Default: 0.9) | float | TopP 0.9 |
| NumGPU | The number of GPUs to use. On macOS it defaults to 1 to enable Metal support, 0 to disable. | int | NumGPU 1 |
| RepeatLastN | Sets how far back the model looks to prevent repetition. (Default: 64, 0 = disabled, -1 = NumCtx) | int | RepeatLastN 64 |
| RepeatPenalty | Sets how strongly to penalize repetitions. A higher value (e.g. 1.5) will penalize repetitions more strongly, while a lower value (e.g. 0.9) will be more lenient. (Default: 1.1) | float | RepeatPenalty 1.1 |
| TFSZ | Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g. 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (Default: 1) | float | TFSZ 1 |
| Mirostat | Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0) | int | Mirostat 0 |
| MirostatTau | Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0) | float | MirostatTau 5.0 |
| MirostatEta | Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1) | float | MirostatEta 0.1 |
| NumThread | Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores). | int | NumThread 8 |
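temperature, TopK, and TopP all shape the same token-sampling step: the model's scores are sharpened or flattened by the temperature, then the candidate set is cut down by top-k and top-p before a token is drawn. The sketch below is illustrative only; it is not Ollama's implementation (sampling happens inside llama.cpp), and the token scores are made up:

```python
import math
import random

def sample(logits, temperature=0.8, top_k=40, top_p=0.9, rng=random):
    """Illustrative temperature + top-k + top-p sampling over a
    {token: logit} dict. Not Ollama's actual implementation."""
    # top-k: keep only the highest-scoring candidates
    items = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # temperature-scaled softmax: lower temperature sharpens the distribution
    scaled = [(tok, math.exp(logit / temperature)) for tok, logit in items]
    total = sum(w for _, w in scaled)
    probs = [(tok, w / total) for tok, w in scaled]
    # top-p (nucleus): keep the smallest prefix whose mass reaches top_p
    kept, mass = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break
    # renormalize over the kept candidates and draw one token
    r = rng.random() * mass
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]

token = sample({"mamma": 3.0, "mia": 1.0, "pipe": 0.1}, temperature=0.8)
```

With a very low temperature the distribution collapses onto the single best-scoring token, which is why lower temperatures give more conservative output.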
## PROMPT

Prompt is a set of instructions to an LLM to cause the model to return the desired response(s). Typically there are 3-4 components to a prompt: system, context, user, and response.

```modelfile
PROMPT """
```

@@ -87,4 +103,9 @@ You are a content marketer who needs to come up with a short but succinct tweet.

```modelfile
### Response:
"""
```
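As a rough illustration of those components, a prompt can be assembled from labeled sections. The section names below are illustrative only, not a fixed Ollama format:

```python
def build_prompt(system, user, context=""):
    """Assemble a prompt from its typical components; illustrative only."""
    parts = []
    if context:
        parts.append(f"### Context:\n{context}")
    parts.append(f"### System:\n{system}")
    parts.append(f"### User:\n{user}")
    parts.append("### Response:")  # the model continues from here
    return "\n\n".join(parts)

prompt = build_prompt(
    system="You are a content marketer who needs to come up with a "
           "short but succinct tweet.",
    user="Announce our new model file format.",
)
```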
## Notes

- The **Modelfile is not case sensitive**. In the examples, we use uppercase instructions to make them easier to distinguish from arguments.
- Instructions can be in any order. In the examples, we start with the FROM instruction to keep it easily readable.
@@ -1,7 +0,0 @@

```modelfile
FROM llama2
PARAMETER temperature 1
PROMPT """
System: You are Mario from super mario bros, acting as an assistant.
User: {{ .Prompt }}
Assistant:
"""
```
11 examples/mario/Modelfile (new file)

@@ -0,0 +1,11 @@
```modelfile
FROM llama2
PARAMETER temperature 1
PROMPT """
{{- if not .Context }}
<<SYS>>
You are Mario from super mario bros, acting as an assistant.
<</SYS>>

{{- end }}
[INST] {{ .Prompt }} [/INST]
"""
```
BIN examples/mario/logo.png (new file, 446 KiB; binary file not shown)

49 examples/mario/readme.md (new file)

@@ -0,0 +1,49 @@
<img src="logo.png" alt="image of Italian plumber" height="200"/>

# Example character: Mario

This example shows how to create a basic character using Llama2 as the base model.

To run this example:

1. Download the Modelfile
2. `ollama pull llama2` to get the base model used in the Modelfile.
3. `ollama create NAME -f ./Modelfile`
4. `ollama run NAME`

Ask it some questions like "Who are you?" or "Is Peach in trouble again?"
## Editing this file

What the model file looks like:

```modelfile
FROM llama2
PARAMETER temperature 1
PROMPT """
{{- if not .Context }}
<<SYS>>
You are Mario from super mario bros, acting as an assistant.
<</SYS>>

{{- end }}
[INST] {{ .Prompt }} [/INST]
"""
```
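The PROMPT block uses Go template syntax: the `{{- if not .Context }}` guard injects the `<<SYS>>` system prompt only when there is no prior conversation context, so later turns send just the `[INST]` block. A rough Python equivalent of that rendering logic, for illustration only (this is not how Ollama renders templates internally):

```python
def render_prompt(prompt, context=None):
    """Rough Python equivalent of the Go template above; illustrative only."""
    parts = []
    if not context:  # mirrors {{- if not .Context }}: first turn only
        parts.append(
            "<<SYS>>\n"
            "You are Mario from super mario bros, acting as an assistant.\n"
            "<</SYS>>\n"
        )
    parts.append(f"[INST] {prompt} [/INST]")
    return "\n".join(parts)

first_turn = render_prompt("Who are you?")
later_turn = render_prompt("And Luigi?", context=[42])
```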
What if you want to change its behaviour?

- Try changing the prompt
- Try changing the parameters ([Docs](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md))
- Try changing the model (e.g. `FROM wizard-vicuna` to use the uncensored Wizard Vicuna model)

Once the changes are made:

1. `ollama create NAME -f ./Modelfile`
2. `ollama run NAME`
3. Iterate until you are happy with the results.

Notes:

- This example is for research purposes only. There is no affiliation with any entity.
- When using an uncensored model, please be aware that it may generate offensive content.