clean up my previous empty sentences

Michael Chiang 2023-07-19 22:43:33 -07:00 committed by Michael Yang
parent 1c72e46e09
commit 5c5948b4e7


@@ -11,13 +11,13 @@ The format of the Modelfile:
INSTRUCTION arguments
```

| Instruction               | Description                                            |
| ------------------------- | ------------------------------------------------------ |
| `FROM`<br>(required)      | Defines the base model to use                          |
| `PARAMETER`<br>(optional) | Sets the parameters for how Ollama will run the model  |
| `SYSTEM`<br>(optional)    | Specifies the system prompt that will set the context  |
| `TEMPLATE`<br>(optional)  | The full prompt template to be sent to the model       |
| `LICENSE`<br>(optional)   | Specifies the legal license                            |
## Examples
@@ -25,14 +25,23 @@ An example of a model file creating a mario blueprint:
```
FROM llama2
# sets the temperature to 1 [higher is more creative, lower is more coherent]
# sets the context size to 4096
PARAMETER temperature 1
PARAMETER num_ctx 4096

# Check for first system message, so the model output won't repeat itself.
# <<SYS>> and [INST] are special tags used by the Llama2 model.
TEMPLATE """
{{- if .First }}
<<SYS>>
You are Mario from super mario bros, acting as an assistant.
<</SYS>>
{{- end }}
[INST] {{ .Prompt }} [/INST]
"""
```
To use this:
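The build-and-run commands themselves fall outside this hunk, but with the standard Ollama CLI they would look roughly like this (a sketch; the model name `mario` is carried over from the example above):

```
ollama create mario -f ./Modelfile
ollama run mario
```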
@@ -44,7 +53,7 @@ To use this:
## FROM (Required)

The FROM instruction defines the base model to use when creating a model.

```
FROM <model name>:<tag>
@@ -62,7 +71,7 @@ A list of available base models:
### Build from a bin file

```
FROM ./ollama-model.bin
```
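Either form can anchor the rest of a Modelfile; for instance, pinning a specific tag from the model library keeps builds reproducible (a sketch; `llama2:13b` is an assumed tag name, check the library for what is actually available):

```
FROM llama2:13b
```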
## PARAMETER (Optional)
@@ -75,45 +84,28 @@ PARAMETER <parameter> <parametervalue>
### Valid Parameters and Values

| Parameter      | Description                                                                                                                                                                                                                                              | Value Type | Example Usage      |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------------------ |
| num_ctx        | Sets the size of the prompt context window. (Default: 2048)                                                                                                                                                                                              | int        | num_ctx 4096       |
| temperature    | The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8)                                                                                                                                      | float      | temperature 0.7    |
| top_k          | Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40)                                                                         | int        | top_k 40           |
| top_p          | Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)                                                                  | float      | top_p 0.9          |
| num_gpu        | The number of GPUs to use. On macOS it defaults to 1 to enable Metal support, 0 to disable.                                                                                                                                                              | int        | num_gpu 1          |
| repeat_last_n  | Sets how far back the model looks to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)                                                                                                                                                       | int        | repeat_last_n 64   |
| repeat_penalty | Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1)                                                                      | float      | repeat_penalty 1.1 |
| tfs_z          | Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (Default: 1)                                                | float      | tfs_z 1            |
| mirostat       | Enable Mirostat sampling for controlling perplexity. (Default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)                                                                                                                                          | int        | mirostat 0         |
| mirostat_tau   | Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0)                                                                                                          | float      | mirostat_tau 5.0   |
| mirostat_eta   | Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1)                        | float      | mirostat_eta 0.1   |
| num_thread     | Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores). | int        | num_thread 8       |
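Several of these can be combined in one Modelfile; a minimal sketch (the values are illustrative, not tuned recommendations):

```
FROM llama2
# widen the context window from the 2048-token default
PARAMETER num_ctx 4096
# sample slightly more conservatively than the 0.8 default
PARAMETER temperature 0.7
# damp repetition over the last 64 tokens
PARAMETER repeat_penalty 1.1
PARAMETER repeat_last_n 64
```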
## Prompt

When building on top of the base models supplied by Ollama, the prompt template comes predefined. To override the supplied system prompt, simply add `SYSTEM insert system prompt` to change the system prompt.

### Prompt Template

`TEMPLATE` is the full prompt template to be passed into the model. It may optionally include a system prompt, a user prompt, and an assistant prompt. It is used to create a fully custom prompt, and the syntax may be model-specific.
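For example, a custom template plus an overridden system prompt might look like the following (a sketch reusing the `.First`, `.System`, and `.Prompt` variables and the Llama2 `<<SYS>>`/`[INST]` tags from the Mario example; other models use different tags):

```
TEMPLATE """
{{- if .First }}
<<SYS>>
{{ .System }}
<</SYS>>
{{- end }}
[INST] {{ .Prompt }} [/INST]
"""

SYSTEM """
You are a helpful assistant that answers as briefly as possible.
"""
```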
## Notes