Production use of this feature is available for specific editions only. Contact our sales team for more information.
Use case
This component enables you to integrate generative AI output into your transformation pipeline to enrich your data. For example, you can use it to:
- Analyze customer feedback from reviews or survey responses (the example below demonstrates this use case).
- Automatically classify incoming documents in the legal or financial sector.
- Convert medical notes from a range of formats into standardized ICD codes.
Properties
Name
A human-readable name for the component.
Model
Select a language model from the drop-down menu. Review the Snowflake documentation for supported models, costs, and quotas. Read Availability for details about which models are available in which regions.
System Prompt
An initial plain-English prompt for your chosen language model, providing background information and instructions for a style of response. An example offered by Snowflake is “Respond in the style of a pirate.” The language model doesn’t generate a response to your system prompt, but to your user prompt; the system prompt informs the model how to answer the user prompt. Only one system prompt may be provided.
To use variables in this field, type the name of the variable prefixed by the dollar symbol and surrounded by { } brackets, as follows: ${variable}. Once you type ${, a drop-down list of autocompleted suggested variables will appear. This list updates as you type; for example, if you type ${date, functions and variables containing “date” will be listed.
User Prompt
A plain-text prompt provided by the user. This prompt should be contextually related to the system prompt (if used). Variables can be used in this field in the same way as in the system prompt.
Inputs
Select the source columns to feed as input to the model.
- Column Name: A column from the input table.
- Descriptive Name (optional): An alternate descriptive name to better contextualize the column. Recommended if your column names are low-context.
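To illustrate how the ${variable} references described for the prompt fields resolve, here is a minimal Python sketch. The function name and the choice to leave unknown variables untouched are assumptions for illustration, not the component’s actual implementation:

```python
import re

def expand_variables(prompt: str, variables: dict) -> str:
    """Replace ${name} references in a prompt with values from `variables`.
    Unknown variables are left as-is (an assumption for this sketch)."""
    def substitute(match):
        name = match.group(1)
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\$\{(\w+)\}", substitute, prompt)

prompt = "Summarize feedback received on ${date} for store ${store_id}."
expanded = expand_variables(prompt, {"date": "2024-01-15", "store_id": "42"})
print(expanded)  # Summarize feedback received on 2024-01-15 for store 42.
```
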
Temperature
A value between 0 and 1 (inclusive) that controls the randomness of the language model’s output. Higher temperatures (for example, 0.8) produce more diverse and random outputs; lower temperatures (for example, 0.2) make the output more focused and deterministic.
Top P
A value between 0 and 1 (inclusive) that controls the randomness of the language model’s output, typically used as an alternative to temperature. Top P restricts the set of possible tokens the model will output, whereas temperature influences which tokens are chosen at each step. Many LLM providers recommend altering Top P or temperature, but not both.
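The difference between the two controls can be sketched with a toy token distribution. This is a generic illustration of temperature scaling and top-p (nucleus) filtering as commonly defined, not Snowflake’s implementation:

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax. Lower temperature
    sharpens the distribution; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches top_p; zero out the rest and renormalize."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]

logits = [2.0, 1.0, 0.5, 0.1]           # toy scores for four candidate tokens
sharp = apply_temperature(logits, 0.2)   # low temperature: near-deterministic
flat = apply_temperature(logits, 0.8)    # higher temperature: more diverse
filtered = top_p_filter(apply_temperature(logits, 1.0), 0.7)  # nucleus cutoff
```

Temperature reshapes the whole distribution, while Top P simply removes the low-probability tail before a token is drawn, which is why adjusting both at once makes the sampling behavior hard to reason about.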
Max Tokens
Set the maximum number of output tokens in the response. A small value can result in truncated responses.
Guardrails
When set to Yes, responses that could be considered unsafe or harmful are filtered out of the model’s output via Cortex Guard.
Include Input Columns
- Yes: Outputs both your source input columns and the new completion columns, including input columns not selected in Inputs.
- No: Outputs only the new completion columns.
Explanation of output
This component returns a string representation of a JSON object containing the following keys:
| Key | Description |
|---|---|
| choices | An array of the model’s responses. (Currently, only one response is provided.) Each response is an object containing a “messages” key whose value is the model’s response to the latest prompt. |
| created | UNIX timestamp (seconds since midnight UTC, January 1, 1970) of when the response was generated. |
| model | The language model that created the response. |
| usage | An object recording the number of tokens consumed and generated by this completion. |
| completion_tokens | The number of tokens in the generated response. |
| prompt_tokens | The number of tokens in the prompt. |
| total_tokens | Sum of completion_tokens and prompt_tokens. |
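Because the result is a JSON string rather than a native object, downstream steps typically parse it first. Here is a sketch in Python using a made-up completion_result payload shaped like the keys in the table above (the values are illustrative, not real output):

```python
import json

# Illustrative completion_result string; values are invented for this example.
completion_result = json.dumps({
    "choices": [{"messages": " The customer was partially satisfied with"}],
    "created": 1700000000,
    "model": "llama2-70b-chat",
    "usage": {"completion_tokens": 6, "prompt_tokens": 52, "total_tokens": 58},
})

parsed = json.loads(completion_result)
answer = parsed["choices"][0]["messages"].strip()  # model's reply text
tokens_used = parsed["usage"]["total_tokens"]      # billing-relevant count
print(answer)
print(tokens_used)
```
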
Example
A coffee shop has been collecting customer reviews left on its website and wants to distill some key information from these reviews. The primary question: was the customer satisfied with the service?
Input data:
| COFFEE TYPE | REVIEW |
|---|---|
| Espresso | The espresso was bold and aromatic, but a tad too bitter for my taste. The barista was friendly, though, and the atmosphere was cozy. |
| Cappuccino | My cappuccino was perfectly balanced, with a creamy foam that melted in my mouth. However, the service was a bit slow, and the coffee wasn’t piping hot. |
| Latte | The latte was velvety smooth, but it lacked the flavor I was hoping for. The barista was friendly, and the ambiance was pleasant. |
| Americano | The Americano was strong and robust, just how I like it. However, the service was a bit impersonal, and the coffee could have been hotter. |
| Mocha | Indulging in the mocha was like sipping on liquid chocolate bliss. The service, however, was lacking, with long wait times and a disorganized atmosphere. |
The component is configured as follows:
- Model: llama2-70b-chat
- System Prompt: [blank]
- User Prompt: Was the customer satisfied with the service?
- Inputs:
  - Column Name: REVIEW
  - Descriptive Name: [blank]
- Temperature: [blank]
- Top P: [blank]
- Max Tokens: 6
- Include Input Columns: YES
Output data:
| COFFEE TYPE | REVIEW | completion_result |
|---|---|---|
| Espresso | The espresso was bold and aromatic, but a tad too bitter for my taste. The barista was friendly, though, and the atmosphere was cozy. | {"messages": " The customer was partially satisfied with"} |
| Cappuccino | My cappuccino was perfectly balanced, with a creamy foam that melted in my mouth. However, the service was a bit slow, and the coffee wasn’t piping hot. | {"messages": " The customer was partially satisfied with"} |
| Latte | The latte was velvety smooth, but it lacked the flavor I was hoping for. The barista was friendly, and the ambiance was pleasant. | {"messages": " The customer was partially satisfied with"} |
| Americano | The Americano was strong and robust, just how I like it. However, the service was a bit impersonal, and the coffee could have been hotter. | {"messages": " No, the customer was not"} |
| Mocha | Indulging in the mocha was like sipping on liquid chocolate bliss. The service, however, was lacking, with long wait times and a disorganized atmosphere. | {"messages": " No, the customer was not"} |
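Under the hood, these settings map onto Snowflake’s SNOWFLAKE.CORTEX.COMPLETE function. The sketch below builds an equivalent per-row SQL statement; the exact call shape, the option names in the options object, and the table name coffee_reviews are assumptions based on Snowflake’s documented function, not the SQL this component actually generates:

```python
import json

def build_complete_sql(model: str, user_prompt: str, input_column: str,
                       table: str, options: dict) -> str:
    """Sketch of a per-row CORTEX.COMPLETE call that appends each row's
    review text to the user prompt. Option names (e.g. max_tokens) follow
    Snowflake's documented options object; treat them as assumptions here."""
    opts = json.dumps(options)
    return (
        "SELECT *, SNOWFLAKE.CORTEX.COMPLETE(\n"
        f"  '{model}',\n"
        f"  [{{'role': 'user', 'content': '{user_prompt}' || ' ' || {input_column}}}],\n"
        f"  PARSE_JSON('{opts}')\n"
        f") AS completion_result\nFROM {table}"
    )

sql = build_complete_sql(
    "llama2-70b-chat",
    "Was the customer satisfied with the service?",
    "REVIEW",
    "coffee_reviews",   # hypothetical table holding the input data above
    {"max_tokens": 6},
)
print(sql)
```

Note how Max Tokens = 6 explains the truncated responses in the output table: the model ran out of output tokens mid-sentence.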

