Production use of this feature is available for specific editions only. Contact our sales team for more information.
Use case
This component lets you combine different prompts in a single component. For example, you can use it to:
- Generate multiple variations on an advertisement to target different audience segments.
- Create differently worded email campaigns to compare engagement.
Properties
A human-readable name for the component.
Configure
Select a language model from the drop-down menu. Review the Snowflake documentation for supported models, costs, and quotas. Read Availability for details about which models are available in which regions.
Select the source columns to feed as input to the model.
- Column Name: A column from the input table.
- Descriptive Name (optional): An alternate descriptive name to better contextualize the column. Recommended if your column names are low-context.
- Yes: Outputs both your source input columns and the new completion columns. This includes input columns that were not selected in Inputs.
- No: Only outputs the new completion columns.
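A minimal sketch of the two output shapes, using hypothetical column names:

```python
# Sketch of the two output modes, with made-up column names.
# "Yes" keeps every source column alongside the new completion columns;
# "No" returns only the completion columns.

def build_output_row(input_row, completions, include_input_columns):
    """input_row and completions are dicts of column name -> value."""
    if include_input_columns:
        return {**input_row, **completions}  # all input columns + completions
    return dict(completions)                 # completions only

row = {"city": "Oslo", "country": "Norway"}
generated = {"slogan": "Visit the fjords!"}

print(build_output_row(row, generated, True))
# {'city': 'Oslo', 'country': 'Norway', 'slogan': 'Visit the fjords!'}
print(build_output_row(row, generated, False))
# {'slogan': 'Visit the fjords!'}
```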
A plain-text prompt provided by the user. This prompt lets the user provide basic instructions and background information on the data.

To use variables in this field, type the name of the variable prefixed by the dollar symbol and surrounded by { } brackets, as follows: ${variable}. Once you type ${, a drop-down list of autocompleted variable suggestions will appear. This list updates as you type; for example, if you type ${date, functions and variables containing date will be listed.

Define the parameters for the output data, including what each output column should be named and what prompt should be used to query the language model about the input data.
- Column Name: The name of a column that will hold responses to the corresponding prompt.
- Prompt: A prompt to the language model about the input data.
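Python's `string.Template` happens to use the same ${variable} syntax as this field, so the substitution described above can be sketched as follows (the variable names here are made up for illustration):

```python
from string import Template

# Hypothetical prompt using the ${variable} syntax described above.
prompt = Template("Write a tagline for ${product} aimed at ${audience}.")

rendered = prompt.substitute(product="EcoBottle", audience="hikers")
print(rendered)  # Write a tagline for EcoBottle aimed at hikers.
```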
For example, you might set the Column Name to carbon_footprint and then set the corresponding prompt as “What is the estimated carbon footprint in tons per passenger?”.

Select the behavior when the language model fails to generate a response.
- Continue on Error: If the language model fails to generate a response, the component will continue processing and return a null value in the output column. This is the default behavior.
- Throw Error: If the language model fails to generate a response, the component will throw an error and stop processing.
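The two behaviors can be sketched as follows; call_model is a hypothetical stand-in for the real completion call, rigged here to fail on empty prompts:

```python
# Sketch of the two failure modes described above.
def call_model(prompt):
    # Stand-in for the real language-model call; fails on empty input.
    if not prompt:
        raise RuntimeError("model failed to generate a response")
    return f"response to: {prompt}"

def complete(prompt, on_error="continue"):
    try:
        return call_model(prompt)
    except RuntimeError:
        if on_error == "continue":
            return None  # Continue on Error: null in the output column
        raise            # Throw Error: stop processing

print(complete("Summarize this row."))  # response to: Summarize this row.
print(complete(""))                     # None (default: continue and emit null)
```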
A value between 0 and 1 (inclusive) to control the randomness of the output of the language model. Higher temperatures (for example, 0.8) will result in more diverse and random outputs. Lower temperatures (for example, 0.2) make the output more focused and deterministic.
A value between 0 and 1 (inclusive) to control the randomness of the output of the language model, typically used as an alternative to temperature. Top P restricts the set of possible tokens that the model will output, whereas Temperature influences which tokens are chosen at each step. Many LLM providers recommend altering Top P or Temperature, but not both.
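The difference between the two knobs can be illustrated with a toy softmax sketch; this is not Cortex's actual sampler, just an illustration of the mechanics:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_candidates(probs, top_p):
    # Keep the smallest set of tokens whose cumulative probability >= top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, 0.2)  # focused, near-deterministic
flat = softmax_with_temperature(logits, 0.8)   # more diverse
```

Temperature reshapes the whole distribution, while Top P cuts the candidate set down before a token is chosen, which is why providers suggest tuning one or the other.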
Set the maximum number of output tokens in the response. A small number of max tokens can result in truncated responses.
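A rough illustration of why a low limit truncates output, treating whitespace-separated words as "tokens" (real models use subword tokenizers, so actual counts will differ):

```python
# Truncation sketch: cut the response off after max_tokens "tokens".
def truncate_to_max_tokens(text, max_tokens):
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

response = "The estimated carbon footprint is roughly two tons per passenger"
print(truncate_to_max_tokens(response, 5))  # The estimated carbon footprint is
```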
When set to Yes, responses that could be considered unsafe or harmful will be filtered out of the model’s output via Cortex Guard.

