Cloud platform information
If you’re a Snowflake or Amazon Redshift user, familiarize yourself with the corresponding information about this component.
Snowflake
When using Snowflake, external tables are useful for querying large datasets without physically loading them into Snowflake storage. The data is typically returned in a single column named VALUE, containing variant-type values. Each row stores the source data as a set of properties within this variant.
- Due to the structure of the returned data, transformation (e.g. using the Flatten Variant component) is often required to convert it into a usable format.
- External tables must be refreshed after any changes to the underlying data. Use the Refresh External Table component to update the metadata before querying.
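As an illustrative sketch of what this looks like in Snowflake SQL (the table and column names here are hypothetical, not generated by the component), a refreshed external table can be queried by casting properties out of the VALUE variant:

```sql
-- Hypothetical names for illustration only.
-- Refresh the external table's metadata after the underlying files change:
ALTER EXTERNAL TABLE my_ext_table REFRESH;

-- Each row exposes the source record as a single VARIANT column named VALUE:
SELECT value:"flight_id"::NUMBER AS flight_id,
       value:"origin"::STRING    AS origin
FROM my_ext_table;
```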
Amazon Redshift
When using Amazon Redshift, the Create External Table component references data stored in an Amazon S3 bucket. Like Snowflake, the external table doesn’t store the data internally. If the component requires access to a cloud provider (AWS, Azure, or GCP), it will use credentials as follows:
- If using Matillion Full SaaS: The component will use the cloud credentials associated with your environment to access resources.
- If using Hybrid SaaS: By default, the component inherits the agent’s execution role (service account role). However, if cloud credentials are associated with your environment, they will override that role.
- External tables are part of Amazon Redshift Spectrum, and may not be available in all regions. For a list of supported regions, read Amazon Redshift endpoints and quotas.
- For more information about working with external tables, read Creating external tables for Redshift Spectrum.
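For illustration, the DDL this component generates for Redshift Spectrum is roughly equivalent to the following sketch (the external schema, table, column, and bucket names are all hypothetical):

```sql
-- Assumes an external schema (spectrum_schema) already mapped to a data catalog.
CREATE EXTERNAL TABLE spectrum_schema.flights (
    flight_id INTEGER,
    origin    VARCHAR(3)
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 's3://my-bucket/flights/';
```

The table's data remains in S3; Redshift stores only the table definition in the external catalog.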
Properties
- Snowflake
- Amazon Redshift
A human-readable name for the component.
Select one of:
- Create: (default). This option creates a new external table. The component will fail if an external table with the same name already exists.
- Create if not exists: This option will only create a new external table when an external table of the same name does not already exist. If the schema of the existing external table does not match the schema defined in this component, no attempt is made to fix or correct the schema, which could lead to errors later on in the pipeline if:
- Users did not expect an external table of that name to already exist.
- Users did not expect the existing table’s schema to differ from the one defined in this component.
- Replace: This option drops any existing external table of the same name and then creates a new one. Selecting this option guarantees that after the component succeeds, the external table matches the schema defined in this component.
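In Snowflake SQL terms, the three options correspond roughly to the following statements (the table name is hypothetical, and the column and stage clauses are elided):

```sql
CREATE EXTERNAL TABLE my_ext_table ...;               -- Create: fails if the table exists
CREATE EXTERNAL TABLE IF NOT EXISTS my_ext_table ...; -- Create if not exists
CREATE OR REPLACE EXTERNAL TABLE my_ext_table ...;    -- Replace: drops and recreates
```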
The name of the external table to be created or used.
A partition is defined by an expression applied to a set of data, producing a partition that contains only the rows satisfying the expression. For example, partitioning a dataset by the value in a ‘year’ column. For more information about setting up partitions with Snowflake’s external tables, read Managing Regular Data Loads.
- Name: The name of the new partition column.
- Type: The data type of the partition column.
- Size: The data size of the new column values.
- Precision: The precision of the new column values.
- Partition Expression: The expression used to partition the data.
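As a sketch of how these settings combine (all names are hypothetical, and the file path layout is assumed), a partition column is typically an expression over metadata$filename in the generated Snowflake DDL:

```sql
CREATE EXTERNAL TABLE my_ext_table (
    -- Partition column computed from the stage file path,
    -- assuming paths like 2023/flights.csv under the stage location:
    year NUMBER AS TO_NUMBER(SPLIT_PART(metadata$filename, '/', 1))
)
PARTITION BY (year)
LOCATION = @my_stage
FILE_FORMAT = (TYPE = CSV);
```

Queries that filter on the partition column (for example, WHERE year = 2023) can then prune files that do not match, reducing scan cost.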
Select a Snowflake database for the external stage.
Select a Snowflake schema for the external stage.
Select an external stage for the data. Staging areas can be created through Snowflake using the CREATE STAGE command.
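For reference, a stage can be created directly in Snowflake SQL like this (the stage name, bucket URL, and credential placeholders are hypothetical):

```sql
CREATE STAGE my_s3_stage
  URL = 's3://my-bucket/data/'
  CREDENTIALS = (AWS_KEY_ID = '...' AWS_SECRET_KEY = '...');
```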
The directory path to follow to the target data. File names cannot be specified. The path is relative to the storage location given in the external stage setup.
Specify, using regular expression (RegEx) pattern syntax, files to be matched on the external stage. For example:
'.*flight.*[.]csv'
Select a pre-made file format that will automatically set many of the component properties accordingly. These formats can be created through the Create File Format component. The list of file formats comes from the database and schema in which the specified tables exist.
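Combining the stage, location, pattern, and file format properties, the generated Snowflake DDL might include clauses such as the following (the table, stage, and format names are hypothetical):

```sql
CREATE EXTERNAL TABLE my_ext_table
  LOCATION = @my_s3_stage/flights/
  PATTERN = '.*flight.*[.]csv'
  FILE_FORMAT = (FORMAT_NAME = 'my_csv_format');
```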
The type of expected data to load. Some data may require additional formatting, explained in Preparing to Load Data. Available options are: AVRO, CSV, JSON, ORC, PARQUET, and XML. Component properties will change to reflect the choice made here and give options based on the specific file type.
(AVRO, CSV, JSON, PARQUET, XML) Select whether the input file is compressed in GZIP, BROTLI, BZ2, DEFLATE, RAW_DEFLATE, or ZSTD format, or not compressed at all. The default is AUTO. When AUTO is selected, the compression algorithm is detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically.
(CSV) The delimiter used to separate records (rows) in the file. Defaults to newline. \\n can also signify a newline, and \\r can signify a carriage return.
(CSV) The delimiter that separates columns. The default is a comma. A [TAB] character can be specified as \t.
(CSV) The number of rows at the top of the file to ignore. The default setting is 0 and the property is left blank.
(CSV) When true, skip any blank lines encountered in the CSV files. When false, blank lines produce an end-of-record error. Default is false.
(CSV) When this option is specified, the backslash character \ in input data is treated as an escape character.
(CSV) Single character string used as the escape character for unenclosed field values. Accepts common escape sequences, octal values, or hex values. Also accepts a value of NONE (default). If a character is specified in the Escape field, it will override this field.
(CSV) A character that is used to enclose strings. Can be a single quote ', a double quote ", or NONE (default). Escape the character with an instance of the same character.
(CSV) Generate an error if the number of delimited columns in the input does not match that of the table. If false, extra columns are not loaded into the table and missing columns are recorded as NULL in the table. Default is false.
(CSV) When set to true, empty columns in the input file will become NULL. Default is true.
(CSV) When true, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character. When false, the load operation produces an error when invalid UTF-8 character encoding is detected. Default is false.
(CSV) The type of encoding that has been applied to the data. The default setting is UTF-8. Consult the Snowflake documentation for more information.
(JSON) Enables parsing of octal numbers. Default is false.
(JSON) When a JSON document contains a duplicate property, choose whether to allow the duplicate. Default is false.
(JSON) When true, enables removal of outer square brackets from JSONs. Default is false.
(JSON) When true, enables removal of NULL values. Default is false.
(JSON, XML) When true, replaces invalid UTF-8 sequences with the Unicode replacement character (U+FFFD), instead of throwing an error. Default is false.
(XML) When set to true, leading and trailing spaces in XML element values are preserved during parsing. Default is false.
(XML) When set to true, the parser strips the outermost XML element when processing XML files, exposing the second-level elements as separate documents. Default is false.
(XML) When set to true, the parser will ignore Snowflake semi-structured data tags when processing XML files. Default is false.
(XML) When set to true, numeric and Boolean values in XML text elements are automatically converted to their native types during parsing. Default is false.
(AVRO, CSV, JSON, ORC, PARQUET) This option replaces the specified string with NULL in the output table. Use this if your data has a particular representation of missing data.
(AVRO, CSV, JSON, ORC, PARQUET) When true, removes trailing and leading whitespace from the input data. Default is false.

