Create External Table is an orchestration component that lets you define a table that references data stored outside of your data warehouse. These external tables don’t store data themselves, but can be queried similarly to regular tables. External tables are read-only.

Cloud platform information

If you’re a Snowflake or Amazon Redshift user, familiarize yourself with the corresponding information about this component.

Snowflake

When using Snowflake, external tables are useful for querying large datasets without physically loading them into Snowflake storage. The data is typically returned in a single column named VALUE, containing variant-type values. Each row stores the source data as a set of properties within this variant.
  • Due to the structure of the returned data, transformation (e.g. using the Flatten Variant component) is often required to convert it into a usable format.
  • External tables must be refreshed after any changes to the underlying data. Use the Refresh External Table component to update the metadata before querying.
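In Snowflake terms, the two points above correspond to casting properties out of the VALUE variant column and issuing a metadata refresh. A hedged sketch (table and property names are hypothetical):

```sql
-- Query the single VALUE variant column of an external table, casting
-- individual properties to usable types (names here are hypothetical).
SELECT
    value:c1::STRING AS first_column,
    value:c2::NUMBER AS second_column
FROM my_external_table;

-- Refresh the external table's metadata after the underlying files change.
ALTER EXTERNAL TABLE my_external_table REFRESH;
```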

Amazon Redshift

When using Amazon Redshift, the Create External Table component references data stored in an Amazon S3 bucket. Like Snowflake, the external table doesn’t store the data internally. If the component requires access to a cloud provider (AWS, Azure, or GCP), it will use credentials as follows:
  • If using Matillion Full SaaS: The component will use the cloud credentials associated with your environment to access resources.
  • If using Hybrid SaaS: By default, the component will inherit the agent's execution role (service account role). However, if there are cloud credentials associated with your environment, those credentials will override the role.
Referencing external data in this way provides an efficient method for querying large datasets without consuming storage on the Amazon Redshift cluster.
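On Redshift, this pattern is implemented with Redshift Spectrum external tables, which live in an external schema. A minimal sketch, assuming an external schema already exists (schema, table, column, and bucket names are placeholders):

```sql
-- An external table in an external schema, pointing at S3 data
-- (all names and the bucket path are placeholders).
CREATE EXTERNAL TABLE spectrum_schema.flights (
    flight_id   INT,
    origin      VARCHAR(3),
    departed_at TIMESTAMP
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 's3://my-bucket/flights/';

-- Queried like a regular table, but the data stays in S3.
SELECT origin, COUNT(*) FROM spectrum_schema.flights GROUP BY origin;
```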

Properties

Name
string
required
A human-readable name for the component.
Create/Replace
drop-down
required
Select one of:
  • Create: (default). This option creates a new external table. The component will fail if an external table with the same name already exists.
  • Create if not exists: This option will only create a new external table when an external table of the same name does not already exist. If the schema of the existing external table does not match the schema defined in this component, no attempt is made to fix or correct the schema, which could lead to errors later on in the pipeline if:
    • Users did not expect an external table to already exist.
    • Users did not expect to have a different schema to the one defined in this component.
  • Replace: This option drops any existing external table of the same name and then creates a new one. Selecting this option guarantees that after the component succeeds, the external table matches the schema defined in this component.
import CopyGrantsSnowflake from '/snippets/content/copy-grants-snowflake.mdx';
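For Snowflake, the three options correspond roughly to the following statements (stage and table names are hypothetical):

```sql
-- Create: fails if an external table of the same name already exists.
CREATE EXTERNAL TABLE my_ext_table
  WITH LOCATION = @my_stage/data/
  FILE_FORMAT = (TYPE = CSV);

-- Create if not exists: silently keeps any existing table and its schema.
CREATE EXTERNAL TABLE IF NOT EXISTS my_ext_table
  WITH LOCATION = @my_stage/data/
  FILE_FORMAT = (TYPE = CSV);

-- Replace: drops and recreates, guaranteeing the defined schema.
CREATE OR REPLACE EXTERNAL TABLE my_ext_table
  WITH LOCATION = @my_stage/data/
  FILE_FORMAT = (TYPE = CSV);
```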
Database
drop-down
required
import SnowflakeDatabase from '/snippets/content/snowflake-database.mdx';
Schema
drop-down
required
import SnowflakeSchema from '/snippets/content/snowflake-schema.mdx';
New Table Name
string
required
The name of the external table to be created or used.
Partition Columns
column editor
A partition is defined by an expression applied to a set of data, resulting in a partition that only contains rows that satisfy the expression. For example, partitioning a dataset by the value in the 'year' column. For more information about setting up partitions with Snowflake's external tables, read Managing Regular Data Loads.
  • Name: The name of the new partition column.
  • Type: The data type of the partition column.
  • Size: The data size of the new column values.
  • Precision: The precision of the new column values.
  • Partition Expression: The expression used to partition the data.
import TextMode from '/snippets/content/text-mode.mdx';
import UseGridVariable from '/snippets/content/use-grid-variable.mdx';
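As an illustration, a Snowflake partition column is a virtual column derived from an expression, often over METADATA$FILENAME. A hedged sketch, assuming files are laid out under year-named subdirectories such as @my_stage/2023/ (all names and the layout are hypothetical):

```sql
-- A partition column derived from the file path: for a file at
-- 2023/jan.parquet, SPLIT_PART(METADATA$FILENAME, '/', 1) yields '2023'.
CREATE EXTERNAL TABLE sales (
    year VARCHAR AS (SPLIT_PART(METADATA$FILENAME, '/', 1))
)
PARTITION BY (year)
WITH LOCATION = @my_stage/
FILE_FORMAT = (TYPE = PARQUET);
```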
Stage Database
drop-down
required
Select a Snowflake database for the external stage.
Stage Schema
drop-down
required
Select a Snowflake schema for the external stage.
Stage
drop-down
required
Select an external stage for the data. Staging areas can be created through Snowflake using the CREATE STAGE command.
Relative Path
string
The directory path to follow to the target data. File names cannot be specified. The path is relative to the storage location given in the external stage setup.
Pattern
string
Specify, using regular expression (RegEx) pattern syntax, files to be matched on the external stage. For example: '.*flight.*[.]csv'
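The Relative Path and Pattern properties map to the stage sub-path and the PATTERN clause of the generated statement. A hedged sketch (stage, path, and table names are hypothetical):

```sql
CREATE EXTERNAL TABLE flight_data
  WITH LOCATION = @my_stage/2023/   -- external stage plus relative path
  PATTERN = '.*flight.*[.]csv'      -- only stage files matching this RegEx
  FILE_FORMAT = (TYPE = CSV);
```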
Format
drop-down
required
Select a pre-made file format that will automatically set many of the component properties accordingly. These formats can be created through the Create File Format component. The list of file formats will come from the database and schema that the specified tables exist in.
File Type
drop-down
required
The type of expected data to load. Some data may require additional formatting, explained in Preparing to Load Data. Available options are: AVRO, CSV, JSON, ORC, PARQUET, and XML. Component properties will change to reflect the choice made here and give options based on the specific file type.
Compression
drop-down
required
(AVRO, CSV, JSON, PARQUET, XML) Select whether the input file is compressed in GZIP format, BROTLI, BZ2, DEFLATE, RAW_DEFLATE, ZSTD, or not compressed at all. The default is AUTO. When AUTO is selected, the compression algorithm is detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically.
Record Delimiter
string
(CSV) The delimiter that separates records (rows) in the file. Defaults to newline. \n can also signify a newline, and \r can signify a carriage return.
Field Delimiter
string
(CSV) The delimiter that separates columns. The default is a comma. A [TAB] character can be specified as \t.
Skip Header
integer
(CSV) The number of rows at the top of the file to ignore. Defaults to 0 when the property is left blank.
Skip Blank Lines
boolean
required
(CSV) When true, skip any blank lines encountered in the CSV files. When false, blank lines produce an end-of-record error. Default is false.
Escape
string
(CSV) When this option is specified, the backslash character \ in input data is treated as an escape character.
Escape Unenclosed Field
string
(CSV) Single character string used as the escape character for unenclosed field values. Accepts common escape sequences, octal values, or hex values. Also accepts a value of NONE (default). If a character is specified in the Escape field, it will override this field.
Field Optionally Enclosed
string
(CSV) A character that is used to enclose strings. Can be a single quote ' or a double quote " or NONE (default). Escape the character with an instance of the same character.
Error On Column Count Mismatch
boolean
required
(CSV) Generate an error if the number of delimited columns in the input does not match that of the table. If false, extra columns are not loaded into the table and missing columns are recorded as NULL in the table. Default is false.
Empty Field As Null
boolean
required
(CSV) When set to true, empty columns in the input file will become NULL. Default is true.
Replace Invalid Characters
boolean
required
(CSV) When true, Snowflake replaces invalid UTF-8 characters with the Unicode replacement character. When false, the load operation produces an error when invalid UTF-8 character encoding is detected. Default is false.
Encoding Type
drop-down
(CSV) The type of encoding that has been applied to the data. The default setting is UTF-8. Consult the Snowflake documentation for more information.
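Several of the CSV properties above appear as options in the generated FILE_FORMAT clause. A hedged sketch with illustrative (not necessarily default) values; the stage and table names are hypothetical:

```sql
CREATE OR REPLACE EXTERNAL TABLE csv_ext
  WITH LOCATION = @my_stage/csv/
  FILE_FORMAT = (
    TYPE = CSV
    COMPRESSION = AUTO
    RECORD_DELIMITER = '\n'
    FIELD_DELIMITER = ','
    SKIP_HEADER = 1
    SKIP_BLANK_LINES = TRUE
    FIELD_OPTIONALLY_ENCLOSED_BY = '"'
    EMPTY_FIELD_AS_NULL = TRUE
    ENCODING = 'UTF8'
  );
```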
Enable Octal
boolean
required
(JSON) Enables parsing of octal numbers. Default is false.
Allow Duplicates
boolean
required
(JSON) When true, JSON objects containing duplicate property names are permitted; only the last value for each duplicated name is preserved. Default is false.
Strip Outer Array
boolean
required
(JSON) When true, enables removal of outer square brackets from JSONs. Default is false.
Strip Null Values
boolean
required
(JSON) When true, enables removal of NULL values. Default is false.
Ignore UTF-8 Errors
boolean
required
(JSON, XML) When true, replaces invalid UTF-8 sequences with the Unicode replacement character (U+FFFD), instead of throwing an error. Default is false.
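The JSON properties above likewise map to FILE_FORMAT options. A hedged sketch with illustrative values (stage and table names are hypothetical):

```sql
CREATE OR REPLACE EXTERNAL TABLE json_ext
  WITH LOCATION = @my_stage/json/
  FILE_FORMAT = (
    TYPE = JSON
    ENABLE_OCTAL = FALSE
    ALLOW_DUPLICATE = FALSE
    STRIP_OUTER_ARRAY = TRUE   -- load each array element as a separate row
    STRIP_NULL_VALUES = FALSE
    IGNORE_UTF8_ERRORS = FALSE
  );
```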
Preserve Space
boolean
required
(XML) When set to true, leading and trailing spaces in XML element values are preserved during parsing. Default is false.
Strip Outer Element
boolean
required
(XML) When set to true, the parser strips the outermost XML element, exposing the second-level elements as separate documents. Default is false.
Disable Snowflake Data
boolean
required
(XML) When set to true, the parser will ignore Snowflake semi-structured data tags when processing XML files. Default is false.
Disable Auto Convert
boolean
required
(XML) When set to true, automatic conversion of numeric and Boolean values from text to their native representation is disabled during parsing. Default is false.
Null If
string
(AVRO, CSV, JSON, ORC, PARQUET) This option replaces the specified string with NULL in the output table. Use this if your data has a particular representation of missing data.
Trim Space
boolean
required
(AVRO, CSV, JSON, ORC, PARQUET) When true, removes trailing and leading whitespace from the input data. Default is false.