Process the latest files from cloud storage to maintain tables in your cloud data platform. If you have configured a streaming pipeline with Amazon S3 or Azure Blob Storage as the destination, these pre-built pipelines can be used to load the Avro files into Snowflake, Databricks, or Amazon Redshift. Pre-built pipelines are available at Matillion Exchange.

Compatibility

The pre-built pipelines are compatible with the following cloud data platforms, in both Full SaaS and Hybrid SaaS deployments:
  • Snowflake
  • Databricks
  • Amazon Redshift

Installation

  1. Open a branch on your project.
  2. In the Files panel, click Add, then Browse Exchange.
  3. Search for streaming, and locate the pipeline Load streaming data from cloud storage.
  4. Click the pipeline to import it into your project.
You should now have a folder named Imported from Exchange, with a sub-folder named Load Streaming Data from Cloud Storage. This will contain the latest version of the pre-built pipelines.

Usage

  1. Open the “Example” pipeline.
  2. Copy the “Sync All Tables - Template” Run Orchestration component and paste it into your own orchestration pipeline.
  3. Click into the “Sync All Tables - Template” component and edit the Set Scalar Variables and Set Grid Variables parameters accordingly.
  4. You can now run or schedule your orchestration pipeline to keep Snowflake up to date with your streaming files.

Set Scalar Variables

The following variables can be set in the Set Scalar Variables parameter.
cloud_storage_url
string
required
The URL of the cloud storage location that the streaming pipeline is writing to. This should take one of the following formats:
  • s3://<bucket>/<prefix>
  • azure://<storage_account>.blob.core.windows.net/<container>/<prefix>
  • gs://<bucket>/<prefix>
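For illustration, the accepted URL formats can be sanity-checked before running the pipeline. The helper below is hypothetical and not part of the pre-built pipelines; it simply encodes the three documented formats as patterns:

```python
import re

# Hypothetical helper (not part of the pre-built pipelines): checks that a
# cloud_storage_url matches one of the three documented formats.
_URL_PATTERNS = [
    r"^s3://[^/]+/.+$",                                     # s3://<bucket>/<prefix>
    r"^azure://[^.]+\.blob\.core\.windows\.net/[^/]+/.+$",  # azure://<storage_account>.blob.core.windows.net/<container>/<prefix>
    r"^gs://[^/]+/.+$",                                     # gs://<bucket>/<prefix>
]

def is_valid_cloud_storage_url(url: str) -> bool:
    """Return True if the URL matches one of the documented formats."""
    return any(re.match(pattern, url) for pattern in _URL_PATTERNS)
```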
warehouse
string
required
The Snowflake virtual warehouse used to execute the SQL statements. Read Overview of Warehouses to learn more.
target_database
string
required
The Snowflake database where the external table and target tables will be created. Read Databases, Tables and Views - Overview to learn more.
stage_schema
string
The schema containing the external stage, and where the external table will be created. If not specified, the target_schema will be used.
target_schema
string
required
The schema where the target tables will be created, unless use_source_schemas has been set to Y. Read Database, Schema, and Share DDL to learn more.
external_stage
string
required
The name of an existing external stage that contains the files output by the streaming pipeline. The URL of the external stage must contain the cloud_storage_url.
external_table
string
required
The external table that will be created to read the files output by the streaming pipeline.
use_source_schemas
string
Create the target tables in a schema with the same name as the schema containing the source table. If the schema doesn’t already exist, the pipeline will try to create it. Options are Y or N.
target_prefix
string
required
A prefix to add to the source table name to generate the target table name. If no prefix is specified, the target table will have the same name as the source table.
fully_qualify_target_table
string
Includes the source database and schema in the target table name. If use_source_schemas = N, it is recommended to set this to Y, unless you are confident that your source table names will always be unique. Options are Y or N.
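The interaction between target_prefix, use_source_schemas, and fully_qualify_target_table can be sketched in Python. This is an illustration of the documented naming rules only, not the pipeline's actual implementation, and the exact ordering of the prefix and qualifiers is an assumption:

```python
# Illustrative sketch (assumed behavior, not the pipeline's code) of how the
# target schema and table name could be resolved from the documented variables.
def resolve_target(source_database: str, source_schema: str, source_table: str,
                   target_schema: str, target_prefix: str = "",
                   use_source_schemas: str = "N",
                   fully_qualify_target_table: str = "N") -> tuple[str, str]:
    # target_prefix is prepended to the source table name (empty means same name).
    table = target_prefix + source_table
    # fully_qualify_target_table = Y includes the source database and schema
    # in the target table name, guaranteeing uniqueness across sources.
    if fully_qualify_target_table == "Y":
        table = f"{source_database}_{source_schema}_{table}"
    # use_source_schemas = Y places the target table in a schema named after
    # the source schema; otherwise target_schema is used.
    schema = source_schema if use_source_schemas == "Y" else target_schema
    return schema, table
```

For example, with target_prefix set to stg_ a source table sales.orders would land in the target schema as stg_orders.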
transformation_type
string
required
The type of transformation used when applying the change events to the target table. Available options:
  • Copy Table: The target table will be maintained as a copy of the source table.
  • Copy Table with Soft Deletes: Same as Copy Table, but records deleted in the source table will be retained in the target table.
  • Change Log: All change events will be extracted and appended to the target table.
A primary key is required on the source table for Copy Table and Copy Table with Soft Deletes transformations. The primary key is used by the pre-built pipeline to merge updates into the target table. If the source table doesn’t have a primary key, the transformation type will be updated to Change Log for that table.
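The primary-key fallback described above can be sketched as follows (an illustration of the documented rule, not the pipeline's code):

```python
# Illustrative sketch: Copy Table transformations require a primary key on the
# source table; without one, the pipeline falls back to Change Log for that table.
def effective_transformation(transformation_type: str, has_primary_key: bool) -> str:
    needs_pk = transformation_type in ("Copy Table", "Copy Table with Soft Deletes")
    if needs_pk and not has_primary_key:
        return "Change Log"
    return transformation_type
```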
append_metadata
string
required
Whether to add all metadata columns to the target table, or just the minimum required for the selected transformation_type. Options are Y or N.
bytes_to_decimal_function
string
required
The name of a user-defined function (UDF) that will be created to convert VariableScaleDecimals back to a decimal representation. If no function name is specified, any columns of type VariableScaleDecimal in the Avro files will be created as Variants in Snowflake.
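As background, a VariableScaleDecimal in Debezium-style change-event files carries an integer scale plus the big-endian two's-complement bytes of the unscaled value. The UDF itself runs in Snowflake; the Python sketch below only illustrates the equivalent conversion and is not the UDF's code:

```python
from decimal import Decimal

# Illustration only: convert a VariableScaleDecimal (integer scale plus
# big-endian two's-complement unscaled bytes) to a Decimal. The pipeline's
# Snowflake UDF performs an equivalent conversion.
def variable_scale_decimal(scale: int, value: bytes) -> Decimal:
    unscaled = int.from_bytes(value, byteorder="big", signed=True)
    # Shift the decimal point left by `scale` places.
    return Decimal(unscaled).scaleb(-scale)
```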
schema_drift_action
string
required
If the pipeline detects schema changes in the source table that are not compatible with the current target table, the target table can be altered to accept the new data. Options are Update Target or Fail Job.

Set Grid Variables

The following variables can be set in the Set Grid Variables parameter.
primary_key_override
drop-down
Provide a list of primary key columns for the source tables. By default, the pipeline will read the primary key columns from the change data capture Avro files. However, if the source table does not have a primary key defined in its Data Definition Language (DDL), a list of unique columns can be specified here to enable Copy Table transformations.
The values for source_database, source_schema, source_table, and source_column are case-sensitive, and must match the source database.