## Compatibility
| Cloud data platform | Full SaaS | Hybrid SaaS |
|---|---|---|
| Snowflake | ✅ | ✅ |
| Databricks | ❌ | ✅ |
| Amazon Redshift | ❌ | ✅ |
## Installation
- Open a branch on your project.
- In the Files panel, click Add → Browse Exchange.
- Search for "streaming" and locate the pipeline Load streaming data from cloud storage.
- Click the pipeline to import it into your project.
## Usage
- Open the “Example” pipeline.
- Copy the “Sync All Tables - Template” Run Orchestration component and paste it into your own orchestration pipeline.
- Click into the “Sync All Tables - Template” component and edit the Set Scalar Variables and Set Grid Variables parameters accordingly.
- You can now run or schedule your orchestration pipeline to keep Snowflake up to date with your streaming files.
## Set Scalar Variables
The following variables can be set in the Set Scalar Variables parameter.
The URL of the cloud storage location that the streaming pipeline is writing to. This should take one of the following formats:

- `s3://<bucket>/<prefix>`
- `azure://<storage_account>.blob.core.windows.net/<container>/<prefix>`
- `gs://<bucket>/<prefix>`
The Snowflake virtual warehouse used to execute the SQL statements. Read Overview of Warehouses to learn more.
The Snowflake database where the external table and target tables will be created. Read Databases, Tables and Views - Overview to learn more.
The schema containing the external stage, and where the external table will be created. If not specified, the target_schema will be used.
The schema where the target tables will be created, unless use_source_schemas has been set to Y. Read Database, Schema, and Share DDL to learn more.
The name of an existing external stage that contains the files output by the streaming pipeline. The URL of the external stage must contain the cloud_storage_url.

The external table that will be created to read the files output by the streaming pipeline.
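As an illustrative sketch only (all object names here are hypothetical, and the pipeline creates and manages its own external table), a stage over the cloud_storage_url and an Avro external table in Snowflake might look like this:

```sql
-- Hypothetical names for illustration. The stage must already exist
-- before the pipeline runs; its URL must contain the cloud_storage_url.
CREATE STAGE my_schema.streaming_stage
  URL = 's3://my-bucket/streaming/';

-- Sketch of an external table reading the Avro files written by the
-- streaming pipeline (columns are exposed via the VALUE variant).
CREATE EXTERNAL TABLE my_schema.streaming_ext
  LOCATION = @my_schema.streaming_stage
  AUTO_REFRESH = FALSE
  FILE_FORMAT = (TYPE = AVRO);
```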
Create the target tables in a schema with the same name as the schema containing the source table. If the schema doesn’t already exist, the pipeline will try to create it. Options are Y or N.

A prefix to add to the source table name to generate the target table name. If no prefix is specified, the target table will have the same name as the source table.
Includes the source database and schema in the target table name. If use_source_schemas = N, it is recommended to set this to Y, unless you are confident that your source table names will always be unique. Options are Y or N.

The type of transformation used when applying the change events to the target table. Available options:
- Copy Table: The target table will be maintained as a copy of the source table.
- Copy Table with Soft Deletes: Same as Copy Table, but records deleted in the source table will be retained in the target table.
- Change Log: All change events will be extracted and appended to the target table.
A primary key is required on the source table for Copy Table and Copy Table with Soft Deletes transformations. The primary key is used by the pre-built pipeline to merge updates into the target table. If the source table doesn’t have a primary key, the transformation type will be updated to Change Log for that table.

Whether to add all metadata columns to the target table, or just the minimum required for the selected transformation_type. Options are Y or N.

The name of a user-defined function (UDF) that will be created to convert VariableScaleDecimals back to a decimal representation. If no function name is specified, any columns of type VariableScaleDecimal in the Avro files will be created as Variants in Snowflake.
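Conceptually, a Copy Table transformation merges each batch of change events into the target table on the primary key. The actual SQL is generated by the pipeline; this sketch (with hypothetical table and column names) only illustrates the shape of that merge:

```sql
-- Conceptual sketch, not the pipeline's generated SQL.
-- staged_changes holds the latest change event per primary key value.
MERGE INTO target_schema.orders AS t
USING staged_changes AS c
  ON t.order_id = c.order_id            -- order_id: source table's primary key
WHEN MATCHED AND c.op = 'delete' THEN DELETE
WHEN MATCHED THEN UPDATE SET t.status = c.status, t.amount = c.amount
WHEN NOT MATCHED AND c.op <> 'delete' THEN
  INSERT (order_id, status, amount) VALUES (c.order_id, c.status, c.amount);
```

Under Copy Table with Soft Deletes, the DELETE branch would instead flag the row via a metadata column, and under Change Log every event is simply appended.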
If the pipeline detects schema changes in the source table that are not compatible with the current target table, the target table can be altered to accept the new data. Options are Update Target or Fail Job.

## Set Grid Variables
The following variables can be set in the Set Grid Variables parameter.
Provide a list of primary key columns for the source tables. By default, the pipeline will read the primary key columns from the change data capture Avro files. However, if the source table does not have a primary key defined in its Data Definition Language (DDL), a list of unique columns can be specified here to enable Copy Table transformations.
The values for source_database, source_schema, source_table, and source_column are case-sensitive and must match the source database.
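For example, assuming a source table MY_DB.SALES.ORDERS whose DDL defines no primary key but whose rows are uniquely identified by two columns (all values here are hypothetical), the grid could be filled in as:

```
source_database | source_schema | source_table | source_column
----------------+---------------+--------------+---------------
MY_DB           | SALES         | ORDERS       | ORDER_ID
MY_DB           | SALES         | ORDERS       | ORDER_LINE
```

Listing both columns lets the pipeline treat (ORDER_ID, ORDER_LINE) as the merge key, enabling the Copy Table transformation for that table.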

