Video example
Watch our video about using the Flex connector: YouTube.
Properties
Reference material is provided below for the Connect, Configure, Destination, and Advanced Settings properties.
A human-readable name for the component.
Connect
The data source to load data from in this pipeline. The drop-down menu lists the API endpoints available in the connector. For detailed information about specific endpoints, read the API documentation.
The authentication method to authorize access to your data. Currently supports OAuth 2.0 Client Credentials. Read Authenticating to the API to learn more.
Select your authentication profile. To create a new profile, read OAuth client credentials.
Configure
- Parameter Name: The name of a URI parameter.
- Parameter Value: The value of the corresponding parameter.
| Required parameter | Endpoints | Description |
|---|---|---|
| server | All endpoints | Enter eu1 or us1, depending on the region of your account. To find your account’s region, click the Profile & Account icon on the left side of the page. |
| api_version | All endpoints | Enter v1. |
| projectId | List All Environments, List All Published Pipelines, Execute Published Pipeline, Get Pipeline Status, Get Pipeline Steps Status, List All Schedules, Create Schedule, List Artifacts, Get Artifact, Promote Artifact, List All Secret References, Create Secret Reference | projectId is unique to every project. Retrieve this value by using the List All Projects endpoint. |
| pipelineExecutionId | Get Pipeline Status, Get Pipeline Steps Status | pipelineExecutionId is unique to every pipeline. Retrieve this value by using the Execute Published Pipeline endpoint. |
| secretReferenceName | Create Secret Reference | The name of the secret reference. This can be found in the Secret definitions tab, under the Name column. |
| agentId | Get Agent Details, Trigger Agent Command, Get Agent Client Credentials, Perform Action On Agent Credentials | The ID of the agent to retrieve. This can be found in the left navigation, under Agents & Instances, then click Agents. Select the intended agent, and click the Parameters tab. |
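As an illustration only, the URI parameter grid for an endpoint such as List All Environments could be completed with name/value pairs like the sketch below. The projectId value is a hypothetical placeholder; retrieve your real value with the List All Projects endpoint.

```json
{
  "server": "us1",
  "api_version": "v1",
  "projectId": "<your-project-id>"
}
```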
- Parameter Name: The name of a query parameter.
- Parameter Value: The value of the corresponding parameter.
| Required parameter | Endpoints | Description |
|---|---|---|
| size | List All Projects, List All Environments, List All Published Pipelines, Get Pipeline Steps Status, List All Schedules, List Artifacts, List All Secret References, List All Agents, Query Audit Events, Get Lineage Events | Enter the number of records per page, ranging from 1 to 100. |
| environmentName | List All Published Pipelines, List Artifacts, Get Artifact | Enter the environment name. For example, test-environment-1. |
| consumedFrom | Get Consumption | Enter the start date for the results. This value is inclusive, meaning results from this date onward are included. For example, 2024-11-01. |
| consumedBefore | Get Consumption | Enter the end date for the results. This value is exclusive, meaning it includes only results occurring before (but not on) this date. For example, 2024-12-01. |
| consumedFrom | Get Matillion ETL Users Consumption | Enter the start date and time for the results. This value is inclusive, meaning results from this date and time onward are included. For example, 2024-07-01T00:00:00.123Z. |
| consumedBefore | Get Matillion ETL Users Consumption | Enter the end date and time for the results. This value is exclusive, meaning it includes only results occurring before (but not on) this date and time. For example, 2024-07-31T00:00:00.123Z. |
| versionName | Get Artifact | The Version name used when you push local changes to the remote repository. For more information, read Git push local changes. |
| limit | Pipeline Executions | Enter the maximum number of results to return. The default value is set to 25. |
| from | Query Audit Events | Enter the earliest date and time of audit events to retrieve. The date time format must be in ISO 8601 format, for example: 2025-02-20T07:15:15.000-01:00. |
| to | Query Audit Events | Enter the latest date and time of audit events to retrieve. The date time format must be in ISO 8601 format, for example: 2025-02-21T07:15:15.000-01:00. |
| generatedFrom | Get Lineage Events | Include events generated on or after this date time. The value must be earlier than generatedBefore. |
| generatedBefore | Get Lineage Events | Include events generated up to, but not including, this date time. The value must be later than generatedFrom. |
| page | Get Lineage Events | The page number to use for pagination, starting at 0. Must be 0 or greater. |
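As an illustration only, the query parameter grid for the Get Consumption endpoint could be completed with name/value pairs like the sketch below; the dates are placeholders based on the examples above.

```json
{
  "consumedFrom": "2024-11-01",
  "consumedBefore": "2024-12-01"
}
```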
The Get Matillion ETL Users Consumption endpoint provides information about the number of credits charged for Matillion ETL users, and identifies which users contributed to those charges. Matillion ETL users are billed based on monthly active unique users, so ensure that the consumedFrom and consumedBefore parameters correspond to the timeframe of a single monthly invoice.
- Parameter Name: The name of a header parameter.
- Parameter Value: The value of the corresponding parameter.
| Required parameter | Endpoints | Description |
|---|---|---|
| Content-Type | Execute Published Pipeline, Get Pipeline Status, Get Pipeline Steps Status, Create Schedule, Promote Artifact, Create Secret Reference, List All Agents, Create Agent, Get Agent Details, Trigger Agent Command, Get Agent Client Credentials, Perform Action On Agent Credentials, Query Audit Events, Get Lineage Events | Enter application/json. |
| accept | Execute Published Pipeline, Get Pipeline Status, Get Pipeline Steps Status | Enter application/json. |
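For example, the header parameter grid for the Execute Published Pipeline endpoint would contain these name/value pairs:

```json
{
  "Content-Type": "application/json",
  "accept": "application/json"
}
```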
A JSON body to include as part of a POST request. Use Custom Connector to test that your endpoints work as expected before moving to pipelines. You should also consult the developer documentation for the API you're connecting to, as the developer portal may provide additional information about endpoints and requests.
A POST Body is required for the following endpoints (illustrative example bodies for two of these endpoints are sketched after this list):
- Execute Published Pipeline: a POST Body that specifies the pipeline to execute.
- Create Schedule: a POST Body that defines the schedule to create.
- Promote Artifact: a POST Body that promotes an artifact.
- Create Secret Reference: a POST Body that creates a secret for an AWS agent.
- Create Agent: a POST Body that creates a new AWS agent.
- Trigger Agent Command: a POST Body that triggers the RESTART agent command. Other available agent commands are PAUSE and RESUME.
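The exact JSON structure of each POST Body is defined by the API, so treat the following as illustrative sketches rather than definitive payloads; the field names and placeholder values (pipelineName, environmentName, command) are assumptions to verify against the API documentation. A sketch of a POST Body for the Execute Published Pipeline endpoint:

```json
{
  "pipelineName": "<your-published-pipeline-name>",
  "environmentName": "test-environment-1"
}
```

A sketch of a POST Body for the Trigger Agent Command endpoint, using the RESTART command:

```json
{
  "command": "RESTART"
}
```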
For other agents, the POST body will vary. For example, in Azure, you must specify a value for vaultName.
A numeric value to limit the maximum number of records per page.
Destination
Select your cloud data warehouse.
- Snowflake
- Databricks
- Amazon Redshift
- Snowflake: Load your data into Snowflake. You’ll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into your preferred cloud storage location.
The Snowflake warehouse used to run the queries. The special value [Environment Default] uses the warehouse defined in the environment. Read Overview of Warehouses to learn more.
The Snowflake database. The special value [Environment Default] uses the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
The Snowflake schema. The special value [Environment Default] uses the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
The name of the table to be created.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Append: If the specified table name already exists, then the data is inserted without altering or deleting the existing data in the table. It’s appended onto the end of the existing data in the table. If the specified table name doesn’t exist, then the table will be created, and your data will be inserted into the table.
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Select the stage access strategy. The strategies available depend on the cloud platform you select in Stage Platform.
- Credentials: Connects to the external stage (AWS, Azure) using your configured cloud provider credentials. Not available for Google Cloud Storage.
- Storage Integration: Use a Snowflake storage integration to grant access to Snowflake to read data from and write to a cloud storage location. This will reveal the Storage Integration property, through which you can select any of your existing Snowflake storage integrations.
Choose a data staging platform using the drop-down menu.
- Amazon S3: Stage your data on an AWS S3 bucket.
- Snowflake: Stage your data on a Snowflake internal stage.
- Azure Storage: Stage your data in an Azure Blob Storage container.
- Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Advanced Settings
Set the severity level of logging. Choose from Error, Warn, Info, Debug, or Trace. Logs can be found in the Message field of the task details after the pipeline has been run.
Choose whether to return the entire payload or only selected data objects. Read Structure to learn how to select which data objects to include in your API response.
- No: Will return the entire payload. This is the default setting.
- Yes: Will return only the objects in Custom Connector that are marked as Selected Data in the Structure setting.
Deactivate soft delete for Azure blobs (Databricks)
If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the “Enable soft delete for blobs” setting in your Azure account for your pipeline to run successfully. To do this:
- In the Azure portal, navigate to your storage account.
- In the menu, under Data management, click Data protection.
- Clear the Enable soft delete for blobs checkbox. For more information, read Soft delete for blobs.

