The Database Query component runs SQL queries on an accessible database and copies the results to a table via storage. You can query cloud or on-premises databases, as long as they are network-accessible. You can use this component to stage data (get data into a table) so that you can perform further processing and transformations on it. The target table should be considered temporary, because it will be either truncated or recreated each time the component runs. You do not need to use the Create Table component alongside this component. After the component has run, you can use transformation pipelines to transform your data to fit your business requirements. If the component requires access to a cloud provider (AWS, Azure, or GCP), it will use credentials as follows:
  • If using Matillion Full SaaS: The component will use the cloud credentials associated with your environment to access resources.
  • If using Hybrid SaaS: By default, the component will inherit the agent's execution role (service account role). However, if there are cloud credentials associated with your environment, these will override the role.
This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Setting the load option Recreate Target Table to Off will prevent both recreation and truncation. Do not modify the target table structure manually.
We recommend using key-pair authentication for this component instead of a username and password, because Snowflake has announced plans to block single-factor password authentication by November 2025. For more information, read our Tech note.
The following new connectors are available to replace some Database Query database types. These connectors offer an Incremental Load option, which allows you to only load new or updated records each time your pipeline runs. If you’re using Database Query to connect to any of the following sources, we recommend that you use these new connectors instead:

Properties

Reference material is provided below for the Connect, Configure, Destination, and Advanced Settings properties.
Name
string
required
A human-readable name for the component.

Connect

Database Type
drop-down
required
Select the database type. Refer to the Database driver versions section further down for more information. Choose from:
  • Amazon Redshift
  • IBM DB2 for i
  • MariaDB
  • Microsoft SQL Server
  • Oracle
  • PostgreSQL
  • Snowflake
  • SQL Server (Microsoft Driver)
  • Sybase ASE
Connection URL
string
required
The URL for your chosen JDBC database. The general pattern of the URL will depend on the database, as follows:
  • Amazon Redshift: jdbc:redshift://<host>/<database>
  • IBM DB2 for i: jdbc:as400://<host>/<database>
  • MariaDB: jdbc:mariadb://<host>/<database>
  • Microsoft SQL Server: jdbc:jtds:sqlserver://<host>/<database>
  • Oracle: jdbc:oracle:thin:@<host>:1521:<database>
  • PostgreSQL: jdbc:postgresql://<host>/<database>
  • Snowflake: jdbc:snowflake://dummy-account.snowflakecomputing.com/
  • SQL Server (Microsoft Driver): jdbc:sqlserver://<host>;databaseName=<database>
  • Sybase ASE: jdbc:jtds:sybase://<host>/<database>
Make appropriate substitutions for the <host> and <database> parameters in these URL strings. Although many parameters and options can be added to the end of the URL, it is generally easier to add them in the Connection Options property.
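For example, a hypothetical PostgreSQL connection to a host named db.example.com and a database named sales would use the following URL (the host and database names are illustrative only):
jdbc:postgresql://db.example.com/sales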
For this component to work with a Microsoft SQL Server database connection, you must set the connection option trustServerCertificate to true. If you're using certified SSL keys, you don't need to set this connection option. Read SqlConnectionStringBuilder.TrustServerCertificate Property to learn more.
Username
string
A valid username for the database connection. Optional because authentication can also be performed using the Connection Options property.
Password
drop-down
Choose the secret definition that represents your credentials for this connector. If you have not already saved your credentials for this connector as a secret definition, click Add secret to create a secret definition representing these credentials. Read Secrets and secret definitions for details about creating a secret definition. Optional because authentication can also be performed using the Connection Options property.
Password Type
drop-down
When Database Type is set to Snowflake, choose whether your password is in the form of a password or private key.
Private Key
drop-down
When Database Type is set to Snowflake and Password Type is set to Private key, select the secret that represents your Snowflake private key. Read Using Snowflake key-pair authentication to learn how to store your Snowflake private key using a secret.
Warning for AWS users: If you are storing a multi-line secret in AWS Secrets Manager:
  1. Add your key and value to the Key/value tab of the Secret value section when storing your secret.
  2. Click the Plaintext tab.
  3. Replace any whitespace characters before and after ----- with \n. Do not remove whitespace characters in the BEGIN/END RSA PRIVATE KEY parts.
For example: {"dwh-bash-private-key":"-----BEGIN RSA PRIVATE KEY-----\nline1\nline2\nline3\n-----END RSA PRIVATE KEY-----"}Alternatively, you can run the following code in your terminal, replacing values where appropriate:
# Convert the PEM file's newlines to literal \n so the key fits in a single-line JSON string
PEM_CONTENT=$(awk '{printf "%s\\n", $0}' /path/to/your/file.pem)

# Store the key as a key/value secret in AWS Secrets Manager
aws secretsmanager create-secret \
  --name "MyKeyValueSecretWithPem" \
  --description "Secret with PEM file content" \
  --secret-string "{\"pem\":\"$PEM_CONTENT\"}"
Warning for Azure users: Do not store multi-line secrets via the Azure Key Vault GUI, as newlines may be stripped. Instead, use the Azure CLI. Read Store a multi-line secret in Azure Key Vault to work around this issue. The following CLI command will maintain newlines:
az keyvault secret set --vault-name <vault-name> --name <secret-name> --file <key-file-name>
Passphrase
drop-down
Use the drop-down menu to select the corresponding secret definition that denotes the value of your passphrase. If your private key is passphrase protected, you will also need to add a secret to store the passphrase. Read Using Snowflake key-pair authentication to learn how to store the Snowflake private key using a secret.
Connection Options
column editor
required
  • Parameter: A JDBC parameter supported by the database driver. Consult the specific database documentation for more details.
  • Value: A value for the given parameter.
Click the Text Mode toggle at the bottom of the Connection Options dialog to open a multi-line editor that lets you add items in a single block. For more information, read Text mode. To use grid variables, select the Use Grid Variable checkbox at the bottom of the Connection Options dialog.
For this component to work with a Microsoft SQL Server database connection, you must set the connection option trustServerCertificate to true. If you're using certified SSL keys, you don't need to set this connection option. Read SqlConnectionStringBuilder.TrustServerCertificate Property to learn more.
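For example, to satisfy the SQL Server requirement described above, you could add the following entry to the Connection Options grid (the value shown is the one the note above calls for):
  • Parameter: trustServerCertificate
  • Value: true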
SSH Tunnel
drop-down
required
Select an SSH Tunnel from the list of Network items. For detailed usage instructions, read the SSH Tunneling documentation.
If selected, the Connection URL will be the data source that your secure tunnel connects to.

Configure

Mode
drop-down
required
  • Basic: This mode will build a query for you using settings from the Schema, Data Source, Data Selection, Data Source Filter, Combine Filters, and Limit parameters. In most cases, this mode will be sufficient.
  • Advanced: This mode will require you to write an SQL-like query to call data from the service you're connecting to. The available fields and their descriptions can be found in the documentation for the specific database product.
While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which would be impossible in regular SQL.
SQL Query
code editor
required
This is an SQL-like SELECT query, written in the SQL accepted by your cloud data warehouse. Treat collections as table names, and fields as columns. Only available in Advanced mode. For detailed information about tables and views for this connector, read the section about the data model, found below.
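For example, a hypothetical Advanced-mode query that selects two columns from a source table named customers and filters on a date column (the table and column names are illustrative only):
SELECT id, email FROM customers WHERE created_date >= '2024-01-01'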
Data Source
drop-down
required
Select a single data source to be extracted from the source system and loaded into a table in the destination. The source system defines the data sources available. Use multiple components to load multiple data sources. For detailed information about tables and views for this connector, read the section about the data model, found below.
Data Selection
dual listbox
required
Choose one or more columns to return from the query. The columns available are dependent upon the data source selected. Move columns left-to-right to include in the query.To use grid variables, select the Use Grid Variable checkbox at the bottom of the Data Selection dialog.
Data Source Filter
column editor
required
Define one or more filter conditions that each row of data must meet to be included in the load.
  • Input Column: Select an input column. The available input columns vary depending upon the data source.
  • Qualifier:
    • Is: Compares the column to the value using the comparator.
    • Not: Reverses the effect of the comparison, so “Equals” becomes “Not equals”, “Less than” becomes “Greater than or equal to”, etc.
  • Comparator: Choose a method of comparing the column to the value. Possible comparators include “Equal to”, “Greater than”, “Less than”, “Greater than or equal to”, “Less than or equal to”, “Like”, and “Null”. “Equal to” can match exact strings and numeric values, while other comparators, such as “Greater than” and “Less than”, work only with numerics. The “Like” operator allows the wildcard character % to be used at the start and end of a string value to match a column. The “Null” operator matches only null values, ignoring whatever the value is set to. Not all data sources support all comparators, so only a subset of the above comparators may be available to choose from.
  • Value: The value to be compared.
Click the Text Mode toggle at the bottom of the Data Source Filter dialog to open a multi-line editor that lets you add items in a single block. For more information, read Text mode.
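For example, a hypothetical filter that loads only rows whose email column ends with a particular domain could be configured as follows (the column and value are illustrative only):
  • Input Column: email
  • Qualifier: Is
  • Comparator: Like
  • Value: %@example.com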
Combine Filters
drop-down
required
The data source filters you have defined can be combined using either And or Or logic. If And, then all filter conditions must be satisfied to load the data row. If Or, then only a single filter condition must be satisfied. The default is And. If you have only one filter, or no filters, this parameter is essentially ignored.
Limit
integer
required
Set a numeric value to limit the number of rows that are loaded. The default is 100. To load all rows from your data source, delete the default 100 and leave the field empty (i.e. do not set a limit).

Destination

Select your cloud data warehouse.
Type
drop-down
required
  • Standard: The data will be staged in your storage location before being loaded into a table. This is the only setting currently available.
Warehouse
drop-down
required
The Snowflake warehouse used to run the queries. The special value [Environment Default] uses the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
drop-down
required
The Snowflake database. The special value [Environment Default] uses the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
drop-down
required
The Snowflake schema. The special value [Environment Default] uses the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Target Table
string
required
The name of the table to be created in your Snowflake database. This table will be recreated and will drop any existing table of the same name. You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.
Primary Keys
dual listbox
Select one or more columns to be designated as the table’s primary key.
Stage
drop-down
required
Select a managed stage. The special value, [Custom], will create a stage “on the fly” for use solely within this component.
Stage Platform
drop-down
required
Use the drop-down menu to choose where the data is staged before being loaded into your Snowflake table.
  • Existing Amazon S3 Location: Activates the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
  • Existing Azure Blob Storage Location: Activates the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
  • Existing Google Cloud Storage Location: Activates the Storage Integration and GCS Staging Area properties, allowing users to specify a custom staging area within Google Cloud Storage.
  • Snowflake Managed: Create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete. This is the default setting.
Stage Authentication
drop-down
required
Select an authentication method for data staging. Only available when Stage Platform is set to either Existing Amazon S3 Location or Existing Azure Blob Storage Location.
  • Credentials: Uses the credentials configured in the environment. If no credentials have been configured, an error will occur.
  • Storage Integration: Use a Snowflake storage integration to authenticate data staging.
Storage Integration
drop-down
required
Select a Snowflake storage integration from the drop-down list. Storage integrations are required to permit Snowflake to read data from and write to your cloud storage location and must be set up in advance of selection.
S3 Staging Area
drop-down
required
Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. The temporary objects created in this bucket will be removed after the load completes.
Storage Account
drop-down
required
Select a storage account with your desired blob container to be used for staging the data. For more information, read Storage account overview.
Blob Container
drop-down
required
Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
GCS Staging Area
drop-down
required
The URL and path of the target Google Cloud Storage bucket to be used for staging the queried data.

Advanced Settings

Load Options
multiple drop-downs
required
  • Clean Staged Files: Destroy staged files after loading data. Default is On.
  • String Null is Null: Converts any strings equal to null into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off.
  • Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
  • File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
  • Trim String Columns: Remove leading and trailing characters from a string column. Default is On.
  • Compression Type: Set the compression type to either gzip (default) or None.
Encryption
drop-down
required
Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
KMS Key ID
drop-down
required
The ID of the KMS encryption key you have chosen to use in the Encryption property.
Concurrency
integer
required
The number of files to create in the specified S3 bucket. The default value is 2. Each instance is limited to 20 concurrent tasks at any one time, regardless of the amount of resources assigned to the agent instance. As such, a high level of concurrency in your pipelines can result in tasks being queued, making the overall pipeline execution take longer. Read Horizontal scaling for more information. For Amazon Redshift projects, this parameter is called Concurrency Value in the UI.
Concurrency Method
drop-down
required
This parameter only applies to Amazon Redshift projects.
  • Absolute: Uses the absolute value set in the Concurrency property (e.g. if set to 8, then eight files would be created in the staging store). This is the default setting.
  • STV_SLICES: The concurrency is treated as a calculated value. The calculation is:
Number of files = COUNT(*) from STV_SLICES x concurrency-value
If the STV_SLICES table count = 4, and you set the Concurrency value to 8, then the number of files created in the staging store is 4 x 8 = 32.
Fetch Size
integer
required
Optionally specify the batch size of rows to fetch at a time, for example, 500. When left blank, the chosen database's driver default fetch size is used.

Database driver versions

  • Amazon Redshift: redshift-jdbc42:2.1.0.18
  • IBM DB2 for i: jt400:9.1
  • MariaDB: mariadb-java-client:2.7.7
  • Microsoft SQL Server: jtds:1.3.1
  • Oracle: ojdbc8:21.9.0.0
  • PostgreSQL: postgresql:42.5.5
  • Sybase ASE: jtds:1.3.1
  • Snowflake: snowflake-jdbc:3.17.0
  • SQL Server (Microsoft Driver): 12.8.1.jre11

Due to licensing restrictions, this component uses the MariaDB driver when interacting with MySQL databases in Full SaaS deployments. For customers using a Hybrid SaaS deployment, the native MySQL driver can be used to interact directly with MySQL databases.