An environment defines the connection between a project and your chosen cloud data warehouse. Environments include useful defaults, such as a default warehouse, database, and schema, that can be used to pre-populate component configurations. Haven't added a project yet? Read Add project.
We recommend using separate environments for development and production:
  • Use development environments for building, testing, and iterating on pipelines before they are deployed.
  • Use production environments to run pipelines that are fully deployed to work on live data. Only stable and thoroughly tested pipelines should be deployed here.
  • You can also use intermediate environments, such as staging, test, or preprod, to validate pipelines before they are deployed to production. These can also be used for performance testing.
For more information, read Matillion’s Unlocking Data Productivity DataOps guide.

Add an environment

  1. In the left navigation, click .
  2. Select your project.
  3. Click the Environments tab.
  4. Click Add new environment.
  • Environment name: A unique name for the environment. Max 255 characters.
  • Agent: A working agent. This is only required if you are using a Hybrid SaaS solution. To learn how to create an agent, read Create an agent.
  • Default environment access: Use the drop-down to select the default access for all new and existing users added to the project. For more information, read Environment roles.
Agents can be restricted to specific projects and environments. If an agent is not allowed for your project or environment, it will not appear in the Agent drop-down. For more information, read Restricting agents.
  5. Click Continue.
Depending on the data platform you selected when creating your project, follow the corresponding instructions below to specify your cloud data warehouse credentials and select your data warehouse defaults for this environment.

Snowflake

Prerequisites

Before configuring a Snowflake connection, you will need the Snowflake account details and credentials described in the tables below. For details about Snowflake key-pair authentication, read the Snowflake guide to Configuring key-pair authentication.

Specify credentials

Use the reference tables below to set up your environment connection to your cloud data platform. If you’re using a Full SaaS deployment, credentials such as passwords and private keys are stored directly as strings. However, if you’re using a Hybrid SaaS deployment with your own AWS or Azure Data Productivity Cloud agent, credentials such as passwords and private keys are only retrieved via references to secrets created in either AWS Secrets Manager or Azure Key Vault.

Key-pair

We recommend key-pair authentication for your Snowflake connection, because Snowflake has announced plans to block single-factor password authentication by November 2025. For more information, read our Tech note. Refer to the table below if you're using Snowflake key-pair authentication.
  • Account: Your Snowflake account name and region. In the URL you use to log in to Snowflake, this is the part between https:// and .snowflakecomputing.com.
  • Credentials type: Select Key pair.
  • Username: Your Snowflake username.
  • Private key: Your Snowflake private key. To generate a key, read the Snowflake documentation for Generate the private key. Copy the full content of the generated Snowflake private key file into this field, including the header and footer lines. Only available in a Full SaaS deployment when Credentials type is Key pair.
  • Passphrase: An optional passphrase to use with your private key. Only available in a Full SaaS deployment when Credentials type is Key pair.
  • Vault name: Hybrid SaaS on Azure deployments only. Select the Azure Key Vault instance that this project will use to store secrets. Select [Default] to use the default key vault specified in the Data Productivity Cloud agent environment variables.
  • Private key secret name: Hybrid SaaS deployments only. A named entry created in AWS Secrets Manager or Azure Key Vault denoting the secret that holds your Snowflake private key. Read Using Snowflake key-pair authentication to learn how to store the key as a secret.
  • Passphrase secret name (optional): Hybrid SaaS deployments only. A named entry created in AWS Secrets Manager or Azure Key Vault denoting the secret that holds your Snowflake key-pair passphrase.
  • Passphrase secret key (optional): Hybrid SaaS deployments only. The secret key tied to your passphrase secret name.
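The account identifier can be read straight off your Snowflake login URL. A minimal shell sketch, using an illustrative URL (your organization and account names will differ):

```shell
# Strip the scheme and the snowflakecomputing.com suffix from a Snowflake
# login URL to recover the account identifier (URL below is illustrative).
url="https://myorg-myaccount.snowflakecomputing.com"
account="${url#https://}"
account="${account%.snowflakecomputing.com}"
echo "$account"   # → myorg-myaccount
```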
If your private key has been shared or copied between systems, its format may have been altered. To validate the key, run the following command, which prints the key back in the correct PEM format:
openssl rsa -in key.pem -check
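Snowflake's documentation generates the key pair with openssl. A minimal sketch under those docs' approach (file names are illustrative; production keys should normally be encrypted with a passphrase rather than using -nocrypt):

```shell
# Generate a 2048-bit RSA private key in PKCS#8 PEM format.
# -nocrypt keeps it unencrypted for brevity; Snowflake's guide also shows
# an encrypted variant, which the optional Passphrase field supports.
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -nocrypt -out rsa_key.p8

# Derive the public key, which is registered against your Snowflake user.
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub

# Sanity-check the private key before pasting it into the Private key field.
openssl rsa -in rsa_key.p8 -check -noout
```

The full contents of rsa_key.p8, including the BEGIN/END lines, are what go into the Private key field (Full SaaS) or into your secret (Hybrid SaaS).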

Password

Refer to this table if you’re using your Snowflake password to authenticate to Snowflake.
  • Account: Your Snowflake account name and region. In the URL you use to log in to Snowflake, this is the part between https:// and .snowflakecomputing.com.
  • Credentials type: Select Username and password.
  • Username: Your Snowflake username.
  • Password: Your Snowflake password. Only available in a Full SaaS deployment; in a Hybrid SaaS deployment, you specify your password as a secret instead.
  • Vault name: Hybrid SaaS on Azure deployments only. Select the Azure Key Vault instance that this project will use to store secrets. Select [Default] to use the default key vault specified in the Data Productivity Cloud agent environment variables.
  • Secret name: Hybrid SaaS deployments only. A named entry created in AWS Secrets Manager or Azure Key Vault that holds your Snowflake password.
  • Secret key: Hybrid SaaS on AWS deployments only. A named secret key tied to your secret name.
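The Secret name/Secret key split mirrors how AWS Secrets Manager stores a secret: the name identifies the secret, and the key is a field inside its JSON value. A sketch with illustrative names:

```shell
# The secret's value is a JSON object; "snowflake-password" here is the
# Secret key you would enter in the environment setup (names illustrative).
secret_value='{"snowflake-password":"REPLACE_ME"}'
echo "$secret_value"

# Hybrid SaaS on AWS: create the secret, then use "snowflake-login" as the
# Secret name. Requires AWS credentials, so shown commented out:
# aws secretsmanager create-secret --name snowflake-login \
#     --secret-string "$secret_value"
```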

Programmatic access token

An alternative authentication option is to use a Snowflake programmatic access token (PAT). To use this option, follow the instructions for Password authentication, above, using your PAT as the password. For more details of this authentication option, read Snowflake programmatic access token authentication.

Select defaults

  • Default role: The default Snowflake role for this environment connection. Read Overview of Access Control to learn more.
  • Default warehouse: The default Snowflake warehouse for this environment connection. Read Overview of Warehouses to learn more.
  • Default database: The default Snowflake database for this environment connection. Read Database, Schema, and Share DDL to learn more.
  • Default schema: The default Snowflake schema for this environment connection. Read Database, Schema, and Share DDL to learn more.
  • Default session parameters: Any session parameters you want to set as the default for this environment connection. Click the cog icon to open the Configure Session Parameters dialog, and enter a name and value for each required parameter. See below for more details.

Default session parameters

You can set session parameters to change the behavior of the Snowflake connection. For example, setting the QUOTED_IDENTIFIERS_IGNORE_CASE parameter determines whether the case of letters in double-quoted object identifiers is preserved. Setting default session parameters when you create an environment is optional; do so only if you need to change the default behavior of the Snowflake connection. To set default session parameters for the environment:
  1. In the Default session parameters field, click the cog icon to open the Configure Session Parameters dialog.
  2. Enter a name and value for each required parameter.
  3. Click Save to close the dialog.
For a description of the available session parameters, read the Snowflake documentation. Note that for security reasons we don’t allow all parameters on that page to be set, only the session-level parameters.
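For example, the Configure Session Parameters dialog takes plain name/value pairs that map directly onto Snowflake session parameters. Both parameters below are real session-level parameters; the values are illustrative:

```
Name: QUOTED_IDENTIFIERS_IGNORE_CASE    Value: TRUE
Name: TIMEZONE                          Value: America/New_York
```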

Databricks

Specify credentials

Use the reference table below to set up your environment connection to your cloud data platform. If you're using a Full SaaS deployment, credentials such as passwords and private keys are stored directly as strings. However, if you're using a Hybrid SaaS deployment with your own AWS or Azure Data Productivity Cloud agent, credentials such as passwords and private keys are only retrieved via references to secrets created in either AWS Secrets Manager or Azure Key Vault.
  • Instance name: Your Databricks instance name. Read the Databricks documentation to learn how to determine your instance name.
  • Personal access token: Your Databricks personal access token. Read the Databricks documentation to learn how to create a personal access token.
  • Vault name: Hybrid SaaS on Azure deployments only. Select the Azure Key Vault instance that this project will use to store secrets. Select [Default] to use the default key vault specified in the Data Productivity Cloud agent environment variables.
  • Secret name: A named entry created in AWS Secrets Manager or Azure Key Vault.
  • Secret key: Hybrid SaaS on AWS deployments only. A named secret key tied to your secret name.

Select defaults

  • Endpoint/Cluster: The Databricks cluster that the Data Productivity Cloud will connect to.
  • Catalog: The Databricks Unity Catalog to connect to.
  • Schema: The Databricks schema to connect to.

Amazon Redshift

Specify credentials

Use the reference table below to set up your environment connection to your cloud data platform. If you're using a Full SaaS deployment, credentials such as passwords and private keys are stored directly as strings. However, if you're using a Hybrid SaaS deployment with your own AWS or Azure Data Productivity Cloud agent, credentials such as passwords and private keys are only retrieved via references to secrets created in either AWS Secrets Manager or Azure Key Vault.
  • Endpoint: The physical address of the leader node. This will be either a name or an IP address.
  • Port: Usually 5439 or 5432, but it can be configured differently when setting up your Amazon Redshift cluster.
  • Use SSL: Select this to encrypt communications between the Data Productivity Cloud and Amazon Redshift. Some Amazon Redshift clusters may be configured to require this.
  • Username: The username for the environment connection.
  • Password: Full SaaS deployments only. Your Redshift password.
  • Vault name: Hybrid SaaS on Azure deployments only. Select the Azure Key Vault instance that this project will use to store secrets. Select [Default] to use the default key vault specified in the Data Productivity Cloud agent environment variables.
  • Secret name: Hybrid SaaS deployments only. A named entry created in AWS Secrets Manager or Azure Key Vault.
  • Secret key: Hybrid SaaS on AWS deployments only. A named secret key tied to your secret name.
Ensure the IAM user has appropriate permissions to read from and write to the specified S3 bucket. At a minimum, the user should have:
  • s3:GetObject
  • s3:PutObject
  • s3:ListBucket
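A minimal IAM policy granting those actions might look like the following sketch; the bucket name is a placeholder for your staging bucket. Note that s3:ListBucket applies to the bucket itself, while the object actions apply to its contents:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StagingObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-staging-bucket/*"
    },
    {
      "Sid": "StagingBucketListing",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-staging-bucket"
    }
  ]
}
```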
Next, in the Specify AWS cloud credentials dialog, in the drop-down, choose one of the following options:
  • Use the cloud credentials assigned to the agent you specified when creating this environment.
  • Enter different cloud credentials. This will override the IAM role belonging to the agent you specified.
If you choose to enter different cloud credentials, use the fields to enter the cloud credential name, access key ID, and secret access key. For details about access keys, read the AWS documentation.

Select defaults

  • Default database: The database you created when setting up your Amazon Redshift cluster. If your cluster hosts multiple databases, choose the one you want to use for this environment.
  • Default schema: This is public by default, but if you have configured multiple schemas within your Amazon Redshift database, specify the schema you want to use.
  • Default S3 bucket: The S3 bucket that this environment will use for staging data by default, unless specifically overridden within a component.
If you use a Matillion Full SaaS solution, the cloud credentials associated with your environment will be used to access the S3 bucket. If you use a Hybrid SaaS solution, your new environment will inherit the Data Productivity Cloud agent's execution role (service account role) to access the default S3 bucket specified here. To override this role, associate different cloud credentials with this environment after you have finished creating it. You can create these credentials before or after creating the environment.

Associate cloud provider credentials with an environment

Each environment in your project must have at least one set of cloud credentials associated with it. This allows you to access account resources on platforms other than the one hosting your project. For example, if your project is on AWS and you want to access resources in Azure, you need to associate your Azure cloud credentials with the environment.

You can associate credentials from multiple providers, but only one set of credentials per cloud provider. For example, you can associate both AWS and Azure credentials, but not two different sets of AWS credentials.

You can associate credentials with an environment when you first Create cloud provider credentials, or you can associate them later as follows:
  1. In your project, click the Environments tab.
  2. Click the three dots on the corresponding row of the environment you want to associate, and select Associate Credentials.
  3. Select the credentials from the drop-downs. You can associate one set of credentials for each cloud provider.
  4. Click Associate.

Manage environments

To view your environments:
  1. From the Your projects menu, select your project.
  2. Navigate to the Environments tab.
Click the column headers to sort your environments by name, default agent, cloud data warehouse account name, or credential type.

Edit an environment

  1. Click the three dots in the row of the environment you want to edit.
  2. Click Edit environment.

Delete an environment

Deleting an environment permanently removes the environment from your project. All artifacts and schedules in the deleted environment will be inaccessible. This action cannot be undone.
Before you delete an environment, you must:
  • Disable any active schedules that run pipelines in this environment.
  • Change the default environment of any branches that currently use this environment as their default. For more information, read Branches.
To delete an environment:
  1. Click the three dots in the row of the environment you want to delete.
  2. Click Delete environment.
  3. In the confirmation dialog, enter the name of the environment you want to delete.
  4. Click Delete environment.