The Microsoft Exchange orchestration component uses the Connect and Configure parameters to create a table of Microsoft Exchange data, which is then stored in your preferred storage location (Snowflake, Databricks, Amazon Redshift, or cloud storage). You do not need to use the Create Table component when using this connector, as the Microsoft Exchange component will create a new table or replace an existing table for you using the Destination parameters you define. If the component requires access to a cloud provider (AWS, Azure, or GCP), it will use the cloud credentials associated with your environment to access resources. To stage data to Azure Blob Storage, the Azure credentials associated with your environment must be assigned the Storage Blob Data Contributor role. For more information, read User assigned with the Storage Blob Data Contributor role.

Properties

Reference material is provided below for the Connect, Configure, Destination, and Advanced Settings properties.
Name
string
required
A human-readable name for the component.

Connect

Authentication Type
drop-down
required
Select OAuth 2.0 Authorization Code from the drop-down menu.
Auth Secret
drop-down
required
Choose your OAuth connection from the drop-down menu. Click Manage to open the OAuth connections list, where you can review existing connections and add new ones. Read OAuth to learn how to create an OAuth connection. To set up a new OAuth connection:
  1. Click Manage under the drop-down menu.
  2. Click Add OAuth connection above the OAuth connections list.
  3. Give the OAuth connection an appropriate name.
  4. Use the drop-down menu to select the Microsoft Exchange provider.
  5. Select OAuth 2.0 Authorization Code Grant from the Authentication type drop-down.
  6. Enter your Azure Tenant ID. For information about how to create a tenant ID, read Create a new tenant in Microsoft Entra ID.
  7. Click Authorize.
Connection Options
column editor
  • Parameter: A JDBC parameter supported by the database driver. The available parameters are explained in the data model. Manual setup is not usually required, since sensible defaults are assumed.
  • Value: A value for the given parameter.
Click the Text Mode toggle at the bottom of the Connection Options dialog to open a multi-line editor that lets you add items in a single block. For more information, read Text mode.
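For example, with Text Mode toggled on, each option can be entered on its own line. The one-parameter=value-pair-per-line layout and the parameter names below are assumptions for illustration only; check the data model for the options your driver actually supports:

    Timeout=120
    Pagesize=500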

Configure

Mode
drop-down
required
  • Basic: This mode will build a query for you using settings from the Data Source, Data Selection, Data Source Filter, Combine Filters, and Row Limit parameters. In most cases, this mode will be sufficient.
  • Advanced: This mode will require you to write an SQL-like query to call data from the service you’re connecting to. The available fields and their descriptions are documented in the data model.
There are some special pseudo columns that can form part of a query filter but are not returned as data; these are fully described in the data model.
While the query is exposed in an SQL-like language, the exact semantics can be surprising. For example, filtering on a column can return more data than not filtering on it, which would be impossible in regular SQL.
SQL Query
code editor
required
This is an SQL-like SELECT query, interpreted by the connector rather than by your cloud data warehouse. Treat collections as table names, and fields as columns. Only available when Mode is set to Advanced.
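As a sketch, an Advanced mode query might look like the following. The Messages collection and its field names are illustrative assumptions, not confirmed parts of the Microsoft Exchange data model; consult the data model for the real collections, fields, and pseudo columns:

    -- Hypothetical collection and fields; verify against the data model
    SELECT Subject, DateTimeReceived
    FROM Messages
    WHERE DateTimeReceived > '2024-01-01'

A pseudo column used in the WHERE clause would shape which rows are fetched without appearing in the returned columns, per the caveat above.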
Data Source
drop-down
required
Select a single data source to be extracted from the source system and loaded into a table in the destination. The source system defines the data sources available. Use multiple components to load multiple data sources.
Data Selection
dual listbox
required
Choose one or more columns to return from the query. The columns available depend on the data source selected. Move columns left-to-right to include them in the query. To use grid variables, select the Use Grid Variable checkbox at the bottom of the Data Selection dialog.
Data Source Filter
column editor
Define one or more filter conditions that each row of data must meet to be included in the load.
  • Input Column: Select an input column. The available input columns vary depending upon the data source.
  • Qualifier:
    • Is: Compares the column to the value using the comparator.
    • Not: Reverses the effect of the comparison, so “Equals” becomes “Not equals”, “Less than” becomes “Greater than or equal to”, etc.
  • Comparator: Choose a method of comparing the column to the value. Possible comparators include: “Equal to”, “Greater than”, “Less than”, “Greater than or equal to”, “Less than or equal to”, “Like”, “Null”. Not all data sources support all comparators.
  • Value: The value to be compared.
Click the Text Mode toggle at the bottom of the dialog to open a multi-line editor. For more information, read Text mode.
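As a worked example, a filter row of Input Column DateTimeReceived, Qualifier Is, Comparator Greater than, and Value 2024-01-01 behaves like the WHERE clause sketched below (the column name is an illustrative assumption):

    -- Qualifier "Is" keeps the comparison as written;
    -- Qualifier "Not" would invert it to <=
    WHERE DateTimeReceived > '2024-01-01'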
Combine Filters
drop-down
The data source filters you have defined can be combined using either And or Or logic. If And, all filter conditions must be satisfied to load a data row. If Or, a row is loaded when at least one filter condition is satisfied. The default is And. If you have only one filter, or no filters, this parameter is ignored.
Row Limit
integer
Set a numeric value to limit the number of rows that are loaded. The default is an empty field, which will load all rows.

Destination

Select your cloud data warehouse.
Destination
drop-down
required
Select the destination for your data: either a table in Snowflake or files in cloud storage.
  • Snowflake: Load your data into a table in Snowflake. The data must first be staged via Snowflake or a cloud storage solution.
  • Cloud Storage: Load your data directly into files in your preferred cloud storage location. The format of these files can differ between source systems, and the files will not have a file extension, so we suggest inspecting the output to determine the format of the data.
Warehouse
drop-down
required
The Snowflake warehouse used to run the queries. The special value [Environment Default] uses the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
drop-down
required
The Snowflake database to access. The special value [Environment Default] uses the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
drop-down
required
The Snowflake schema. The special value [Environment Default] uses the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Table Name
string
required
The name of the table to be created in your Snowflake database. You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.
Load Strategy
drop-down
required
Define what happens if the table name already exists in the specified Snowflake database and schema.
  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
  • Append: If the specified table name already exists, then the data is inserted without altering or deleting the existing data in the table. It’s appended onto the end of the existing data in the table. If the specified table name doesn’t exist, then the table will be created, and your data will be inserted into the table.
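Conceptually, the strategies correspond to the Snowflake SQL sketched below. The table and column names are illustrative assumptions, and the component may not run these exact statements:

    -- Replace: drop any existing table and recreate it
    CREATE OR REPLACE TABLE exchange_messages (subject VARCHAR, received TIMESTAMP);

    -- Truncate and Insert: keep the table definition, remove all rows, then reload
    TRUNCATE TABLE exchange_messages;

    -- Append: insert new rows without touching existing data
    INSERT INTO exchange_messages (subject, received)
      VALUES ('Example subject', '2024-01-01 00:00:00');

Fail if Exists has no SQL sketch here: the pipeline simply stops with an error instead of modifying the table.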
Primary Keys
dual listbox
Select one or more columns to be designated as the table’s primary key.
Clean Staged Files
boolean
required
  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.
Stage Access Strategy
drop-down
Select the stage access strategy. The strategies available depend on the cloud platform you select in Stage Platform.
  • Credentials: Connects to the external stage (AWS, Azure) using your configured cloud provider credentials. Not available for Google Cloud Storage.
  • Storage Integration: Use a Snowflake storage integration to grant access to Snowflake to read data from and write to a cloud storage location. This will reveal the Storage Integration property, through which you can select any of your existing Snowflake storage integrations.
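If you choose Storage Integration, the integration must already exist in Snowflake. A minimal sketch for an S3-backed integration, in which the integration name, role ARN, and bucket path are placeholder assumptions, looks like this:

    CREATE STORAGE INTEGRATION my_s3_integration
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'S3'
      ENABLED = TRUE
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-snowflake-role'
      STORAGE_ALLOWED_LOCATIONS = ('s3://my-staging-bucket/exchange/');

Once created, the integration appears in the Storage Integration drop-down.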
Stage Platform
drop-down
required
Use the drop-down menu to choose where the data is staged before being loaded into your Snowflake table.
  • Amazon S3: Stage your data on an AWS S3 bucket.
  • Snowflake: Stage your data on a Snowflake internal stage.
  • Azure Storage: Stage your data in an Azure Blob Storage container.
  • Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.

Advanced Settings

Auto Debug
boolean
Choose whether to automatically log debug information about your load. These logs can be found in the task history and should be included in support requests concerning the component. This property is set to No by default. Turning this on will override any debugging Connection Options you may have set.
Debug Level
drop-down
required
The level of detail you want to include in your debug logs. Select a level between 1 and 4:
  1. Will log the query, the number of rows returned by it, the start of execution, the time taken, and any errors.
  2. Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
  3. Will log everything included in Levels 1 and 2, and additionally log the body of the request and the response. This is the default logging level when debug logging is activated.
  4. Will log everything included in Levels 1, 2, and 3, and additionally log transport-level communication with the data source. This includes SSL negotiation.
Levels above 1 can log huge amounts of data and result in slower query execution.
Parse 'Null' & Empty Strings as NULL
boolean
required
Converts common strings that represent null into a null value. This is case-sensitive and works with the following strings: “”, “NULL”, “NUL”, “Null”, “null”. The default is No.
Currently, this property is only applicable when using Snowflake as your destination.

Trim String Columns
boolean
required
When Yes, leading and trailing whitespace is removed from string columns. The default is No.

Data model

The JDBC driver for this component models Microsoft Exchange APIs as relational tables, views, and stored procedures, which are documented in the data model. You’ll also find API limitations and requirements. This connector also allows you to query system tables in Advanced mode. To see the available system tables in the data model, read the System Tables section of the data model. For more information about using system tables, read our System tables guide.
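For instance, drivers of this kind often expose their metadata through system tables. Assuming a sys_tables system table is available (confirm this in the System Tables section of the data model), you could list the queryable collections with:

    SELECT * FROM sys_tables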

Deactivate soft delete for Azure blobs (Databricks)

If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the “Enable soft delete for blobs” setting in your Azure account for your pipeline to run successfully. To do this:
  1. In the Azure portal, navigate to your storage account.
  2. In the menu, under Data management, click Data protection.
  3. Clear the Enable soft delete for blobs checkbox. For more information, read Soft delete for blobs.