Storage Blob Data Contributor role. For more information, read User assigned with the Storage Blob Data Contributor role.
Properties
Reference material is provided below for the Connect, Configure, and Destination properties.
A human-readable name for the component.
Connect
Your Workday host name. Read the Workday authentication guide to learn how to acquire this credential.
Your Workday Tenant ID. Read the Workday authentication guide to learn how to acquire this credential.
The authentication method to authorize access to your Workday data. Choose OAuth 2.0 Authorization Code to use an OAuth connection, or Username & password to use a username and password.
(OAuth 2.0 Authorization Code only) Choose your OAuth connection from the drop-down menu. Click Manage to navigate to the OAuth connections list, where you can review existing OAuth connections and add new ones. Read OAuth to learn how to create an OAuth connection. Additionally, read the Workday authentication guide, which explains how to create an OAuth connection for Workday.
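For context, the sketch below shows the token exchange that an OAuth 2.0 Authorization Code connection performs behind the scenes. The host, tenant, client credentials, redirect URI, and token endpoint path are assumptions for illustration; use the values from the API client you register in your Workday tenant.

```python
# Minimal sketch of the OAuth 2.0 Authorization Code exchange (assumed endpoint path).
import requests

HOST = "wd2-impl-services1.workday.com"   # assumed example host
TENANT = "acme"                            # assumed tenant ID
TOKEN_URL = f"https://{HOST}/ccx/oauth2/{TENANT}/token"

resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "authorization_code",
        "code": "<authorization-code-from-redirect>",    # placeholder
        "redirect_uri": "https://example.com/callback",   # assumed redirect URI
    },
    auth=("<client-id>", "<client-secret>"),              # placeholders
    timeout=30,
)
resp.raise_for_status()
tokens = resp.json()
print(tokens["access_token"][:12], "...")  # bearer token used for subsequent API calls
```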
(Username & password only) Your Workday username.
(Username & password only)Choose the secret definition that represents your credentials for this connector.If you have not already saved your credentials for this connector as a secret definition, click Add secret to create a secret definition representing these credentials. Read Secrets and secret definitions for details about creating a secret definition.
Configure
The version of the Workday Web Services directory you want to use. The default is v41.0, but you can replace this with any valid version. Any Workday Web Services version listed here is supported.
Select the Workday Web Service you want to query. The drop-down menu will include all services available to the selected Version.
Select the operation you want to perform on the selected Workday Web Service. The drop-down will include all operations available to the selected Web Service.
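To make the Version, Web Service, and Operation settings concrete, the sketch below calls a Workday Web Services (WWS) operation directly, outside the connector. The host, tenant, credentials, and the endpoint path convention are assumptions; confirm them against your own Workday environment and the WSDL for the service you select.

```python
# Minimal sketch: calling the Get_Workers operation of the Human_Resources
# Web Service at version v41.0 via a raw SOAP request.
import requests

HOST = "wd2-impl-services1.workday.com"   # assumed example host
TENANT = "acme"                            # assumed tenant ID
SERVICE = "Human_Resources"                # the selected Web Service
VERSION = "v41.0"                          # the selected Version
USERNAME = "integration_user"              # assumed integration user
PASSWORD = "********"

endpoint = f"https://{HOST}/ccx/service/{TENANT}/{SERVICE}/{VERSION}"

envelope = f"""<?xml version="1.0" encoding="UTF-8"?>
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"
              xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
              xmlns:bsvc="urn:com.workday/bsvc">
  <env:Header>
    <wsse:Security>
      <wsse:UsernameToken>
        <wsse:Username>{USERNAME}@{TENANT}</wsse:Username>
        <wsse:Password>{PASSWORD}</wsse:Password>
      </wsse:UsernameToken>
    </wsse:Security>
  </env:Header>
  <env:Body>
    <bsvc:Get_Workers_Request bsvc:version="{VERSION}"/>
  </env:Body>
</env:Envelope>"""

response = requests.post(
    endpoint,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
    timeout=60,
)
response.raise_for_status()
print(response.text[:500])  # start of the SOAP response body (XML)
```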
Set filter settings for extracting data from Workday.
- Object Name - ID Type: Select an object from the drop-down menu.
- ID: Specify the value of the object.
- Descriptor: Provide a description of the object.
Select the columns to be extracted and loaded.
Exclude fine-grained organization object detail from the extract and load operation.
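As a rough illustration of how the filter settings (object, ID type, and ID value) shape the request sent to Workday, the fragment below shows the kind of Request_References element they correspond to. The element names follow the Get_Workers operation used in the earlier sketch and are assumptions for other services and operations; check the WSDL of the service you selected.

```python
# Hedged illustration: filter settings expressed as a WWS Request_References fragment.
worker_id = "21001"          # the ID value from the filter settings (placeholder)
id_type = "Employee_ID"      # the selected ID type (placeholder)

request_references = f"""
<bsvc:Request_References xmlns:bsvc="urn:com.workday/bsvc">
  <bsvc:Worker_Reference>
    <bsvc:ID bsvc:type="{id_type}">{worker_id}</bsvc:ID>
  </bsvc:Worker_Reference>
</bsvc:Request_References>
"""
# This fragment would sit inside the Get_Workers_Request body shown in the earlier
# sketch, restricting the extract to the referenced worker.
```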
Destination
Select your cloud data warehouse.
- Snowflake
- Databricks
- Amazon Redshift
Select the destination for your data. This is either in Snowflake as a table or as files in cloud storage.
- Snowflake: Load your data into a table in Snowflake. The data must first be staged via Snowflake or a cloud storage solution.
- Cloud Storage: Load your data directly into files in your preferred cloud storage location. The format of these files can differ between source systems and will not have a file extension, so we suggest inspecting the output to determine the format of the data (see the sketch below for one way to do this).
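Because files loaded to cloud storage carry no file extension, a quick way to identify their format is to check the leading bytes (magic numbers). This is a minimal sketch; the path is a placeholder for a file downloaded from your cloud storage location, and formats other than those listed may appear depending on the source system.

```python
# Identify the format of a staged/output file by its leading bytes.
def sniff_format(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(4)
    if head == b"PAR1":
        return "Parquet"
    if head.startswith(b"Obj\x01"):
        return "Avro"
    if head.startswith(b"\x1f\x8b"):
        return "gzip-compressed (decompress and sniff again)"
    if head.startswith(b"{") or head.startswith(b"["):
        return "probably JSON"
    return "unknown -- inspect manually (possibly CSV)"

print(sniff_format("downloaded_output_file"))  # placeholder path
```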
The Snowflake warehouse used to run the queries. The special value [Environment Default] uses the warehouse defined in the environment. Read Overview of Warehouses to learn more.
The Snowflake database to access. The special value [Environment Default] uses the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
The Snowflake schema. The special value [Environment Default] uses the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
The name of the table to be created in your Snowflake database. You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.
Define what happens if the table name already exists in the specified Snowflake database and schema.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Append: If the specified table name already exists, then the data is inserted without altering or deleting the existing data in the table. It’s appended onto the end of the existing data in the table. If the specified table name doesn’t exist, then the table will be created, and your data will be inserted into the table.
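For intuition only, the mapping below shows the Snowflake SQL each option behaves like. This is a hedged sketch, not the SQL the connector actually generates; the table name and column list are placeholders.

```python
# Illustrative mapping of the table-exists options to equivalent Snowflake SQL behavior.
TABLE = "MY_SCHEMA.WORKDAY_WORKERS"  # placeholder target table

strategy_sql = {
    "Replace":             f"CREATE OR REPLACE TABLE {TABLE} ( ... );",
    "Truncate and Insert": f"TRUNCATE TABLE {TABLE}; INSERT INTO {TABLE} ...;",
    "Fail if Exists":      f"CREATE TABLE {TABLE} ( ... );  -- errors if the table exists",
    "Append":              f"CREATE TABLE IF NOT EXISTS {TABLE} ( ... ); INSERT INTO {TABLE} ...;",
}

for name, sql in strategy_sql.items():
    print(f"{name}: {sql}")
```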
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Select the stage access strategy. The strategies available depend on the cloud platform you select in Stage Platform.
- Credentials: Connects to the external stage (AWS, Azure) using your configured cloud provider credentials. Not available for Google Cloud Storage.
- Storage Integration: Use a Snowflake storage integration to grant Snowflake access to read data from and write to a cloud storage location. This will reveal the Storage Integration property, through which you can select any of your existing Snowflake storage integrations.
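If you do not yet have a storage integration to select, the sketch below shows one way to create an S3-backed storage integration with the Snowflake Python connector. The account, credentials, integration name, IAM role ARN, and bucket are placeholders; grant usage on the integration to the role your environment uses, and follow Snowflake's documentation to complete the IAM trust relationship.

```python
# Minimal sketch: creating and describing a Snowflake storage integration for S3.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account-identifier>",   # placeholder
    user="<user>",                    # placeholder
    password="<password>",            # placeholder
)

conn.cursor().execute("""
    CREATE STORAGE INTEGRATION IF NOT EXISTS my_s3_integration
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'S3'
      ENABLED = TRUE
      STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-snowflake-role'
      STORAGE_ALLOWED_LOCATIONS = ('s3://my-staging-bucket/')
""")

# DESC shows the IAM user and external ID to add to the role's trust policy.
for row in conn.cursor().execute("DESC STORAGE INTEGRATION my_s3_integration"):
    print(row)
```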
Use the drop-down menu to choose where the data is staged before being loaded into your Snowflake table.
- Amazon S3: Stage your data on an AWS S3 bucket.
- Snowflake: Stage your data on a Snowflake internal stage.
- Azure Storage: Stage your data in an Azure Blob Storage container.
- Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.
Deactivate soft delete for Azure blobs (Databricks)
If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the “Enable soft delete for blobs” setting in your Azure account for your pipeline to run successfully. To do this:
- In the Azure portal, navigate to your storage account.
- In the menu, under Data management, click Data protection.
- Clear the Enable soft delete for blobs checkbox. For more information, read Soft delete for blobs.
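As an alternative to the portal steps above, the soft delete setting can also be turned off programmatically. This is a minimal sketch using the azure-storage-blob data-plane SDK; the connection string is a placeholder, and credentials with permission to set blob service properties (for example, an account key) are assumed.

```python
# Hedged sketch: disable blob soft delete on a storage account via azure-storage-blob.
from azure.storage.blob import BlobServiceClient, RetentionPolicy

service = BlobServiceClient.from_connection_string(
    "<storage-account-connection-string>"   # placeholder
)

# RetentionPolicy(enabled=False) corresponds to clearing "Enable soft delete for blobs".
service.set_service_properties(
    delete_retention_policy=RetentionPolicy(enabled=False)
)

props = service.get_service_properties()
print(props["delete_retention_policy"].enabled)  # expect False
```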

