The Anaplan orchestration component uses the Connect and Configure parameters to query the Anaplan Bulk API and create a table of Anaplan export data, which is then stored in your preferred storage location (Snowflake, Databricks, Amazon Redshift, or cloud storage). You do not need to use the Create Table component with this connector; the Anaplan component will create a new table or replace an existing table for you, using the Destination parameters you define.

The Anaplan connector supports full data loads only. Each pipeline run exports the complete dataset from the selected Anaplan export action.

If the component requires access to a cloud provider (AWS, Azure, or GCP), it will use the cloud credentials associated with your environment to access resources. To stage data to Azure Blob Storage, the Azure credentials associated with your environment must be assigned the Storage Blob Data Contributor role. For more information, read User assigned with the Storage Blob Data Contributor role.
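For context, the sketch below outlines the kind of Anaplan Integration API v2 calls the component automates: triggering the chosen export action, then polling the task until it completes. The function name, polling interval, and `token` variable are illustrative (the token exchange itself is sketched under Connect); the component performs all of these steps for you.

```python
# Minimal sketch: trigger an Anaplan export action and wait for it to finish.
import time

import requests

BASE = "https://api.anaplan.com/2/0"

def run_export(token, workspace_id, model_id, export_id):
    headers = {"Authorization": f"AnaplanAuthToken {token}"}
    export_url = f"{BASE}/workspaces/{workspace_id}/models/{model_id}/exports/{export_id}"
    # Start the export task.
    task_id = requests.post(
        f"{export_url}/tasks", headers=headers, json={"localeName": "en_US"}
    ).json()["task"]["taskId"]
    # Poll until the task reports COMPLETE.
    while True:
        task = requests.get(f"{export_url}/tasks/{task_id}", headers=headers).json()["task"]
        if task["taskState"] == "COMPLETE":
            return task
        time.sleep(5)
```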

Properties

Reference material is provided below for the Connect, Configure, Destination, and Advanced Settings properties.
This connector does not include a Connection Options parameter.
Name
string
required
A human-readable name for the component.

Connect

Authentication Method
drop-down
required
Currently, only Anaplan Username & Password credentials are supported.
Username
string
required
Your Anaplan login username.
Password
string
required
Your Anaplan password. This is stored as a secret definition. Read Secrets and secret definitions to learn how to store a password as a secret definition.
Login passwords have a 90-day lifespan in Anaplan, and must be reset manually.
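For reference, Anaplan authentication with these credentials works by exchanging the username and password for a short-lived API token via HTTP Basic auth against Anaplan's authentication service. The sketch below is a minimal illustration of that exchange (error handling beyond the status check is omitted); the component handles this for you.

```python
# Minimal sketch: exchange Anaplan username/password for an API token.
import requests

def get_token(username: str, password: str) -> str:
    resp = requests.post(
        "https://auth.anaplan.com/token/authenticate",
        auth=(username, password),  # HTTP Basic auth
    )
    resp.raise_for_status()
    # Subsequent API calls send: Authorization: AnaplanAuthToken <token>
    return resp.json()["tokenInfo"]["tokenValue"]
```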

Configure

Workspace
drop-down
required
An Anaplan workspace. You can find your Workspace ID in the URL of a given model. See the portion of the URL that reads /workspaces/[workspace-id]/ where [workspace-id] is an alphanumeric Workspace ID.
Model
drop-down
required
An Anaplan model. You can find your Model ID in the URL of a given model. See the portion of the URL that reads /models/[model-id]/ where [model-id] is an alphanumeric Model ID.
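If you're scripting outside the component, both IDs can be pulled from a model URL with a simple pattern match. The sketch below assumes the URL shape described above; the example URL and IDs are made up.

```python
# Extract the alphanumeric workspace and model IDs from an Anaplan model URL.
import re

url = ("https://us1a.app.anaplan.com/a/anapp#/workspaces/"
       "8a81b09c5e8d6f27/models/9E2A67B4C1D34F5E")  # illustrative URL

match = re.search(r"/workspaces/([A-Za-z0-9]+)/models/([A-Za-z0-9]+)", url)
if match:
    workspace_id, model_id = match.groups()
```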
Data Source Type
drop-down
required
Currently, only Anaplan export is supported.
Export
drop-down
required
Select an available export. Read Anaplan's documentation to learn how to create export actions.
It’s recommended that you format your exports as “Tabular Single or Multi Column CSV” to allow exports to be stored in a cloud data warehouse as a standard table.
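The CSV recommendation matters because a completed export is retrieved as a flat file: in the Bulk API, the export's result file shares the export's ID and is downloaded in chunks. The sketch below is illustrative (it assumes a `token` as in the earlier sketches); a Tabular Single or Multi Column export decodes to plain CSV rows.

```python
# Sketch: download a completed export's result file, chunk by chunk.
import requests

BASE = "https://api.anaplan.com/2/0"

def download_export(token, workspace_id, model_id, export_id):
    headers = {"Authorization": f"AnaplanAuthToken {token}"}
    file_url = f"{BASE}/workspaces/{workspace_id}/models/{model_id}/files/{export_id}"
    chunks = requests.get(f"{file_url}/chunks", headers=headers).json()["chunks"]
    data = b"".join(
        requests.get(
            f"{file_url}/chunks/{chunk['id']}",
            headers={**headers, "Accept": "application/octet-stream"},
        ).content
        for chunk in chunks
    )
    return data.decode("utf-8")  # tabular CSV exports decode to plain text
```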

Destination

Select your cloud data warehouse.
Destination
drop-down
required
Select the destination for your data: either a table in Snowflake or files in cloud storage.
  • Snowflake: Load your data into a table in Snowflake. The data must first be staged via Snowflake or a cloud storage solution.
  • Cloud Storage: Load your data directly into files in your preferred cloud storage location. The format of these files can differ between source systems and will not have a file extension, so we suggest inspecting the output to determine the format of the data.
Warehouse
drop-down
required
The Snowflake warehouse used to run the queries. The special value [Environment Default] uses the warehouse defined in the environment. Read Overview of Warehouses to learn more.
Database
drop-down
required
The Snowflake database to access. The special value [Environment Default] uses the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.
Schema
drop-down
required
The Snowflake schema. The special value [Environment Default] uses the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.
Table Name
string
required
The name of the table to be created in your Snowflake database. You can use a Table Input component in a transformation pipeline to access and transform this data after it has been loaded.
Load Strategy
drop-down
required
Define what happens if the table name already exists in the specified Snowflake database and schema. Each strategy corresponds to familiar SQL, as sketched after this list.
  • Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
  • Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
  • Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
  • Append: If the specified table name already exists, then the data is inserted without altering or deleting the existing data in the table. It’s appended onto the end of the existing data in the table. If the specified table name doesn’t exist, then the table will be created, and your data will be inserted into the table.
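The mapping below is an illustrative sketch only: the table name and column list are placeholders, and the component generates and runs the real statements for you.

```python
# Hypothetical mapping of load strategies to the Snowflake SQL they imply.
LOAD_STRATEGY_SQL = {
    "Replace": "CREATE OR REPLACE TABLE my_table (...)",    # drop and recreate
    "Truncate and Insert": "TRUNCATE TABLE my_table",       # keep definition, remove rows
    "Fail if Exists": "CREATE TABLE my_table (...)",        # errors if my_table exists
    "Append": "CREATE TABLE IF NOT EXISTS my_table (...)",  # leave existing rows intact
}
# In every case, the staged rows are then loaded, e.g. via COPY INTO my_table.
```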
Primary Keys
dual listbox
Select one or more columns to be designated as the table’s primary key.
Clean Staged Files
boolean
required
  • Yes: Staged files will be destroyed after data is loaded. This is the default setting.
  • No: Staged files are retained in the staging area after data is loaded.
Stage Access Strategy
drop-down
Select the stage access strategy. The strategies available depend on the cloud platform you select in Stage Platform.
  • Credentials: Connects to the external stage (AWS, Azure) using your configured cloud provider credentials. Not available for Google Cloud Storage.
  • Storage Integration: Use a Snowflake storage integration to allow Snowflake to read data from, and write data to, a cloud storage location. This will reveal the Storage Integration property, through which you can select any of your existing Snowflake storage integrations.
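For reference, a Snowflake storage integration is created by an account administrator with DDL along the following lines. This is a hedged sketch: the integration name, role ARN, bucket, and connection details are placeholders, and the exact parameters depend on your cloud provider (the example shows S3).

```python
# Sketch: Snowflake DDL for an S3 storage integration, run via the
# Snowflake Python connector. All names and ARNs are placeholders.
import snowflake.connector  # pip install snowflake-connector-python

DDL = """
CREATE STORAGE INTEGRATION my_s3_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-snowflake-role'
  STORAGE_ALLOWED_LOCATIONS = ('s3://my-staging-bucket/')
"""

conn = snowflake.connector.connect(
    user="<user>", password="<password>", account="<account>"  # placeholders
)
conn.cursor().execute(DDL)
conn.close()
```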
Stage Platform
drop-down
required
Use the drop-down menu to choose where the data is staged before being loaded into your Snowflake table.
  • Amazon S3: Stage your data on an AWS S3 bucket.
  • Snowflake: Stage your data on a Snowflake internal stage.
  • Azure Storage: Stage your data in an Azure Blob Storage container.
  • Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.

Advanced Settings

Parse 'Null' & Empty Strings as NULL
boolean
required
Converts common strings that represent null into a null value. This is case-sensitive and works with the following strings: "", "NULL", "NUL", "Null", "null". The default is No.
Currently, this property is only applicable when using Snowflake as your destination.
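In other words, only the exact strings listed are converted; any other casing passes through unchanged. A minimal sketch of the rule:

```python
# The case-sensitive set of strings this setting converts to NULL.
NULL_STRINGS = {"", "NULL", "NUL", "Null", "null"}

def parse_null(value: str):
    return None if value in NULL_STRINGS else value

assert parse_null("Null") is None
assert parse_null("nUll") == "nUll"  # unmatched casing is kept as-is
```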

Deactivate soft delete for Azure blobs (Databricks)

If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the “Enable soft delete for blobs” setting in your Azure account for your pipeline to run successfully. To do this:
  1. In the Azure portal, navigate to your storage account.
  2. In the menu, under Data management, click Data protection.
  3. Clear the Enable soft delete for blobs checkbox. For more information, read Soft delete for blobs.
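If you prefer to script the change, the sketch below shows the equivalent update through the Azure Blob Storage SDK for Python. The account URL is a placeholder, and your credential must be permitted to set blob service properties on the account.

```python
# Sketch: disable "soft delete for blobs" on a storage account.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, RetentionPolicy

client = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)
client.set_service_properties(
    delete_retention_policy=RetentionPolicy(enabled=False)  # turn soft delete off
)
```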