Properties
Reference material is provided below for the Connect, Configure, Destination, and Advanced Settings properties.

Name
A human-readable name for the component.
Connect
The data source to load data from in this pipeline. The drop-down menu lists the Braze API endpoints available in the connector. For detailed information about specific endpoints, read the Braze API documentation.
| Endpoint | Method | Reference |
|---|---|---|
| Export Segment List | GET | Export segment list |
| Export Segment Analytics | GET | Export segment analytics |
| Export Monthly Active Users for Last 30 Days | GET | Export monthly active users for last 30 days |
| Export Campaign List | GET | Export campaigns list |
| Export Campaign Details | GET | Export campaign details |
| Export Campaign Analytics | GET | Export campaign analytics |
| Export Segment Details | GET | Export segment details |
| List Catalogs | GET | List catalogs |
| List Catalog Item Details | GET | List multiple catalog item details |
The authentication method to authorize access to your Braze data. Currently, only bearer token authentication is supported.
Use the drop-down menu to select the secret definition that holds your Braze bearer token. Read Secrets and secret definitions to learn how to create a new secret definition, and the Braze API documentation to learn how to acquire a bearer token.
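As an illustration, an authenticated request to any of the endpoints above combines your instance URL, the endpoint path, and the bearer token in an Authorization header. A minimal sketch in Python; the instance URL and token are placeholders, and the /segments/list path (Export Segment List) should be verified against the Braze API documentation:

```python
# Sketch: assembling an authenticated Braze GET request. The instance URL and
# token are placeholders; /segments/list corresponds to Export Segment List.
def build_braze_request(instance_url, path, bearer_token, params=None):
    """Return the URL, headers, and query parameters for a Braze GET request."""
    return {
        "url": f"{instance_url.rstrip('/')}{path}",
        "headers": {"Authorization": f"Bearer {bearer_token}"},
        "params": params or {},
    }

request = build_braze_request(
    "https://rest.iad-01.braze.com",  # placeholder instance URL
    "/segments/list",
    "example-bearer-token",
    params={"page": 0},
)
print(request["url"])  # https://rest.iad-01.braze.com/segments/list
```

The same bearer header is sent for every endpoint; only the path and parameters change.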
Configure
- Parameter Name: The name of a URI parameter.
- Parameter Value: The value of the corresponding parameter.
| Required parameter | Endpoints | Description |
|---|---|---|
| instance_url | All endpoints | The URL of your instance. |
| catalog_name | List Catalog Item Details | Name of the catalog. |
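URI parameters are substituted into the request path itself rather than appended as a query string. A hedged sketch of how instance_url and catalog_name combine for List Catalog Item Details; the /catalogs/{catalog_name}/items pattern follows Braze's documented catalog endpoints, but confirm it in the Braze API reference:

```python
# Sketch: URI parameters become part of the request path. The instance URL is
# a placeholder; the path pattern is taken from Braze's catalog endpoints.
def catalog_items_url(instance_url, catalog_name):
    """Build the List Catalog Item Details URL from its two URI parameters."""
    return f"{instance_url.rstrip('/')}/catalogs/{catalog_name}/items"

print(catalog_items_url("https://rest.iad-01.braze.com", "restaurants"))
# https://rest.iad-01.braze.com/catalogs/restaurants/items
```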
- Parameter Name: The name of a query parameter.
- Parameter Value: The value of the corresponding parameter.
| Required parameter | Endpoints | Description |
|---|---|---|
| sort_direction | Export Segment List, Export Campaign List | Sort order for creation time. Pass desc to sort from newest to oldest, or asc to sort from oldest to newest. |
| segment_id | Export Segment Analytics, Export Segment Details | Segment API identifier. The segment_id for a given segment can be found on the API Keys page in your Braze account, or you can use the Export segment list endpoint. |
| length | Export Segment Analytics, Export Monthly Active Users for Last 30 Days, Export Campaign Analytics | Maximum number of days before ending_at to include in the returned series. Must be between 1 and 100 (inclusive). |
| ending_at | Export Segment Analytics, Export Monthly Active Users for Last 30 Days, Export Campaign Analytics | Date on which the data series should end. Defaults to time of the request. |
| app_id | Export Monthly Active Users for Last 30 Days | App API identifier retrieved from the API Keys page. If excluded, results for all apps in the workspace will be returned. |
| page | Export Campaign List | The page of campaigns to return, defaults to 0 (returns the first set of up to 100). |
| include_archived | Export Campaign List | Whether or not to include archived campaigns, defaults to false. |
| last_edit.time[gt] | Export Campaign List | Filters the results to only campaigns that were last edited after the provided time. Format is yyyy-MM-DDTHH:mm:ss. |
| campaign_id | Export Campaign Details, Export Campaign Analytics | Campaign API identifier. The campaign_id for API campaigns can be found on the API Keys page and the Campaign Details page within your dashboard, or you can use the Export campaign list endpoint. |
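Query parameters from the table above are appended to the endpoint URL as a query string. A small sketch for Export Campaign Analytics; the campaign_id, length, and ending_at values are placeholders, and the /campaigns/data_series path should be verified against the Braze API documentation:

```python
from urllib.parse import urlencode

# Sketch: serializing the query parameters for Export Campaign Analytics.
# All values below are placeholders for illustration only.
params = {
    "campaign_id": "00000000-0000-0000-0000-000000000000",
    "length": 14,                        # 1-100 days before ending_at
    "ending_at": "2024-06-30T23:59:59",  # defaults to request time if omitted
}
query = urlencode(params)
url = f"https://rest.iad-01.braze.com/campaigns/data_series?{query}"
print(url)
```

Note that urlencode percent-encodes reserved characters such as the colons in the timestamp.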
- Parameter Name: The name of a header parameter.
- Parameter Value: The value of the corresponding parameter.
A JSON body to include as part of a POST request. Use Custom Connector to test that your endpoints work as expected before moving to Designer pipelines. You should also consult the developer documentation for the API you're connecting to, as the developer portal may provide additional information about endpoints and requests.
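When an endpoint expects a POST body, the body is sent as a JSON document. The field names below are purely illustrative placeholders, not a documented Braze schema; consult the API's developer documentation for the real fields:

```python
import json

# Sketch: serializing a POST body. Field names are illustrative placeholders,
# not taken from any specific Braze endpoint.
body = {
    "example_field": "example-value",
    "example_list": ["item-1", "item-2"],
}
payload = json.dumps(body)
print(payload)
```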
A numeric value to limit the maximum number of records per page.
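Endpoints such as Export Campaign List return results one page at a time (up to 100 records per page), and the limit above caps the records per page. A hedged sketch of the resulting pagination loop, where fetch_page is a hypothetical stand-in for the actual API call:

```python
# Sketch: page-based pagination. fetch_page(page) is a hypothetical function
# standing in for a real API call; it returns one page of records as a list.
def collect_all_pages(fetch_page, page_size=100):
    """Request pages 0, 1, 2, ... until a short (or empty) page is returned."""
    records, page = [], 0
    while True:
        batch = fetch_page(page)
        records.extend(batch)
        if len(batch) < page_size:  # a short page means we've reached the end
            return records
        page += 1
```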
Destination
Select your cloud data warehouse.
- Snowflake
- Databricks
- Amazon Redshift
- Snowflake: Load your data into Snowflake. You’ll need to set a cloud storage location for temporary staging of the data.
- Cloud Storage: Load your data directly into your preferred cloud storage location.
The Snowflake warehouse used to run the queries. The special value [Environment Default] uses the warehouse defined in the environment. Read Overview of Warehouses to learn more.

The Snowflake database. The special value [Environment Default] uses the database defined in the environment. Read Databases, Tables and Views - Overview to learn more.

The Snowflake schema. The special value [Environment Default] uses the schema defined in the environment. Read Database, Schema, and Share DDL to learn more.

The name of the table to be created.
- Replace: If the specified table name already exists, that table will be destroyed and replaced by the table created during this pipeline run.
- Truncate and Insert: If the specified table name already exists, all rows within the table will be removed and new rows will be inserted per the next run of this pipeline.
- Fail if Exists: If the specified table name already exists, this pipeline will fail to run.
- Append: If the specified table name already exists, then the data is inserted without altering or deleting the existing data in the table. It’s appended onto the end of the existing data in the table. If the specified table name doesn’t exist, then the table will be created, and your data will be inserted into the table.
- Yes: Staged files will be destroyed after data is loaded. This is the default setting.
- No: Staged files are retained in the staging area after data is loaded.
Select the stage access strategy. The strategies available depend on the cloud platform you select in Stage Platform.
- Credentials: Connects to the external stage (AWS, Azure) using your configured cloud provider credentials. Not available for Google Cloud Storage.
- Storage Integration: Use a Snowflake storage integration to grant Snowflake access to read from and write to a cloud storage location. This will reveal the Storage Integration property, through which you can select any of your existing Snowflake storage integrations.
Choose a data staging platform using the drop-down menu.
- Amazon S3: Stage your data on an AWS S3 bucket.
- Snowflake: Stage your data on a Snowflake internal stage.
- Azure Storage: Stage your data in an Azure Blob Storage container.
- Google Cloud Storage: Stage your data in a Google Cloud Storage bucket.
Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them. Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage, regardless of the cloud provider that hosts your Snowflake account.
An AWS S3 bucket to stage data into. The drop-down menu will include buckets tied to the cloud provider credentials that you have associated with your environment.
Advanced Settings
Set the severity level of logging. Choose from Error, Warn, Info, Debug, or Trace. Logs can be found in the Message field of the task details after the pipeline has been run.
Choose whether to return the entire payload or only selected data objects. Read Structure to learn how to select which data objects to include in your API response.
- No: Will return the entire payload. This is the default setting.
- Yes: Will return only the objects in Custom Connector that are marked as Selected Data in the Structure setting.
Deactivate soft delete for Azure blobs (Databricks)
If you intend to set your destination as Databricks and your stage platform as Azure Storage, you must turn off the “Enable soft delete for blobs” setting in your Azure account for your pipeline to run successfully. To do this:
- In the Azure portal, navigate to your storage account.
- In the menu, under Data management, click Data protection.
- Clear the Enable soft delete for blobs checkbox. For more information, read Soft delete for blobs.