Prerequisites
Before you begin, ensure the following requirements are met:
- Export project files from the source platform and store them locally or in an accessible Git repository (such as GitHub or GitLab). For more information on integrating your Git account with Matillion, read Connect your own Git repo.
- You have an active Matillion account.
- Maia is enabled and accessible in your environment. Read Edit account details to learn how to enable Maia if it isn’t enabled for your account.
- Your role includes the required permissions to create and deploy pipelines in the target project and environment. For more information on account roles, read Role-based Access Control overview.
- Maia supports a growing set of data workload conversions from major data platforms, with support for additional platforms such as Ab Initio coming soon.
- If you need to convert from a platform that isn’t currently supported, contact your account team to discuss your requirements.
Supported file types for data workload conversions
| Platform | Supported File Types |
|---|---|
| Alteryx | .yxmd, .yxmc, .yxwz, .yxzp, .yxdb, .zip |
| Apache NiFi | .xml, .json, .gz, .zip |
| AWS Glue | .json, .py, .scala, .zip |
| Azure Data Factory (ADF) | .json, .bicep, .zip |
| dbt | .sql, .yml, .yaml, .csv, .json, .zip |
| IBM DataStage | .dsx, .zip |
| Informatica IDMC | .json, .zip, .bin |
| Informatica PowerCenter | .xml, .zip |
| Microsoft SSIS | .dtsx, .conmgr, .params, .dtsConfig, .ispac, .zip |
| Oracle ODI | .xml, .sql, .zip |
| Palantir Foundry | .py, .sql, .java, .r, .yaml, .yml, .md, .zip, .gz, .parquet, .avro, .csv |
| Qlik Sense | .qvf, .qvs, .qvd, .qvx, .zip |
| Qlik Talend | .item, .xml, .properties, .cntxt, .zip |
| SAS Enterprise Guide | .egp, .sas, .sas7bdat, .sas7bcat, .zip |
| WhereScape | .sql, .xml, .json, .zip |
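Before submitting a conversion, it can help to confirm that every file you plan to include has an extension Maia accepts for the chosen platform. A minimal sketch (the dictionary transcribes a few rows of the table above; extend it with the remaining rows as needed, and note that the function name is hypothetical, not part of any Matillion API):

```python
from pathlib import Path

# Supported extensions per platform, transcribed from the table above
# (illustrative subset; add the remaining platforms as needed).
SUPPORTED_EXTENSIONS = {
    "Alteryx": {".yxmd", ".yxmc", ".yxwz", ".yxzp", ".yxdb", ".zip"},
    "Microsoft SSIS": {".dtsx", ".conmgr", ".params", ".dtsConfig", ".ispac", ".zip"},
    "Informatica PowerCenter": {".xml", ".zip"},
}

def unsupported_files(platform: str, files: list[str]) -> list[str]:
    """Return the files whose extension is not supported for the platform."""
    allowed = SUPPORTED_EXTENSIONS[platform]
    return [f for f in files if Path(f).suffix not in allowed]

print(unsupported_files("Alteryx", ["flow.yxmd", "notes.docx", "bundle.zip"]))
# -> ['notes.docx']
```

Filtering out unsupported files up front avoids a failed analysis after upload.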
Convert workloads with Maia
- Log in to Matillion and select the project you want to use.
- Click the Files icon, select Add (+) from the dropdown menu, and then select Convert workloads.
- In the Convert workloads with Maia dialog, select a workload type. All supported platforms are listed under Workload type.
- Choose a file loading method: Git repository files or Upload files.
- If you choose Git repository files, ensure that all workload files are already available in your Git repository and that each file is 10 MB or smaller, then use the search feature to select the files you want Maia to convert.
- If you choose Upload files, you can upload up to 10 files at a time, provided the combined size of all selected files does not exceed 20 MB.
- (Optional) Conversions are often more successful when you provide additional context. To add context, edit the conversion-specific context file (conversions/user-guidance.md) displayed on screen. This file can include pipeline design standards, schema information, or any other details relevant to the conversion. Maia will also use any other context files that have already been created.
- Click Submit to begin the analysis. Maia analyzes the workload files; the analysis time varies with file size.
- After the analysis is complete, choose how you want to proceed:
- Plan conversion with Maia: Work with Maia to create and refine a conversion plan. Once you’re happy, Maia will execute the agreed plan.
- Begin conversion without planning: Maia will begin building pipeline(s) immediately without further planning and without asking for approval. Choose this option when you have high confidence that Maia will deliver the expected outcome without additional guidance.
- Click Submit.
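The size limits in the steps above (10 MB per file for the Git repository method; up to 10 files and 20 MB combined for uploads) can be checked before you submit. A minimal sketch of such a pre-flight check (the function and its input format are hypothetical, not part of Maia):

```python
MAX_GIT_FILE_MB = 10        # per-file limit for the Git repository method
MAX_UPLOAD_FILES = 10       # maximum files per upload batch
MAX_UPLOAD_TOTAL_MB = 20    # combined size limit for an upload batch

def check_upload_batch(sizes_mb: dict[str, float]) -> list[str]:
    """Return human-readable problems with a proposed upload batch."""
    problems = []
    if len(sizes_mb) > MAX_UPLOAD_FILES:
        problems.append(f"{len(sizes_mb)} files selected; maximum is {MAX_UPLOAD_FILES}")
    total = sum(sizes_mb.values())
    if total > MAX_UPLOAD_TOTAL_MB:
        problems.append(f"combined size {total:.1f} MB exceeds {MAX_UPLOAD_TOTAL_MB} MB")
    return problems

print(check_upload_batch({"jobs.dtsx": 12.5, "config.dtsConfig": 9.0}))
# -> ['combined size 21.5 MB exceeds 20 MB']
```

An empty result means the batch fits within the documented limits.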
Validate pipeline outputs
When you use Maia to create data pipelines in a Matillion environment, validation helps ensure that the output from the newly created pipelines is consistent and comparable with the output from the source platform workloads.
Load source outputs into the cloud data warehouse
If the output from the source platform workloads is not already in the cloud data warehouse used by Matillion, load it there to simplify comparison. Load the output into a different schema within the same database as the Matillion pipeline output. Files such as .txt, .csv, or .xlsx can be saved to cloud storage and loaded using an orchestration pipeline. Using fixed input data makes side-by-side validation easier.
You can choose not to load the output into the cloud data warehouse; however, this approach requires more manual validation and is generally less efficient.
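As an illustration of keeping the two outputs apart, the sketch below loads a source-platform CSV into a clearly named validation table. SQLite stands in for the cloud data warehouse, and the table and column names are hypothetical; in practice you would load via an orchestration pipeline as described above:

```python
import csv
import io
import sqlite3

# Hypothetical source-platform output, as it might arrive in a .csv file.
source_csv = io.StringIO("id,amount\n1,10.0\n2,25.0\n")

conn = sqlite3.connect(":memory:")  # SQLite stands in for the warehouse
# A name prefix stands in for a separate validation schema here,
# since SQLite has no schemas.
conn.execute("CREATE TABLE source_validation_orders (id INTEGER, amount REAL)")
rows = [(int(r["id"]), float(r["amount"])) for r in csv.DictReader(source_csv)]
conn.executemany("INSERT INTO source_validation_orders VALUES (?, ?)", rows)
print(conn.execute("SELECT COUNT(*) FROM source_validation_orders").fetchone()[0])
# -> 2
```

Keeping the source data in its own schema (or, here, under its own prefix) makes it unambiguous which table came from which platform.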
Compare pipeline outputs
Follow these steps to compare the output of a Matillion pipeline with the output from another product.
- Ensure that the output from both Matillion and the other product is available in the same cloud data warehouse.
- Create a new transformation pipeline in Matillion and add the Table Input component.
- Use the Table Input component to load the output tables from both Matillion and the other product.
- Add components such as Detect Changes, Assert View, or Aggregate to compare the results.
You can also ask Maia to compare the two tables and review the results.
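The row-level check that a component such as Detect Changes performs can also be sketched outside the canvas. A minimal pure-Python illustration, where the tables and the business key are hypothetical:

```python
# Hypothetical output rows keyed by a business key: {id: amount}.
matillion_out = {1: 10.0, 2: 20.0, 3: 30.0}
source_out    = {1: 10.0, 2: 25.0, 3: 30.0, 4: 40.0}

# Keys present in both tables whose values disagree
# (roughly what Detect Changes flags as changed rows).
mismatched = sorted(k for k in matillion_out.keys() & source_out.keys()
                    if matillion_out[k] != source_out[k])
# Keys the Matillion pipeline did not produce at all.
missing = sorted(source_out.keys() - matillion_out.keys())
print(mismatched, missing)
# -> [2] [4]
```

An empty result for both lists indicates the two outputs agree on the chosen key and value columns.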
Troubleshoot workload conversion errors
When converting workloads from other platforms to Matillion pipelines, you may encounter the following error messages. Review the advice column for each message to resolve the issue.
| Error message | Advice |
|---|---|
| No convertible pipelines found in archive {fileName} | Check the archive contains valid pipeline files for the selected platform |
| Nested archive depth limit exceeded (max: 3) in file {fileName} | Extract nested archives before uploading |
| Zip contains too many entries ({count} > {limit}) | Split into smaller archives or remove unnecessary files |
| File exceeded size limit while reading: {actual} MB > {limit} MB | Reduce file size or split into multiple uploads |
| Zip slip attack detected: path traversal in entry {entryName} | Re-create the archive without path traversal in filenames |
| File {fileName} size {actual} bytes exceeds maximum allowed size of {max} bytes | Reduce file size or split into multiple uploads |
| Failed to chunk file {fileName}: No chunks could be created from pipeline content | Check the file is a valid pipeline for the selected platform; if persistent, contact support |
| File {fileName} is empty or contains only whitespace | Check the file has actual content |
| Too many concurrent conversion requests. Please try again later. | Wait a moment and retry; if persistent, contact support |
| Failed to analyze chunk {name}: {cause} | Retry the conversion; if persistent, contact support |
| Failed to generate intent analysis: {cause} | Retry the conversion; if persistent, contact support |
| Bedrock rate limit exceeded | Wait a moment and retry; if persistent, contact support |
| File {fileName} not found | Check that the selected Git file still exists in the repository |

