Using the manual run option in your pipeline, you can export the data you have prepared to your destinations. A manual run executes your pipeline on the existing data without refreshing it, which means no new data is imported.
Note: You can also use this option for dry runs, to test whether your pipeline executes without failures before automating it with a schedule or backfill run. However, make sure the pipeline is configured for your testing environment, especially the destinations, as the transforms and export will run against whichever data destinations are configured.
Info: A manual run always executes the latest version of the pipeline. A version is a snapshot of the pipeline at a particular stage of the actions performed on it. If you want to do a manual run on an older version, restore the required version and then perform a manual run. Click here to know more about versions.
To execute a manual run in a pipeline
1. Open your pipeline and go to the pipeline builder view. Click here if you want to know how to create a pipeline in the first place.
After data is imported, you will be redirected to the pipeline builder, where you can see your data source and a stage linked to it.
Stages are nodes created for processing data by applying data flow transforms. Every dataset imported from your data source has a stage created by default.
2. Right-click the stage to apply data flow transforms.
3. Once you have created your data flow and applied the necessary transforms in your stages, right-click a stage and add a destination to complete your data flow.
A data destination is the place you want to export your data to. It can be your local database or business applications like Zoho CRM, Zoho Analytics, etc. You can choose your preferred destination out of 50+ data destinations in DataPrep to export your prepared data.
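To make the stage-and-destination idea concrete, here is a minimal, hypothetical sketch in plain Python of what a data flow does conceptually: a stage applies transforms in order, and the result is written to a configured destination. It uses pandas and a local SQLite file purely as stand-ins; none of it is DataPrep's internal API.

```python
import sqlite3

import pandas as pd

# A "stage" modeled as an ordered list of transforms applied to a dataset.
def run_stage(df, transforms):
    for transform in transforms:
        df = transform(df)
    return df

# Illustrative transforms, analogous to data flow transforms on a stage.
transforms = [
    lambda df: df.dropna(subset=["email"]),             # drop rows missing an email
    lambda df: df.assign(name=df["name"].str.title()),  # standardize name casing
]

source = pd.DataFrame({
    "name": ["alice smith", "bob jones"],
    "email": ["a@example.com", None],
})

prepared = run_stage(source, transforms)

# The "destination" here is a local SQLite database standing in for any
# of the destinations you can configure (Zoho CRM, Zoho Analytics, etc.).
with sqlite3.connect("export.db") as conn:
    prepared.to_sql("contacts", conn, if_exists="replace", index=False)
```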
Manual run
4. After adding a destination to your pipeline, click the Run dropdown and choose Manual run.
Settings
- Stop export if data quality drops below 100%: Use this toggle if you would like to stop the export when data quality drops below 100 percent.
- Order export: Use this option when you have configured multiple destinations and want to determine the order in which data is exported to them.
Note: This option will be visible only if you have added more than one destination in your pipeline.
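The logic behind these two settings can be sketched as follows. This is a hypothetical illustration of a quality gate and an ordered export loop, with made-up function and destination names, not DataPrep's actual implementation:

```python
QUALITY_THRESHOLD = 100.0  # "Stop export if data quality drops below 100%"

def data_quality(valid_rows, total_rows):
    """Percentage of rows that passed validation (illustrative metric)."""
    return 100.0 * valid_rows / total_rows if total_rows else 100.0

# Destinations in the order configured via "Order export".
ordered_destinations = ["Zoho CRM", "Zoho Analytics", "Local database"]

def export_all(valid_rows, total_rows):
    quality = data_quality(valid_rows, total_rows)
    if quality < QUALITY_THRESHOLD:
        # The toggle stops the export before any destination is written to.
        raise RuntimeError(
            f"Export stopped: data quality {quality:.1f}% is below "
            f"{QUALITY_THRESHOLD:.0f}%"
        )
    for destination in ordered_destinations:
        print(f"Exporting to {destination}...")  # placeholder for the real export

export_all(valid_rows=1000, total_rows=1000)  # quality 100% -> exports in order
```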
5. Click the Run button to execute the manual run. This starts processing the pipeline, and the Job summary page opens, showing the progress of the job execution. Click here to know more. You can view the status of the run on the Job summary page. A job in DataPrep can have one of three statuses - Initiated, Success, and Failure.
If the job fails, the Job summary page helps you identify whether the error occurred at the import stage, the transform stage, or with the destination and target matching. Hover over the failed stage to view the error details, and fix them to proceed with the export.
In the Overview tab, you can view the status of the job along with details such as the user who started the job, storage used, total rows processed, and the start time, end time, and duration of the job. Click here to know more.
In the Stages tab, you can view the details of each pipeline stage, such as Import, Transform, and Export. Click here to know more.
In the Output tab, you can see the list of all exported data. You can also download the output if needed. Click here to know more.
Note: Jobs are listed under each pipeline's menu. Jobs are stored indefinitely and can be referred to whenever needed.
6. When the manual run completes, the data prepared in your pipeline is exported to the configured destinations.
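As a final illustration of the three job statuses, the hypothetical polling loop below shows how a client-side script might wait for a job to move from Initiated to Success or Failure. check_job_status is a simulated stand-in; in DataPrep you would follow the job on the Job summary page instead.

```python
import itertools
import time

# Simulated stand-in: reports "Initiated" twice, then "Success".
_statuses = itertools.chain(["Initiated", "Initiated"], itertools.repeat("Success"))

def check_job_status(job_id):
    return next(_statuses)

def wait_for_job(job_id, poll_seconds=0.1):
    """Poll until the job leaves the Initiated status, then report the outcome."""
    status = check_job_status(job_id)
    while status == "Initiated":
        time.sleep(poll_seconds)
        status = check_job_status(job_id)
    if status == "Failure":
        # The Stages tab of the Job summary shows whether import, transform,
        # or export (including target matching) failed.
        print(f"Job {job_id} failed; inspect the failing stage for details.")
    return status

print(wait_for_job("job-001"))  # -> Success after two polls
```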
SEE ALSO
Learn about Jobs in DataPrep