A pipeline execution is called a job. A job tracks the progress of imports, transformations, and exports in a pipeline, among other statistics.
To navigate to the Jobs page, click the Jobs tab in the top bar menu.

The Jobs tab shows the list of all jobs executed across all the workspaces in your organization, along with the Job ID, the associated pipeline, start and run time, status, run by, storage used, and rows processed.
The jobs list helps you identify failed pipeline jobs and gives you a way to retry them.

Filter
You can use the filter dropdown to list jobs by type: Manual run, Manual run with data (a manual run with refreshed data), Schedule run, Backfill run, Sectional run, Zoho Flow, Webhooks, and Reload data. To view sectional jobs, enable the Include Sectional Jobs toggle.

You can also filter and search jobs using attributes such as run by, pipeline, workspace, status, and run time. Learn more.

You can also quickly sort jobs by attributes such as pipeline or run by using the sort icon at the top of the respective table header.
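For readers who think in code, here is a minimal sketch of the filter-and-sort behavior described above, using a hypothetical list of job records (the field names are illustrative, not DataPrep's actual schema):

```python
# Hypothetical job records; fields mirror the columns shown on the Jobs page.
jobs = [
    {"job_id": "J1", "type": "Schedule run", "pipeline": "Sales", "status": "Success", "sectional": False},
    {"job_id": "J2", "type": "Sectional run", "pipeline": "Sales", "status": "Failure", "sectional": True},
    {"job_id": "J3", "type": "Manual run", "pipeline": "Leads", "status": "Success", "sectional": False},
]

def filter_jobs(jobs, job_type=None, status=None, include_sectional=False):
    """Filter jobs by type and status; optionally include sectional jobs."""
    result = []
    for job in jobs:
        if job["sectional"] and not include_sectional:
            continue  # mirrors the "Include Sectional Jobs" toggle
        if job_type and job["type"] != job_type:
            continue
        if status and job["status"] != status:
            continue
        result.append(job)
    return result

# List failed jobs, including sectional runs, sorted by pipeline name.
failed = filter_jobs(jobs, status="Failure", include_sectional=True)
print(sorted(failed, key=lambda j: j["pipeline"]))
```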

To learn more about pipelines and other entities in DataPrep, click here.
Job Summary
Click a job on the Jobs page to navigate to its Job summary.
Note: You can also choose the Job history option from any pipeline to navigate to the Job summary page.
The Job summary shows details of a job in a pipeline flow in three tabs: Overview, Stages, and Output.
Note: If your job has failed, the Job summary page helps you identify whether the error occurred at the import stage, the transform stage, or with the destination and target matching.
Overview
The Overview tab of the Job summary page includes details such as the pipeline name, status, duration, run by, storage used, total rows processed, data interval, start time and end time of the job.
Info: You can also click a stage in the pipeline view to see the rules applied at that particular stage.
Stages
You can view the stages for each job executed in three sections: Import, Transform, and Export. It includes details on the status, row count, storage used, start time and end time of the import, the transforms applied, and the export.

Info: You can click the View details link at the bottom of each import and export to view the data source and destination details.
You can click the View ruleset link at the transform stage to view all the rules applied.
Stage status
Here are the statuses you will see when a job is running:
Success - Appears when a stage runs without any errors.
Not run - Appears when a stage does not run, such as during a manual run with existing data or a sectional run, or when the stage is not a dependent upstream node. For example, during a sectional run, changes from upstream stages are loaded into downstream stages, but no new imports take place, so the import stage does not run.
Note: Rows from these stages are not counted in the rows processed calculation.
Cached - Appears when a stage's data has already been processed in a previous run, or when the stage is not a dependent upstream node for this run. For example, consider a pipeline with two independent stages, A and B. If you apply transforms only to A and run the pipeline, B continues to use its existing data without any updates, so B is cached (see the sketch after this list).
Note: Rows from cached stages are not counted in the rows processed calculation.
Queued - Appears when a stage is waiting in queue for the next batch of processing during a pipeline run.
Failure - Appears when a stage fails with an error.
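To make the caching and rows processed behavior concrete, here is a minimal sketch, assuming a pipeline modeled as a dependency graph with hypothetical stage names and row counts (an illustration, not DataPrep's actual engine): a stage runs only if it changed or sits downstream of a change; everything else is cached and excluded from the rows processed total.

```python
deps = {            # stage -> upstream stages it depends on (hypothetical)
    "import_A": [],
    "import_B": [],
    "transform_A": ["import_A"],
    "transform_B": ["import_B"],
    "export": ["transform_A", "transform_B"],
}
rows = {"import_A": 1000, "import_B": 500,
        "transform_A": 1000, "transform_B": 500, "export": 1500}

def affected(changed, deps):
    """Return every stage that changed or sits downstream of a change."""
    result = set(changed)
    grew = True
    while grew:
        grew = False
        for stage, upstream in deps.items():
            if stage not in result and any(u in result for u in upstream):
                result.add(stage)
                grew = True
    return result

changed = {"transform_A"}          # transforms applied only to stage A
to_run = affected(changed, deps)
statuses = {s: ("Success" if s in to_run else "Cached") for s in deps}

# Cached stages are excluded from the rows processed total, as noted above.
rows_processed = sum(rows[s] for s in to_run)

print(statuses)        # transform_A and export run; the B stages stay Cached
print(rows_processed)  # only rows from stages that actually ran
```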
Output
In the Output tab, you can view the destinations added, the data quality, output stage, rows and columns exported, and the status of each export.
You can click the View details link at the bottom of each export to view the destination details.
You can also preview the output data and download the prepared data of a particular destination or download all the outputs as a zip file.
Note: To navigate to the Jobs page of the pipeline, click the View Jobs link in the destination details.
If the export fails due to errors caused by invalid values in your data, those invalid values must be fixed.
Sometimes, the sample might not contain any invalid values even though the full data does. In this case, go to the last data preparation stage, click the edit icon beside Sample Strategy in the right-hand pane, select Erroneous sample, and click Apply.
This will surface the invalid values in your sample data. Fix these invalid values and retry the job.
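To illustrate why switching to an erroneous sample helps, here is a minimal sketch, assuming a hypothetical validation rule and dataset (not DataPrep's actual sampler): a small random sample usually misses rare invalid rows, while a sample built around validation failures surfaces them.

```python
import random

# Full data: 10,000 rows, only a handful with invalid (non-numeric) values.
full_data = [{"amount": str(i)} for i in range(9995)]
full_data += [{"amount": "N/A"}] * 5   # rare invalid rows

def is_invalid(row):
    # Hypothetical validation rule: the amount must be numeric.
    return not row["amount"].isdigit()

# A random sample of 100 rows will usually miss the 5 invalid rows...
random_sample = random.sample(full_data, 100)
print(sum(is_invalid(r) for r in random_sample), "invalid in random sample")

# ...while an erroneous sample deliberately includes rows that fail
# validation, so the invalid values show up for you to fix.
erroneous = [r for r in full_data if is_invalid(r)]
sample = erroneous + random.sample(full_data, 100 - len(erroneous))
print(sum(is_invalid(r) for r in sample), "invalid in erroneous sample")
```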