1. Open an existing pipeline, or create a pipeline from the Home Page, the Pipelines tab, or the Workspaces tab, and click the Add data option.
Info: You can also click the Import data option.
Note: If you have already added a Pipedrive connection earlier, click the Saved connections category in the left pane and proceed to import. To learn more about Saved connections, click here.
Note: After adding a destination to the ETL pipeline, try executing your pipeline with a manual run first. Once you confirm the manual run works, you can set up a schedule to automate the pipeline and data movement. Learn about the different types of runs here.

1. Select the Schedule option in the pipeline builder.
2. Select a Repeat method (hourly, daily, weekly, or monthly) and set the frequency using the Perform every dropdown. The options in the Perform every dropdown change with the Repeat method. Click here to learn more.
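To make the interaction between the Repeat method and the Perform every value concrete, here is a minimal sketch of how such a schedule expands into run times. The function name and the mapping are illustrative assumptions, not DataPrep's actual scheduler; "monthly" is left out because calendar months do not map to a fixed time delta.

```python
from datetime import datetime, timedelta

# Hypothetical mapping of Repeat methods to fixed intervals ("monthly"
# omitted, since months have no fixed length).
REPEAT = {
    "hourly": timedelta(hours=1),
    "daily": timedelta(days=1),
    "weekly": timedelta(weeks=1),
}

def next_runs(start: datetime, repeat: str, every: int, count: int = 3) -> list:
    """Return the next `count` run times for a schedule that repeats
    every `every` units of the chosen Repeat method."""
    step = REPEAT[repeat] * every
    return [start + step * i for i in range(1, count + 1)]

# A "daily, perform every 2" schedule starting Jan 1 runs on Jan 3, 5, 7.
print(next_runs(datetime(2024, 1, 1), "daily", 2))
```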
Info: The range can be between 2 and 100. The default value is 2.

During incremental import:

The modified and new data will be fetched using the Update_time column, starting from the last imported time.
The new data will be fetched using the Add_time column, starting from the last imported time.
When the source has no new or modified data, an incremental run fetches no records.
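Conceptually, the incremental fetch described above amounts to filtering records by those two timestamp columns. Below is a minimal sketch, assuming records are dictionaries with hypothetical field names (`id`, `Update_time`, `Add_time`); the real connector performs this filtering on the Pipedrive side.

```python
from datetime import datetime

def incremental_fetch(records, last_imported_time):
    """Sketch: pick up rows that are new (Add_time) or modified
    (Update_time) since the last imported time, without duplicates."""
    seen, result = set(), []
    for r in records:
        changed = r["Update_time"] > last_imported_time
        added = r["Add_time"] > last_imported_time
        if (changed or added) and r["id"] not in seen:
            seen.add(r["id"])
            result.append(r)
    return result

records = [
    {"id": 1, "Add_time": datetime(2024, 1, 1), "Update_time": datetime(2024, 1, 5)},
    {"id": 2, "Add_time": datetime(2024, 1, 6), "Update_time": datetime(2024, 1, 6)},
    {"id": 3, "Add_time": datetime(2024, 1, 1), "Update_time": datetime(2024, 1, 1)},
]
# Only record 1 (modified) and record 2 (new) changed since Jan 2.
print([r["id"] for r in incremental_fetch(records, datetime(2024, 1, 2))])  # [1, 2]
```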
Note: If you have already configured a schedule from Pipedrive, data will be reloaded based on your earlier configuration under the Import configuration section when you click the Edit schedule option and set a new schedule.

Stop export if data has invalid values: Enabling this option stops the export when the prepared data still contains invalid values.
1. Click the Order exports toggle.
8. After you complete the schedule configuration, click Save to activate the schedule. This will start the ETL pipeline.
Each scheduled run is saved as a job. When a pipeline is scheduled, data is fetched from your data sources at regular intervals, prepared using the series of transforms you have applied in each stage, and then exported to your destination. This complete process is captured in the job history.
Note: If you make any further changes to the pipeline, the changes are saved as a draft version. Choose the Draft option and mark your pipeline as ready for the changes to reflect in the schedule.

After you set your schedule, you can Pause schedule, Resume schedule, Edit schedule, or Remove schedule using the Schedule Active option in the pipeline builder.
When you edit and save a schedule, the next job will fetch data from the last scheduled run time up to the next scheduled interval.
For the Pipedrive connector, incremental import is performed on a page-wise basis, with each page containing a maximum of 500 records.
For example:
If the Deals module has 510 total records and 300 have been updated, a single page containing 500 records will be fetched.
If there are 900 total records and 600 have been updated, two pages will be imported - one with 500 records and another with the remaining 400 - based on the total record count.
This means the import is based on total record count per page, not strictly on the number of updated records.
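The page arithmetic in the examples above can be sketched as follows. The assumption, as described, is that the number of pages is driven by the updated-record count, while each page's size is capped by the remaining total record count; the function name is illustrative.

```python
import math

PAGE_SIZE = 500  # maximum records per page for the Pipedrive connector

def pages_fetched(total_records: int, updated_records: int) -> list:
    """Sketch: the page sizes fetched in one incremental run."""
    pages = math.ceil(updated_records / PAGE_SIZE)
    sizes, remaining = [], total_records
    for _ in range(pages):
        size = min(PAGE_SIZE, remaining)
        sizes.append(size)
        remaining -= size
    return sizes

print(pages_fetched(510, 300))  # [500] - one page, as in the first example
print(pages_fetched(900, 600))  # [500, 400] - two pages, as in the second
```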
If you'd like a personalized walk-through of our data preparation tool, please request a demo and we'll be happy to show you how to get the best out of Zoho DataPrep.