Incremental data fetch is a method used to import only the files that are new or modified since the previous sync. Zoho DataPrep helps you import incremental data from your local files using Zoho Databridge.
Note:
1. DataPrep also supports files without any extension or files in plain text format. However, binary files cannot be parsed during import; you will have to manually parse the file into one of the supported formats (CSV, TSV, JSON, XML, or TXT).
2. The maximum local file size supported during import is 100 MB. You can find the details on other technical limitations here.
1. Create a workspace or open an existing workspace. Click here to see how to create a workspace.
2. Choose the Files option from the Choose your data source section to import local files. You can also click the Files category in the left pane and select the Files option.
3. Enable the Import from local network toggle if you want to import local files incrementally.
4. Choose an active Databridge and provide the following details:
Folder path: The folder path where you want to search for files, e.g., D:\DataPrep\Datasets
Include subfolders: Select this checkbox if you want to include subfolders while searching for files.
File pattern: The pattern used to match file names in the specified location. This supports regex-style matching. You can also use the pattern .* to match any file in the specified path.
Note: The file pattern match is a simple regex-style match. For example, to fetch files with names such as Sales_2022.csv, Sales_2023.csv, and Sales_2024.csv, you can input the pattern Sales_.*
Similarly, to fetch files such as PublicData1.csv, PublicData2.csv, and PublicData3.csv, use Public.*
If you want to import a single file, specify the pattern using the exact file name.
E.g.: leads_jan_2022.*
A short sketch after this list illustrates how such patterns match file names.
Parse file as: Choose the required extension to parse the file. If your file is not in a commonly used format, you can use this option to parse it as one of the supported formats so the data can be imported in a readable form. The available formats are CSV, TSV, JSON, XML, and TXT.
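As an illustration of how such regex-style patterns select file names, here is a minimal Python sketch. The file names are hypothetical, and matching the pattern against the whole file name (re.fullmatch) is an assumption made for illustration, not a statement about DataPrep's internals.

```python
import re

# Hypothetical file names found in the configured folder path
files = ["Sales_2022.csv", "Sales_2023.csv", "PublicData1.csv",
         "leads_jan_2022.csv", "notes.txt"]

def match_files(pattern, names):
    # Keep only names the regex matches end to end (assumed behavior)
    regex = re.compile(pattern)
    return [n for n in names if regex.fullmatch(n)]

print(match_files(r"Sales_.*", files))          # ['Sales_2022.csv', 'Sales_2023.csv']
print(match_files(r"Public.*", files))          # ['PublicData1.csv']
print(match_files(r"leads_jan_2022.*", files))  # ['leads_jan_2022.csv']
print(match_files(r".*", files))                # every file in the path
```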
5. Click the Import button. Once you have completed importing data, your dataset will open and you can start preparing your data right away.
6. When your dataset is ready, export it to the required destination before the next reload.
Schedule your dataset based on your pipeline complexity, allowing enough time to import, process, and export data.
7. When the dataset is scheduled for import, the import time or the last scheduled time is recorded. Initially, only the oldest file is fetched. On every successful sync, the last sync time is updated with the new value, and the file created or modified after the sync time is imported. If there is no new or modified file in the specified location, no data is imported; even then, the sync time is updated to record the attempt. In the next cycle, the file created or modified after this sync time is fetched.
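The Python sketch below models this logic under stated assumptions: one file (the oldest candidate) is fetched per sync, timestamps are compared in UTC (see the GMT/UTC note further below), and the fetched file's modified time becomes the new sync point. It illustrates the described behavior with hypothetical file names and dates; it is not DataPrep's actual code.

```python
from datetime import datetime, timezone

def sync(files, last_sync):
    """files: list of (name, modified_time_utc) pairs.
    Returns the fetched file (or None) plus the updated sync time."""
    if last_sync is None:
        candidates = files  # first sync: consider every file
    else:
        # later syncs: only files created/modified after the last sync time
        candidates = [f for f in files if f[1] > last_sync]
    fetched = min(candidates, key=lambda f: f[1]) if candidates else None
    if fetched:
        # Track the fetched file's modified time as the new sync point
        # ("The modified time will be tracked" below) -- an assumption
        return fetched, fetched[1]
    # Even an empty sync updates the sync time to record the attempt
    return None, datetime.now(timezone.utc)

files = [("mkFile1.csv", datetime(2024, 1, 1, tzinfo=timezone.utc)),
         ("mkFile2.csv", datetime(2024, 1, 2, tzinfo=timezone.utc))]

fetched, last = sync(files, None)   # first sync -> the oldest file
print(fetched[0])                   # 'mkFile1.csv'
fetched, last = sync(files, last)   # next sync -> the file modified after it
print(fetched[0])                   # 'mkFile2.csv'
```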
8. You can verify the number of records fetched from your files in the Operations history panel on the Sync Status page.
Click the Operations history icon near each sync status to view and track the changes made to the dataset, its previous states, and the import and export schedules in a timeline.
You can also verify the processed data for every sync in the Processing history panel. On clicking the Processing history option, the side pane will open, listing all the processed data IDs available for the dataset along with the generated time.
You can also download and verify the processed data by clicking the icon that appears when you hover over a record.
9. To manually fetch the next file after the last sync time, use the Reload data from source option.
From the DataPrep Studio page, select the Import menu in the top bar and click Reload data from source. Using this option, you can refresh your dataset with the latest file by reloading data from your data source.
During a manual reload, only the newly added or modified file after the last sync time is imported to the dataset.
Note: All newly added or modified files are incrementally fetched based on Greenwich Mean Time (GMT)/UTC.
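Because the comparison happens in GMT/UTC rather than your local time zone, it can be useful to check a file's modified time in UTC when working out what will be fetched next. A minimal Python check, using a hypothetical path:

```python
import os
from datetime import datetime, timezone

# Hypothetical path; getmtime returns seconds since the epoch, which is
# time-zone independent and converts cleanly to UTC for comparison
mtime = os.path.getmtime(r"D:\DataPrep\Datasets\mkFile1.csv")
print(datetime.fromtimestamp(mtime, tz=timezone.utc))
```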
For instance, suppose there are 13 files in total in the local path, mkFile1.csv to mkFile13.csv, and you want to skip files 3 to 5. Follow the steps below to skip files in the middle during the incremental fetch; a sketch simulating both this and the next workflow follows the second set of steps.
1) Import the file using a generic file pattern, e.g., mkFile.*
2) Initially, only the oldest file will be fetched, i.e., mkFile1.csv.
During every successful sync, the last sync time is updated with the new value, and the file created or modified after the sync time is imported.
3) After importing data, click the Export now option from the Export menu on the DataPrep Studio page and export it to the required destination before reloading, or you'll lose your data.
4) From the DataPrep Studio page, select the Import menu in the top bar and click Reload data from source.
5) The next file, i.e., mkFile2.csv, will be fetched incrementally. Again, export it to the required destination before reloading, or you'll lose your data.
6) Click the ruleset icon in the top-right corner of the DataPrep Studio page to view the Ruleset pane.
7) In the Ruleset pane, click the data source to open the data source details page.
8) In the data source details page, enter the specific file pattern for the file you want to import next in the File pattern field, e.g., mkFile6.*, and save the changes.
9) Go to the DataPrep Studio page, select the Import menu in the top bar, and click Reload data from source.
The files mkFile3.csv, mkFile4.csv, and mkFile5.csv will be skipped, and mkFile6.csv will be fetched. Its modified time will be tracked.
Export this file to the required destination.
10) Now, navigate to the data source details page again and change the file pattern back to the generic form, e.g., mkFile.*
11) Schedule the data import and export to set up a pipeline.
12) To schedule the import,
a. Click the Schedule import link.
b. In the Schedule config section, select a Repeat method (Every 'N' hours, Every day, Weekly once, Monthly once) and choose a time to run the import.
13) After scheduling the import, schedule the export destination for your dataset; if not, the import will be done continuously, but without export, the data will be lost.
Follow the steps below to import files from the middle during incremental fetch.
1) Import the file using a specific file pattern, e.g., mkFile6.*
2) Initially, only the specific file will be fetched, i.e., mkFile6.csv. Export it to the required destination.
3) In the data source details page, enter the generic file pattern you want to use for subsequent incremental imports in the File pattern field, e.g., mkFile.*
4) Schedule the data import as described above.
5) After scheduling the import, schedule the export destination for your dataset; if not, the import will be done continuously, but without export, the data will be lost.
6) After scheduling, new files with the same pattern will be fetched incrementally based on the last synced time, e.g., mkFile7, mkFile8, and so on; all the new files will be imported incrementally and exported at regular intervals.
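To make the effect of switching file patterns concrete, here is a small Python simulation of both workflows above. It assumes 13 hypothetical files (mkFile1.csv to mkFile13.csv) modified a day apart, one file fetched per reload, and the fetched file's modified time tracked as the sync point; this sketches the described behavior and is not DataPrep's implementation.

```python
import re
from datetime import datetime, timezone, timedelta

base = datetime(2024, 1, 1, tzinfo=timezone.utc)
# mkFile1.csv .. mkFile13.csv, each modified a day after the previous one
files = [(f"mkFile{i}.csv", base + timedelta(days=i)) for i in range(1, 14)]

def reload(pattern, last_sync):
    # Oldest file matching the pattern and modified after the last sync time
    hits = [f for f in files
            if re.fullmatch(pattern, f[0])
            and (last_sync is None or f[1] > last_sync)]
    return min(hits, key=lambda f: f[1]) if hits else (None, last_sync)

def run(patterns):
    last, fetched = None, []
    for pat in patterns:
        name, last = reload(pat, last)
        fetched.append(name)
    return fetched

# Skip mkFile3-mkFile5: generic pattern, then mkFile6.*, then generic again
print(run([r"mkFile.*", r"mkFile.*", r"mkFile6.*", r"mkFile.*"]))
# ['mkFile1.csv', 'mkFile2.csv', 'mkFile6.csv', 'mkFile7.csv']

# Start from the middle: specific pattern first, then the generic one
print(run([r"mkFile6.*", r"mkFile.*", r"mkFile.*"]))
# ['mkFile6.csv', 'mkFile7.csv', 'mkFile8.csv']
```

Note that in the second run, mkFile1.csv through mkFile5.csv are never imported: the tracked sync point starts at mkFile6.csv's modified time, so only files modified after it qualify.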
SEE ALSO
How to import data from local files?
How to incrementally import data from FTP servers?
How to export data to local files?
Learn more about Schedule import
Learn more about Schedule export
If you'd like a personalized walk-through of our data preparation tool, please request a demo and we'll be happy to show you how to get the best out of Zoho DataPrep.