In DataPrep, you can incrementally import various file types from FTP, including CSV, TSV, JSON, XML, and TXT.
1. Create a workspace or open an existing workspace. Click here to see how to create a workspace.
2. Choose the FTP option from the Choose your data source section or choose it from the Files category in the left pane.
3. Enable the Import from local network toggle if you want to incrementally import files from an FTP server within your local network.
4. Choose an active Databridge.
5. Configure the FTP connection to connect to the server and fetch the file.
6. Select one of the options below from the FTP server type drop-down.
FTP - File Transfer
FTPS - File Transfer Over Implicit TLS/SSL
FTPS - File Transfer Over Explicit TLS/SSL
SFTP - SSH File Transfer Protocol
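DataPrep makes these connections for you once the form is complete, but if you'd like to sanity-check your server details outside DataPrep first, the minimal Python sketch below shows how the four server types differ. It uses the standard ftplib module and the third-party paramiko library; the host and credentials are placeholders, not values from this article.

# A minimal sketch for verifying FTP/FTPS/SFTP credentials outside DataPrep.
# The host and credentials below are placeholders.
from ftplib import FTP, FTP_TLS
import paramiko  # third-party SFTP client: pip install paramiko

HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"

# FTP - File Transfer (no encryption)
ftp = FTP(HOST)
ftp.login(USER, PASSWORD)
ftp.quit()

# FTPS - File Transfer Over Explicit TLS/SSL: starts as plain FTP on the
# standard port, then upgrades the control connection with AUTH TLS.
ftps = FTP_TLS(HOST)
ftps.login(USER, PASSWORD)
ftps.prot_p()  # encrypt the data channel as well
ftps.quit()

# FTPS over Implicit TLS/SSL (usually port 990) wraps the socket in TLS
# before any FTP commands are sent; ftplib has no built-in support for it,
# so it needs a custom FTP_TLS subclass or a different client.

# SFTP - SSH File Transfer Protocol (runs over SSH, usually port 22)
transport = paramiko.Transport((HOST, 22))
transport.connect(username=USER, password=PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.close()
transport.close()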
7. Enter the FTP server host. You can enter either the IP address or the FQDN (Fully Qualified Domain Name) of the server.
8. Enter the Username and Password if authentication is required.
9. Enter a name for your connection in the Connection name text box.
10. Click the Connect button and provide the following details.
Folder path: The folder path where you want to search for files. E.g., /srv/ftp/filesfortest/
Include subfolders: Select this checkbox if you want to include subfolders while searching for files.
File pattern: The pattern used to match file names in the specified location. This supports regex-style matching. You can also use the pattern .* to match any file in the specified path.
For example, to fetch files with names such as Sales_2022.csv, Sales_2023.csv, and Sales_2024.csv, you can enter the pattern Sales_.*
Similarly, to fetch files such as PublicData1.csv, PublicData2.csv, and PublicData3.csv, use Public.*
If you want to import files with a particular extension, say CSV files such as file1.csv, file2.csv, and file3.csv, use .*.csv
If you want to import a single file, specify the pattern using the exact file name.
E.g., leads_jan_2022.*
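If you're unsure whether a pattern will pick up the files you expect, you can dry-run it with ordinary regular expressions before saving the connection. The short Python sketch below applies the example patterns above to a few sample file names; full-match semantics are an assumption here, not a documented detail of DataPrep's matcher.

import re

filenames = ["Sales_2022.csv", "Sales_2023.csv", "PublicData1.csv",
             "leads_jan_2022.csv", "file1.csv", "notes.txt"]

# Patterns taken from the examples above; full-match behaviour is assumed.
for pattern in ["Sales_.*", "Public.*", ".*.csv", "leads_jan_2022.*"]:
    matched = [name for name in filenames if re.fullmatch(pattern, name)]
    print(pattern, "->", matched)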
11. Click Import to upload data into Zoho DataPrep from the FTP server.
12. Once the import completes, your dataset will open and you can start preparing your data right away.
13. When your dataset is ready, export it to the required destination before the next reload.
Schedule your dataset based on your pipeline's complexity, allowing enough time to import, process, and export the data.
14. When the dataset is scheduled for import, the import time or the last scheduled time is recorded. Initially, only the oldest file will be fetched. During every successful sync, the last sync time is updated with the new value, and the file created or modified after that sync time is imported. If there is no new or modified file in the specified location, no data will be imported; the sync time is still updated, however, because a sync attempt was made. In the next cycle, the file created or modified after this sync time will be fetched.
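The incremental behaviour described above can be summed up in a few lines. The sketch below is an illustration of that description, not DataPrep's actual implementation; the function name and the (filename, modified_time) representation are assumptions.

import re

def next_file_to_sync(entries, pattern, last_sync_time=None):
    # entries: (filename, modified_time) pairs listed from the FTP folder.
    # Each sync fetches the oldest matching file created or modified after
    # the last sync time; on the very first sync, the oldest matching file.
    candidates = sorted(
        (e for e in entries
         if re.fullmatch(pattern, e[0])
         and (last_sync_time is None or e[1] > last_sync_time)),
        key=lambda e: e[1],
    )
    return candidates[0] if candidates else None  # None: nothing to import

As described above, the sync time advances after every sync attempt, even when nothing new is found.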
You can verify the number of records fetched from your files in the Operations history panel on the Sync Status page.
Click the Operations history icon near each sync status to view and track the changes made to the dataset, its previous states, and the import and export schedules in a timeline.
You can also verify the processed data for every sync in the Processing history panel. On clicking the Processing history option, a side pane opens listing all the processed data IDs available for the dataset, along with the time each was generated.
You can also download and verify the processed data by clicking on the icon that appears when you hover over a record.
15. To manually fetch the next file after the last sync time, use the Reload data from source option.
From the DataPrep Studio page, select the Import menu in the top bar and click Reload data from source. Using this option, you can refresh your dataset with the latest file by reloading data from your data source.
During a manual reload, only the newly added or modified file after the last sync time is imported to the dataset.
For instance, say there are 10 files in total in the FTP server path and you want to skip files 3 to 5. Follow the steps below to skip particular files in the middle during an incremental fetch.
1) Import the file using a generic file pattern. E.g., Test-.*
2) Initially, only the oldest file will be fetched, i.e. Test-1_csv.csv.
During every successful sync, the last sync time is updated with the new value, and the file created/modified after the sync time is imported.
3) After importing data, click the Export now option from the Export menu on the DataPrep Studio page and export the data to the required destination before reloading, or you'll lose your data.
4) From the DataPrep Studio page, select the Import menu in the top bar and click Reload data from source.
5) The next file, i.e. Test-2_csv.csv, will be fetched incrementally. Again, export it to the required destination before reloading, or you'll lose your data.
6) Click the ruleset icon in the top-right corner of the DataPrep Studio page to view the Ruleset pane.
7) In the Ruleset pane, click the data source configuration icon to open the Data source details page.
8) On the Data source details page, enter the specific file pattern you want to import next in the File pattern field and click Update. E.g., Test-6_csv.csv.*
9) Go to the DataPrep Studio page, select the Import menu in the top bar, and click Reload data from source.
The files Test-3, Test-4, and Test-5 will be skipped, and the file Test-6 will be fetched. Its modified time will be tracked.
Export this file to the required destination.
10) Now, navigate to the Data source details page again and change the file pattern back to the generic form. E.g., Test-.*
11) Schedule the data import and export to set up a pipeline.
12) To schedule the import:
a. Click the Schedule import link.
b. In the Schedule config section, select a Repeat method (Every 'N' hours, Every day, Weekly once, Monthly once) and set how often to repeat using the Perform every option.
Select the Time zone for the schedule. By default, your local timezone will be selected.
c. Select the checkbox if you want to Import new columns found in the source data.
d. Click Save to schedule import for your dataset.
13) After scheduling the import, schedule the export destination for your dataset; otherwise, the import will keep running, but without an export the data will be lost.
14) After scheduling, new files with the same pattern will be fetched incrementally using the last synced time. E.g., Test-7, Test-8, etc. will be imported incrementally and exported at regular intervals.
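To make the skip behaviour concrete, here is how the next_file_to_sync sketch from earlier plays out for this scenario. The ten Test-N_csv.csv files and their modified times (plain integers here) are assumptions for illustration.

entries = [(f"Test-{n}_csv.csv", n) for n in range(1, 11)]

print(next_file_to_sync(entries, "Test-.*"))              # Test-1: first sync, oldest file
print(next_file_to_sync(entries, "Test-.*", 1))           # Test-2: next reload
print(next_file_to_sync(entries, "Test-6_csv.csv.*", 2))  # Test-6: files 3 to 5 skipped
print(next_file_to_sync(entries, "Test-.*", 6))           # Test-7: generic pattern resumes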
Follow the steps below to start importing files from the middle during an incremental fetch.
1) Import the file using a specific file pattern. E.g., Test-6_csv.csv.*
2) Click the ruleset icon in the top-right corner of the DataPrep Studio page to view the Ruleset pane.
3) In the Ruleset pane, click the data source configuration icon to open the Data source details page.
4) On the Data source details page, enter the generic file pattern you want to use for the next incremental imports in the File pattern field and click Update. E.g., Test-.*
5) Schedule the data import as described above.
6) After scheduling the import, schedule the export destination for your dataset; otherwise, the import will keep running, but without an export the data will be lost.
7) After scheduling, new files with the same pattern will be fetched incrementally using the last synced time. E.g., Test-7, Test-8, etc., and all the new files will be imported incrementally and exported at regular intervals.
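Continuing the same illustrative sketch, importing from the middle is simply the reverse order of pattern changes: begin with the specific pattern, then widen it.

print(next_file_to_sync(entries, "Test-6_csv.csv.*"))  # Test-6 is imported first
print(next_file_to_sync(entries, "Test-.*", 6))        # then Test-7, Test-8, ... on later syncs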
How to import data from local files?
How to import data from FTP servers?
How to export data to FTP servers?