1. Choose the Cloud storage category from the left pane and select Amazon S3. You can also search for Amazon S3 in the search box.
2. Select an account from the saved connections, if already created, or add a new account using the Add new option.
3. Provide the necessary details in the Connection name, Access key, and Secret key fields.
4. Click the Authenticate Amazon S3 button to authenticate your account with your credentials. You will need to authenticate S3 only when you import data for the first time.
5. Click the Advanced selection link.
Advanced selection helps you perform dynamic file selection based on a regex pattern. You can use it to fetch new or incremental data from your Amazon S3 bucket: any file added or modified after the previous sync that matches the file pattern will be picked up from your bucket.
The details required are:
Bucket name: The name of the bucket you want to import data from.
File pattern: The pattern used to match file names in the bucket. This supports regex-type matching. You can also use the pattern .* to match any file in the specified path.
Info: The file pattern is case-sensitive.
Note: The file pattern match is a simple regex-type match. For example, to fetch files named Sales_2022.csv, Sales_2023.csv, and Sales_2024.csv, use the pattern Sales_.* Similarly, to fetch files such as PublicData1.csv, PublicData2.csv, and PublicData3.csv, use the pattern Public.* (see the sketch after this list).
If you want to import a single file, specify the pattern using the exact file name.
E.g., leads_jan_2022.*
Include subfolders: Select the Include subfolders checkbox if you want to include subfolders while searching for files.
Parse file as: Choose the extension to parse the file as. If your file is not in a commonly used format, you can use this option to parse it into one of the following readable formats before importing the data: CSV, TSV, JSON, XML, or TXT.
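To make the matching behavior concrete, here is a minimal sketch using Python's re module. The semantics shown (case-sensitive, matched from the start of the file name) follow the description above; the file names are made up for illustration, and this is a model of the behavior, not DataPrep's implementation.

```python
import re

# Hypothetical file names, echoing the examples above.
files = ["Sales_2022.csv", "Sales_2023.csv", "PublicData1.csv", "leads_jan_2022.csv"]

def matches(pattern: str, name: str) -> bool:
    # A simple regex-type match: anchored at the start of the
    # file name and case-sensitive, as described above.
    return re.match(pattern, name) is not None

print([f for f in files if matches(r"Sales_.*", f)])  # ['Sales_2022.csv', 'Sales_2023.csv']
print([f for f in files if matches(r".*", f)])        # every file in the list
```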
6. Once you have completed importing data, your dataset will open and you can start preparing your data right away.
7. When your dataset is ready, export it to the required destination before the next reload.
Note: You can schedule the import using the Schedule import option available for datasets in your workspace, or from the Import menu in the top bar of the DataPrep Studio page.
Schedule your dataset based on your pipeline's complexity, allowing enough time to import, process, and export the data.
You can verify the number of records fetched from S3 in the Operations history panel on the Sync Status page.
Click the Operations history icon near each sync status to view and track the changes made to the dataset, its previous states, and the import and export schedules in a timeline.
You can also verify the processed data for every sync in the Processing history panel. Clicking the Processing history option opens a side pane listing all the processed data IDs available for the dataset, along with the time they were generated.
You can also download and verify the processed data by clicking on the icon that appears when you hover over a record.
8. To manually fetch the next file after the last sync time, you can use the Reload data from source option.
From the DataPrep Studio page, select the Import menu in the top bar and click Reload data from source. Using this option, you can refresh your dataset with the latest file by reloading data from your data source.
During a manual reload, only the file added or modified after the last sync time is imported into the dataset.
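Conceptually, an incremental fetch of this kind amounts to listing the bucket and keeping only the objects whose names match the file pattern and whose last-modified time is newer than the last sync. The sketch below models that idea with boto3; the bucket name, pattern, and timestamp are placeholders, and this is not DataPrep's actual implementation.

```python
import re
from datetime import datetime, timezone

import boto3  # assumes AWS credentials are configured locally

def new_files_since(bucket: str, pattern: str, last_sync: datetime) -> list[str]:
    """Keys matching `pattern` whose objects changed after `last_sync`."""
    s3 = boto3.client("s3")
    keys = []
    # Paginate, since list_objects_v2 returns at most 1000 objects per call.
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            if re.match(pattern, obj["Key"]) and obj["LastModified"] > last_sync:
                keys.append(obj["Key"])
    return sorted(keys)

# Placeholder values for illustration.
last_sync = datetime(2024, 1, 29, 13, 2, 4, tzinfo=timezone.utc)
print(new_files_since("my-bucket", r"leads.*", last_sync))
```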
For instance, say a bucket in Amazon S3 has 10 files in total and you want to skip files 3 to 5. Follow the steps below to skip particular files in the middle during an incremental fetch; a sketch modelling the sequence follows the steps.
1) Import the file using a generic file pattern. E.g., leads.*
2) Initially, only the oldest file will be fetched, i.e., leads1_2024-01-29_13-02-04.csv.
During every successful sync, the last sync time is updated, and the file created or modified after that time is imported.
3) After importing data, click the Export now option from the Export menu on the DataPrep Studio page and export it to the required destination before reloading; otherwise, you'll lose your data.
4) From the DataPrep Studio page, select the Import menu in the top bar and click Reload data from source.
5) The next file, i.e., leads2_2024-01-29_13-10-20.csv, will be fetched incrementally. Again, export it to the required destination before reloading, or you'll lose your data.
6) Click the ruleset icon in the top-right corner of the DataPrep Studio page to view the Ruleset pane.
7) In the Ruleset pane, click the data source configuration icon and open the Data source details page.
8) In the Data source details page, enter the specific file pattern you want to import next in the File pattern field. Click Update. E.g., leads6_2024-02-21_12-32-51.csv.*
9) Go to the DataPrep Studio page, select the Import menu in the top bar, and click Reload data from source.
The files leads3, leads4, and leads5 will be skipped, and the file leads6 will be fetched. Its modified time will be tracked as the new last sync time.
Export this file to the required destination.
10) Now navigate to the Data source details page again and change the file pattern back to the generic form. E.g., leads.*
11) Schedule the data import and export to set up a pipeline.
12) To schedule the import,
a. Click the Schedule import link.
b. In the Schedule config section, select a Repeat method (Every 'N' hours, Every day, Weekly once, Monthly once) and set the frequency using the Perform every option.
Select the Time zone for the schedule. By default, your local time zone will be selected.
c. Select the checkbox if you want to Import new columns found in the source data.
d. Click Save to schedule import for your dataset.
13) After scheduling the import, also schedule an export destination for your dataset; otherwise, imports will keep running, but without an export the data will be lost.
14) After scheduling, new files with the same pattern will be fetched incrementally based on the last synced time. E.g., leads7, leads8, etc. will be imported incrementally and exported at regular intervals.
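The pattern switch above can be modelled with a toy function: each sync picks the oldest matching file modified after the last sync time. This is only a model of the behavior described in the steps (file names and dates are invented), not how DataPrep works internally.

```python
import re
from datetime import datetime, timezone

def next_incremental_file(objects, pattern, last_sync):
    """Oldest object matching `pattern` that was modified after `last_sync`."""
    candidates = [(modified, key) for key, modified in objects
                  if re.match(pattern, key) and modified > last_sync]
    return min(candidates)[1] if candidates else None

# Toy bucket listing: leads1..leads8, modified on successive days.
objects = [(f"leads{i}.csv", datetime(2024, 2, i, tzinfo=timezone.utc))
           for i in range(1, 9)]

last_sync = datetime(2024, 2, 2, tzinfo=timezone.utc)  # leads1 and leads2 already synced
# Step 8: a specific pattern jumps past leads3-leads5.
print(next_incremental_file(objects, r"leads6.*", last_sync))  # leads6.csv
last_sync = datetime(2024, 2, 6, tzinfo=timezone.utc)          # sync time now past leads6
# Step 10: back to the generic pattern; leads7 comes next.
print(next_incremental_file(objects, r"leads.*", last_sync))   # leads7.csv
```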
Follow the steps below to import files starting from the middle during an incremental fetch.
1) Import the file using a specific file pattern.
E.g., leads6_2024-02-21_12-32-51.csv.*
2) Initially, only that specific file will be fetched, i.e., leads6_2024-02-21_12-32-51.csv.
During every successful sync, the last sync time is updated, and the file created or modified after that time is imported.
3) After importing the data, click the Export now option from the Export menu on the DataPrep Studio page and export it to the required destination before reloading; otherwise, you'll lose your data.
4) Click the ruleset icon in the top-right corner of the DataPrep Studio page to view the Ruleset pane.
5) In the Ruleset pane, click the data source configuration icon and open the Data source details page.
6) In the Data source details page, enter the generic file pattern you want to use for subsequent incremental imports in the File pattern field. Click Update. E.g., leads.*
7) Schedule the data import and export to set up a pipeline.
To schedule the import,
a) Click the Schedule import link.
b) In the Schedule config section, select a Repeat method (Every 'N' hours, Every day, Weekly once, Monthly once) and set the frequency using the Perform every option.
Select the Time zone for the schedule. By default, your local time zone will be selected.
c) Select the checkbox if you want to Import new columns found in the source data.
d) Click Save to schedule import for your dataset.
8) After scheduling the import, also schedule an export destination for your dataset; otherwise, imports will keep running, but without an export the data will be lost.
9) After scheduling, new files with the same pattern will be fetched incrementally based on the last synced time, e.g., leads7, leads8, etc. All the new files will be imported incrementally and exported at regular intervals.
Note: If you modify the data, there is a chance of duplicate records in your destination, so we don't recommend modifying data.