Info: You can also click the Import data
Note: If you have already added an Amazon S3 connection earlier, click the Saved connections category in the left pane and proceed to import. Refer to the Saved connections help section to learn more.
Click the Add new link to add a new Amazon S3 account. You can create as many Amazon S3 connections as required.

9. Once you have completed importing data, the Pipeline builder page opens and you can start applying transforms. You can also right-click the stage and choose the Prepare data option to prepare your data in the DataPrep Studio page. Refer to the transforms help section to know more.
To import files using Advanced selection:

2. Provide the following details:
Bucket name: The name of the bucket you want to import data from.

Folder path: The path of the folder you want to import data from.
Info: Folder path is case-sensitive.

File pattern: The pattern used to match the file names in the bucket. This supports regex-style matching. You can also use the pattern .* to match any file in the specified path.
Info: File pattern is case-sensitive.
Note: The file pattern match is a simple regex-style match. For example, to fetch files with names such as Sales_2022.csv, Sales_2023.csv, and Sales_2024.csv, input the pattern Sales_.*. Similarly, to fetch files such as PublicData1.csv, PublicData2.csv, and PublicData3.csv, use the pattern Public.*
If you want to import a single file, specify the pattern using the exact file name.
E.g., leads_jan_2022.*
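The regex-style, case-sensitive matching described above can be sketched in Python. Here, matches_pattern is an illustrative helper, not a DataPrep API, and anchoring the pattern to the full file name is an assumption:

```python
import re

def matches_pattern(file_name: str, pattern: str) -> bool:
    """Return True if the file name matches the regex-style pattern.

    Mirrors the case-sensitive behavior described above; re.fullmatch
    anchors the pattern to the whole name, so "Sales_.*" matches
    "Sales_2022.csv" but not "OldSales_2022.csv".
    """
    return re.fullmatch(pattern, file_name) is not None

files = ["Sales_2022.csv", "Sales_2023.csv", "PublicData1.csv", "notes.txt"]

# Sales_.* selects only the Sales files; .* selects every file.
print([f for f in files if matches_pattern(f, r"Sales_.*")])
print([f for f in files if matches_pattern(f, r".*")])
```

Note that because matching is case-sensitive, sales_2022.csv would not match the pattern Sales_.*.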
Include subfolders: Select this checkbox if you want to include subfolders while searching for files.
Info: This option can merge a maximum of 5 files at a time.
Note: If this checkbox is unchecked, only one file will be fetched at a time. For spreadsheet files, only one sheet will be fetched at a time.
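How the folder path, file pattern, and Include subfolders options interact when selecting objects can be sketched as follows. The select_keys helper and the sample keys are illustrative assumptions, not DataPrep's implementation:

```python
import re

def select_keys(keys, folder_path, pattern, include_subfolders):
    """Filter S3-style object keys the way the import options describe."""
    selected = []
    for key in keys:
        if not key.startswith(folder_path):        # folder path is case-sensitive
            continue
        remainder = key[len(folder_path):].lstrip("/")
        if not include_subfolders and "/" in remainder:
            continue                               # file sits in a subfolder: skip
        file_name = remainder.rsplit("/", 1)[-1]
        if re.fullmatch(pattern, file_name):       # file pattern is case-sensitive
            selected.append(key)
    return selected

keys = [
    "exports/Sales_2022.csv",
    "exports/archive/Sales_2021.csv",
    "exports/readme.txt",
]
# Without subfolders only the top-level Sales file is selected;
# with subfolders the archived one is selected too.
print(select_keys(keys, "exports", r"Sales_.*", include_subfolders=False))
print(select_keys(keys, "exports", r"Sales_.*", include_subfolders=True))
```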
Note: Supported formats are CSV, TSV, JSON, XML, TXT, XLS, and XLSX. You can also import files in zipped format.

File Parsing is the process of interpreting and structuring a file during import so that the data is correctly organized into rows and columns for processing.
In File Parsing, there are two options: Auto Parsing and Custom Parsing.
Custom Parsing includes the following options:
File encoding: Choose the character encoding, such as UTF-8, used to read the file through the File Encoding option.
Text qualifiers: You can specify the characters that indicate the beginning and end of a text field, such as Single Quote (') or Double Quote (").
Delimiter: You can separate or split the data using a delimiter such as Comma (,), Semicolon (;), Space, Tab, or Pipe (|). You also have a Custom delimiter option to define your own separator.
Skip initial rows: Skip parsing a specified number of rows at the beginning of the file.
Comment character: Specifies the first character of a commented row. Commented rows will be skipped during import.
Escape character: Specifies the character used to escape delimiters or quotes so they are treated as plain text. Available options include Double Quote ("), Backslash (\), Pipe (|), Caret (^), and Tilde (~).
Trim spaces automatically: Removes leading and trailing whitespaces from all columns during data import.
Data contains header: Specify the row number that should be used as the column header.
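The Custom Parsing options above correspond closely to settings in standard CSV parsers. Here is a rough sketch using Python's csv module; the sample data and the order in which the options are applied are assumptions for illustration, not DataPrep internals:

```python
import csv

raw_lines = [
    "banner row to skip",    # removed by Skip initial rows = 1
    "# exported 2024-01-01", # removed by Comment character "#"
    "name;city",             # used as header (Data contains header)
    "'Ada';' London '",
    "'Linus';'Helsinki'",
]

skip_initial = 1
comment_char = "#"

lines = raw_lines[skip_initial:]                               # skip initial rows
lines = [l for l in lines if not l.startswith(comment_char)]   # drop commented rows

# Delimiter = Semicolon (;), Text qualifier = Single Quote (')
rows = list(csv.reader(lines, delimiter=";", quotechar="'"))
header, data = rows[0], rows[1:]

# Trim spaces automatically: strip leading/trailing whitespace in every cell
data = [[cell.strip() for cell in row] for row in data]

print(header)  # ['name', 'city']
print(data)    # [['Ada', 'London'], ['Linus', 'Helsinki']]
```

Changing the delimiter, qualifier, or skip count in the sketch shows how each Custom Parsing option reshapes the resulting rows and columns.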
SEE ALSO
How to import data from cloud databases?
How to import data from saved data connections?
What other cloud storage options are available in Zoho DataPrep?