Expensify connector for Zoho DataPrep [BETA]

Zoho DataPrep allows you to bring in data from Expensify, a platform for managing business expenses and transactions. With this connector, you can schedule regular imports, prepare your data, streamline your ETL and expense workflows, and simplify data movement between Expensify and Zoho DataPrep to gain valuable financial insights.

To import data from Expensify

1. Open an existing pipeline, or create a pipeline from the Home page, Pipelines tab, or Workspaces tab, and click the Add data option.

Info: You can also click the Import data icon at the top of the pipeline builder to bring data from multiple sources into the pipeline.


2. Choose the Finance category from the left pane and click Expensify. You can also search for Expensify in the search box.



Note: If you have already added a connection earlier, click the required connection and proceed to import. You can also find your saved connections under the Saved connections category in the left pane. To learn more about saved connections, click here.

3. Select an account from the saved connections, or connect a new account using the Add new option.




To authenticate your Expensify account

4. Provide a unique Connection name, and the Partner user ID and Partner user secret of your Expensify account in the respective fields.


To get the Partner user ID and Partner user secret

1. Sign in to your Expensify account and visit the Integrations page (expensify.com/tools/integrations).

2. A pair of credentials, partnerUserID and partnerUserSecret, will be generated and shown on the page.



Note: Make sure to store the partnerUserID and partnerUserSecret pair in a secure location, as they will not be shown again.
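
For reference, these credentials are what Expensify's Integration Server API expects in the requestJobDescription payload. The sketch below only builds the payload to show where the partnerUserID and partnerUserSecret fit; the credential values and filter dates are placeholders, and this is illustrative rather than DataPrep's internal code.

```python
import json

# Placeholder credentials -- use the pair generated for your account.
PARTNER_USER_ID = "aa_example_partner_id"
PARTNER_USER_SECRET = "example_partner_secret"

def build_job_description(start_date="2024-01-01"):
    """Build a requestJobDescription payload in the shape Expensify's
    Integration Server expects; the inputSettings values here are
    illustrative."""
    return {
        "type": "file",
        "credentials": {
            "partnerUserID": PARTNER_USER_ID,
            "partnerUserSecret": PARTNER_USER_SECRET,
        },
        "inputSettings": {
            "type": "combinedReportData",
            "filters": {"startDate": start_date},
        },
    }

payload = build_job_description()
# The payload is sent form-encoded as requestJobDescription=<json> to
# https://integrations.expensify.com/Integration-Server/ExpensifyIntegrations
print(json.dumps(payload, indent=2))
```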

5. Choose the required Modules, and the corresponding Fields will be displayed. Select the Modules and Fields you would like to import.

Here is the list of supported modules:
  1. Policy

  2. Reports

  3. Expenses

  4. Expenses Attendees

  5. Reports Approver
    Note: Only approved modules are allowed for import

  6. Reports history and comments

  7. Categories

  8. People

  9. Tax

  10. Tags




6. Choose one of the options to import data from your Expensify account.

All Data - This option imports all data from your account.
From date - This option imports data from a specific date up to the current date.
Note: The From date option is available only for the following modules. For modules that do not support this option, all data will be imported.
  1. Reports

  2. Expenses

  3. Expenses Attendees

  4. Reports Approver

  5. Reports history and comments
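
The two import options above can be sketched as a fetch filter. This is an illustrative Python sketch under assumed field names (startDate/endDate), not DataPrep's actual implementation:

```python
from datetime import date

def build_import_filter(mode, from_date=None):
    """Translate the import options into a date filter.
    mode 'all' imports everything; mode 'from_date' imports data from
    from_date up to the current date (for modules that support it)."""
    if mode == "all":
        return {}  # no date restriction: fetch all data
    if mode == "from_date":
        if from_date is None:
            raise ValueError("from_date is required for this mode")
        return {
            "startDate": from_date.isoformat(),
            "endDate": date.today().isoformat(),  # up to the current date
        }
    raise ValueError(f"unknown mode: {mode}")
```

For example, `build_import_filter("from_date", date(2024, 1, 1))` restricts the fetch to data since 1 January 2024, while `build_import_filter("all")` applies no restriction.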

7. Once you have completed importing data, the Pipeline builder page opens and you can start applying transforms to your ETL pipeline. You can also right-click the stage, choose the Prepare data option, and prepare your data in the DataPrep Studio page. Click here to learn more about transforms.



Note: When you import more than one module from your Expensify account, each dataset is created as a separate stage in DataPrep, as shown above.
8. Once you are done creating your data flow and applying the necessary transforms in your stages, you can right-click a stage and add a destination to complete your data flow.
Note: After adding a destination to the ETL pipeline, try executing your pipeline with a manual run first. Once you confirm the manual run works, you can set up a schedule to automate the pipeline and data movement. Learn about the different types of runs here.

Schedule

You can schedule your ETL pipeline using the Schedule option. 

Schedule configuration

1. Select the Schedule option in the pipeline builder.

2. Select a Repeat method (hourly, daily, weekly, monthly) and set the frequency using the Perform every dropdown. The options in the Perform every dropdown change with the Repeat method. Click here to learn more.




3. Select the time zone (GMT offset) in which you want to import new data found in the source. By default, your local time zone is selected.


4. Pause schedule after: This option allows you to pause the schedule after n failures.
Info: The range can be between 2-100. The default value is 2.
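
The pause condition can be sketched as follows. This is an illustrative sketch that assumes the n failures must be consecutive; it is not DataPrep's actual logic:

```python
def should_pause(run_failed_history, pause_after=2):
    """Return True when a schedule should pause: the most recent
    `pause_after` runs all failed. Mirrors the Pause schedule after
    setting (range 2-100, default 2). run_failed_history is a list of
    booleans, True = run failed, most recent last."""
    if not 2 <= pause_after <= 100:
        raise ValueError("Pause schedule after must be between 2 and 100")
    recent = run_failed_history[-pause_after:]
    return len(recent) == pause_after and all(recent)
```

For example, with the default of 2, `should_pause([False, True, True])` is True (two consecutive failures), while `should_pause([True, False])` is False because the latest run succeeded.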

Import configuration

You can configure how data is imported and fetched from your Expensify account using the Import configuration option. Refer to the sheet below to view the available import configuration options for each module and the corresponding run types.
Note: The import configuration must be set up for all the sources in the pipeline. Without it, the schedule cannot be saved.


5. Select the Click here link to set the import configuration. 

6. Select the required option from the How to import data from source? drop-down. Depending on the module you imported, you can choose to import all data, import only modified and new data, or not import at all.

Import all data  

If you want to import all data, select the date from which data needs to be imported in the From field. This option imports all available data from the selected date.



Note: The From option is available only for selected modules (refer to the sheet above). For modules that do not support this option, all data will be imported.

Incremental data fetch

Only modified and new data

To import modified and new data incrementally since the last import, select the Only modified and new data option from the drop-down.



Use existing data if new data is not available: 

During incremental import,

  1. If the checkbox is checked: When there is no new data in the source, the last fetched data will be imported again.
  2. If the checkbox is unchecked: When there is no new data in the source, the import will fail and no files will be imported. This will, in turn, cause the entire pipeline job to fail.
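
The two checkbox behaviors above can be summarized in a short sketch. This is illustrative pseudologic with assumed names, not DataPrep's implementation:

```python
def run_incremental_import(fetch_new, last_fetched, use_existing_if_empty):
    """Sketch of the 'Only modified and new data' behavior.
    fetch_new() returns the records added or modified since the
    last import; last_fetched is the previously imported data."""
    new_data = fetch_new()
    if new_data:
        return new_data
    if use_existing_if_empty:
        # Checkbox checked: re-import the last fetched data.
        return last_fetched
    # Checkbox unchecked: the import fails, which fails the pipeline job.
    raise RuntimeError("No new data in source; import failed")
```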

Do not import data 

The data is imported only once. On subsequent runs, the transforms are applied to the same data and the result is exported.



7. Click Save to schedule import for your data.
Note: If you have already configured a schedule for Expensify, your earlier settings under the Import configuration section will be reloaded when you click the Edit schedule option and set a new schedule.

Schedule settings

Stop export if data has invalid values: Enabling this will stop the export when the prepared data still has invalid values.



Order exports

You can use this option when you have configured multiple destinations and want to determine the order in which data is exported to them.

If not enabled, export will run in the default order.
Note: This option will be visible only if you have added more than one destination in your pipeline.

To rearrange the order of your export destinations

1) Click the Order exports toggle.

2) You can drag and drop to change the order of the destinations and then click Save.



Note: Click the Edit order link if you want to rearrange the order again.


8. After you complete the schedule configuration, click Save to execute the schedule. This will start the ETL pipeline.



Each scheduled run is saved as a job. When a pipeline is scheduled, the data will be fetched from your data sources, prepared using the series of transforms you have applied in each of the stages, and then data will be exported to your destination through seamless data integration at regular intervals. This complete process is captured in the job history.

9. To go to the jobs list of a particular pipeline, click the ellipses icon in the pipeline builder, and then click the Job history menu to check the job status of your pipeline.

10. Click the required job ID on the Job history page to navigate to the Job summary of a particular job.

The Job summary shows the history of a job executed in a pipeline flow. Click here to learn more.

11. When the schedule is completed, the data prepared in your pipeline will be exported to the configured destinations.
Info: You can also view the status of your schedules later on the Jobs page.
Note: If you make any further changes to the pipeline, the changes are saved as a draft version. Choose the Draft option and mark your pipeline as ready for the changes to reflect in the schedule.




After you set your schedule, you can Pause schedule or Resume schedule, Edit schedule, and Remove schedule using the Schedule Active option in the pipeline builder.

When you edit and save a schedule, the next job will import data from the last schedule run time up to the next scheduled run.
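
In other words, no interval is skipped when a schedule is edited: the next job's window starts where the last run left off. A minimal sketch of that window, with illustrative names:

```python
from datetime import datetime, timedelta

def next_job_window(last_run, repeat_every):
    """Return the (start, end) span the next job covers: from the
    last schedule run up to the next scheduled run. Sketch of the
    behavior described above, not DataPrep's actual code."""
    return (last_run, last_run + repeat_every)

# e.g. a 12-hourly schedule whose last run was at 06:00 on 1 May 2024
start, end = next_job_window(datetime(2024, 5, 1, 6, 0), timedelta(hours=12))
```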

Important: Adding Expensify as destination and pushing data to Expensify from DataPrep is not supported yet.

Limitations

  1. Incremental fetch is supported only at the date level; fetching data based on a specific time in a day is not supported.
