Data-driven test cases are designed to run with varying input data to ensure comprehensive validation under diverse conditions. Learn more about data-driven test cases.

Once executed, Zoho QEngine provides real-time visibility into the process, allowing for swift identification of any errors or defects within the tested system. As the test case runs through each data set, it automatically applies the data, executes it, and logs the results. Monitoring these real-time logs lets you detect issues as they happen, making it easier to confirm that the tested service or application meets expected outcomes across all scenarios.

To run and preview test cases, configure a test run using either Zoho Cloud or a local agent. After execution, a comprehensive summary displays the overall outcome, the status of passed and failed iterations within the test case, and other key details. You can also access detailed logs for each iteration for further analysis.

For example, suppose you're testing a form where users enter data, and you want to verify whether it accepts different input types such as integers, strings, and boolean values. Inputs of each data type are tested in separate iterations. As the test runs, data is entered and results are logged in real time, showing whether each input is accepted or rejected. With live preview, you instantly see the results, allowing you to catch issues like unsupported input types as they happen. After the test completes, a summary shows the number of successful and failed entries, with detailed logs for each iteration.
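Zoho QEngine defines data sets through its own interface; purely as an analogy, the form example above can be expressed as a data-driven test in Python with pytest. The form_accepts helper and the expected outcomes here are hypothetical stand-ins, not part of the product:

```python
import pytest

# Hypothetical stand-in for submitting a value to the form under test;
# returns True when the form accepts the input.
def form_accepts(value):
    return type(value) in (int, str)

# Each tuple is one data set; pytest runs a separate iteration per tuple,
# mirroring how a data-driven test case iterates over its data source.
@pytest.mark.parametrize("value, expected", [
    (42, True),         # integer input, expected to be accepted
    ("hello", True),    # string input, expected to be accepted
    (True, False),      # boolean input, expected to be rejected
])
def test_form_input_types(value, expected):
    assert form_accepts(value) == expected
```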
2. Live preview of data-driven test cases
To preview the real-time executions of both web and mobile data-driven test cases, customize the run configurations using the steps below.
Click Run on a test case to initiate a preview run.
Note: In order to run a live preview, the test case has to be saved.
Select the desired Agent, Environment, and other required details.
Customize your run for a web test case
Customize your run for an Android or iOS test case

| Fields | Description |
| --- | --- |
| Agents | Zoho QEngine provides two options to run test cases: Zoho Cloud and Agent. By default, Zoho Cloud is selected; it lets you choose the desired environment for your preview run. Note: Zoho Cloud is currently not supported for Android and iOS. Note: To perform testing on physical devices, they must be linked and configured with the local agent. |
| Environment (Browser, Browser Version, Device, Version) | Lets you select the platform-specific settings for testing under a chosen configuration: for web, the browser and its version; for Android and iOS, the device name and OS version. Note: If you opt for Zoho Cloud, the test case runs in headless mode, meaning the browser's graphical user interface (UI) won't be visible. With local agents, both headless and standard browser modes (such as Chrome, Firefox, or Safari) are supported, letting you watch the test steps live if you choose the non-headless option. Regardless of the mode, the test case executes and the results are displayed. |
| Screen Resolution | Applicable to the web browser platform. Specifies the device's resolution in pixels (e.g., 1920x1080), ensuring the test case reflects preferred user screen sizes. |
| APK / IPA File | Applicable to the Android and iOS platforms. Select the application package file used to install the app on Android (APK) or iOS (IPA) devices. |
| Environment Variable | Optional. Lets you override global variable values with environment-specific ones. If no environment variable is selected, the default global value set during test case creation is used. |
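Conceptually, the environment-variable fallback behaves like a lookup with a default. A minimal sketch in Python, assuming hypothetical global and per-environment value maps (QEngine manages these internally):

```python
# Hypothetical variable stores for illustration only.
GLOBAL_VARIABLES = {"base_url": "https://example.com", "timeout": "30"}
ENVIRONMENT_VARIABLES = {
    "staging": {"base_url": "https://staging.example.com"},
}

def resolve_variable(name, environment=None):
    """Return the environment-specific value if one exists,
    otherwise fall back to the global default set at creation time."""
    if environment is not None:
        env_values = ENVIRONMENT_VARIABLES.get(environment, {})
        if name in env_values:
            return env_values[name]
    return GLOBAL_VARIABLES[name]

# "base_url" is overridden for staging; "timeout" falls back to the global value.
assert resolve_variable("base_url", "staging") == "https://staging.example.com"
assert resolve_variable("timeout", "staging") == "30"
```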
Check Save as preferred configuration to avoid configuring the run every time. If not, Zoho QEngine will ask for your choice every time you click Run.
Note: If you set a preferred configuration, click the dropdown next to Run, then click Customize to change the configurations again.
Once the required run configuration is set up, click Run to execute the test case and display the results for each data set.
After execution is complete, you can view the Preview Run. Use this summary to analyze the results and debug each data set.
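For context on what headless means in the configuration table above: a headless browser runs without rendering its window. Zoho QEngine handles browser startup for you; purely as a generic illustration (plain Selenium is an assumed stand-in, not QEngine's confirmed internals), this is how Chrome is launched headless versus with a visible window:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # omit this line for a visible browser
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")  # steps still execute; no window is shown
    print(driver.title)                # results remain available either way
finally:
    driver.quit()
```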
3. Preview run for data-driven test cases
The preview run showcases the execution results of each iterated data set. It includes iteration logs; the agent, device, and screen resolution used; and other details of the test case execution.
3.1 Components
The preview run is composed of the following components:

3.1.1 Run Summary
This section displays the overall result of the test case, including whether it was stopped, passed, or failed. It also shows the test execution duration, the version of the test case run, the agent selected for the run, and platform details such as the OS, its version, and the screen resolution.
Results
The overall result is calculated based on the results of individual iterations and whether forecast failure is enabled for each data set. Forecast failure allows you to mark certain data sets as expected to fail, which changes how their results affect the overall outcome.
Scenarios:
If forecast failure is enabled for a data set and it fails: The failure will not cause the overall result to fail. This failure was expected and is treated as a success during the overall result calculation.
If forecast failure is enabled for a data set and it passes: The success of this iteration will cause the overall result to fail. The system expected this data set to fail, so a pass is considered an unexpected outcome.
If forecast failure is disabled for a data set: The results are calculated normally. A success means the iteration passed, and a failure means the overall result will fail.
For example, consider a test case that checks whether an email field accepts valid emails and rejects invalid ones. The data source includes both valid and invalid emails, and forecast failure is enabled for all the invalid emails that are expected to fail.
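A minimal sketch of this calculation in Python, assuming each iteration is represented as a (passed, forecast_failure) pair; the names are illustrative, not QEngine's internals:

```python
def overall_result(iterations):
    """Compute the overall result from (passed, forecast_failure) pairs.

    A forecast-failure data set meets expectations only when it fails;
    every other data set meets expectations only when it passes.
    """
    for passed, forecast_failure in iterations:
        expected = (not passed) if forecast_failure else passed
        if not expected:
            return "Failed"
    return "Passed"

# Email-field example: valid emails should pass, invalid ones are
# forecast to fail. The run passes only if every expectation holds.
iterations = [
    (True, False),   # "user@example.com" accepted: expected pass
    (False, True),   # "not-an-email" rejected: expected (forecast) failure
]
print(overall_result(iterations))  # Passed
```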

When the test case functions as expected, the overall result is marked as passed, and the results of every iteration are listed as follows.

When at least one of the iterations doesn't function as expected, the overall result is marked as failed.

In this example, one data set did not produce the outcome expected by its forecast failure setting, which caused the overall result to fail.
3.1.2 Individual Iteration Results
This list contains the iteration results of each data set.
Data Set Name
This unique name is used to identify the logs of each data set.
Result
This represents the status of each data set: Yet to start, Running, Stopped, Terminated, Passed, or Failed (modeled in the sketch after this list).
Yet to start: The iteration has not begun processing and is queued for execution.
Running: The iteration is actively being executed, with data being processed in real time.
Stopped: The iteration was halted manually before completing, leaving the execution in an incomplete state.
Terminated: The iteration was stopped due to an unexpected issue, forcing the execution to halt.
Passed: The iteration successfully completed with all expected outcomes met.
Failed: The iteration completed but did not meet the expected outcomes, indicating an error or issue.
Forecast Failure
This displays whether forecast failure is enabled or disabled for the respective data set. Learn more about forecast failure.
Start Time
This contains the date and time when the data set in the test case started its execution.
Duration
Displays the total execution time (in seconds) taken by the data set in the test case.
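As a compact way to picture these fields together, here is a hypothetical record in Python; the names mirror the fields above but are not QEngine's API:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class IterationStatus(Enum):
    """Possible states of a data set's iteration, as listed above."""
    YET_TO_START = "Yet to start"
    RUNNING = "Running"
    STOPPED = "Stopped"
    TERMINATED = "Terminated"
    PASSED = "Passed"
    FAILED = "Failed"

@dataclass
class IterationResult:
    """One row of the individual iteration results list (illustrative)."""
    data_set_name: str          # unique name identifying the data set's logs
    result: IterationStatus     # current status of the iteration
    forecast_failure: bool      # whether the data set is expected to fail
    start_time: datetime        # when the iteration began executing
    duration_seconds: float     # total execution time of the data set

# An expected failure: forecast failure is enabled and the iteration failed.
example = IterationResult(
    data_set_name="invalid_email_1",
    result=IterationStatus.FAILED,
    forecast_failure=True,
    start_time=datetime(2024, 1, 1, 10, 0, 0),
    duration_seconds=4.2,
)
```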
On selecting a data set, you can view that iteration's logs; the preview looks similar to that of any test case. Click Restart/Stop to restart or stop the preview run using the same configuration chosen during the original run.
Note: If you click Show Less, the test case editor screen comes up. The preview screen minimizes to the bottom, showing the completion percentage, the test step currently under execution, and the last executed test step. Click Show More to bring the live preview results back on screen.
3.2 Iteration Logs
In addition to the components discussed earlier, the logs consist of the following:

Logs
These show the individual test steps of the selected iteration and whether each step executed successfully or failed.
Console log
This displays real-time warnings and errors for failed test cases, providing valuable debugging information necessary to troubleshoot and improve the execution of a test script.
Note: Test cases experience failures as a result of console log errors or warnings, but only when these issues impact the execution of subsequent steps.
Self-healing
This is a mechanism used to stabilize test cases when locators break. When self-healing is enabled, a broken locator is replaced with the next available locator from the prioritized list of locators for that element, as sketched below. Learn more about self-healing
Note: For this feature, you have to enable Self-Healing in the Preferences section of the settings.
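A minimal sketch of the fallback idea, assuming a hypothetical find_element helper and a prioritized locator list; this is not QEngine's actual implementation:

```python
# Hypothetical element-lookup helper; returns None when the locator is broken.
def find_element(page, locator):
    return page.get(locator)

def find_with_self_healing(page, locators):
    """Try each locator in priority order, healing over broken ones.

    `locators` is the prioritized list recorded for the element; the first
    locator that still resolves is used in place of the broken one.
    """
    for locator in locators:
        element = find_element(page, locator)
        if element is not None:
            return element
    raise LookupError("All locators for this element are broken")

# Usage: the stale id is skipped and the CSS locator heals the step.
page = {"css=button.submit": "<button>Submit</button>"}
print(find_with_self_healing(page, ["id=old-submit", "css=button.submit"]))
```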
Filter logs
You can filter the logs for errors or warnings using the options All, Show Errors, and Show Warnings.
Screenshots
This section displays the preview, a visual navigation of the screens accessed and actions performed on the web or mobile device. You can either play it as a video or move through the individual images using the Next and Previous controls. You can also replay the video preview at specific playback speeds (0.5x, 1x, 2x, or 3x).

To view the logs of other iterations, select the required one from the drop-down. To navigate back to the overview page of the test case's execution, click the back arrow in the left-hand corner.
5. What's Next?
Next Steps
These test cases can now be added to a test suite and then to a test plan. Once added, their results can be viewed in the Results tab.
Previous Steps
Prepare and configure the test cases, ensuring all required data inputs and dependencies are in place for the data-driven test case execution.