Test Case Summary for DDT

1.  View data-driven test case execution summary from test plan results   

Data-driven test cases automate test execution by running the same test case multiple times with different input data to validate various scenarios. Learn more about data-driven test cases. When included in a test plan, they can be scheduled for regular execution, allowing you to track performance over time and across different environments and platforms. Managing the results of these test cases can become complex, especially with large data sets.

In the results section, data-driven test cases are consolidated into a single entity, with a comprehensive summary capturing the outcomes of all test runs, providing valuable insights for further analysis.

2. Steps to view the test case summary

After the execution of data-driven test cases within a test plan is complete, you can view their results in the Results tab. 

Follow these steps to view the summary of data-driven test cases from the Results page:
  1. Navigate to Results, then select the environment for which you want to view the results.

  2. Click on the required data-driven test case, depicted by the icon { }. This allows you to view the list of its iterations.

Note: You can click the Run History icon to view the test case's execution history. To navigate to the test case, click the View Test Case icon.
  3. View the overall execution overview of the test case.


The key components on this page, identified by annotation number, are:

  1. The name of the test case being viewed.
  2. The total number of iterations in the selected test case.
  3. The number of passed iterations in the selected test case.
  4. The number of failed iterations in the selected test case.
  5. The success rate of the selected test case as a percentage, calculated from the passed and failed counts of its iterations.
  6. The list of row iterations executed within the selected test case.
  7. The result status of each iteration in the selected test case.
  8. The status indicator showing whether forecast failure was enabled for the iterations.

Note: Forecast failure allows you to mark certain datasets as expected to fail, which changes how their results affect the overall outcome. Learn more

Scenarios:
  • If forecast failure is enabled for a dataset and it fails: the failure does not cause the overall result to fail. The failure was expected and is treated as a success for the overall result.
  • If forecast failure is enabled for a dataset and it passes: the success of this iteration causes the overall result to fail, because the system expected this dataset to fail and a pass is an unexpected outcome.
  • If forecast failure is disabled for a dataset: the results are calculated normally. A pass means the iteration passed, and a failure causes the overall result to fail.

  9. The time when the iteration started its execution.
  10. The total execution time of the iterations.
  11. The button used to initiate a re-run of the selected test case.
  12. The comprehensive execution history of the selected test case.
  13. The navigation control, which allows you to move between the other test cases in the test plan.
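As a rough illustration, the forecast-failure scenarios and the summary counts above could be computed as follows. This is a minimal sketch: the function names and the `(passed, forecast_failure)` data shape are assumptions for illustration, not the tool's actual implementation.

```python
def iteration_counts_as_pass(passed: bool, forecast_failure: bool) -> bool:
    """An iteration contributes a pass to the overall result when its
    outcome matches the expectation set by the forecast-failure flag."""
    if forecast_failure:
        # Expected to fail: an actual failure is the expected outcome,
        # so it is treated as a success for the overall result.
        return not passed
    return passed

def summarize(iterations):
    """iterations: list of (passed, forecast_failure) tuples.
    Returns the counts and success rate shown on the summary page."""
    effective = [iteration_counts_as_pass(p, f) for p, f in iterations]
    passed = sum(effective)
    total = len(effective)
    return {
        "total": total,
        "passed": passed,
        "failed": total - passed,
        "success_rate": round(100 * passed / total, 2) if total else 0.0,
        "overall": "Passed" if all(effective) else "Failed",
    }
```

For example, an iteration that passes while forecast failure is enabled counts as a failure here, matching the second scenario above: `summarize([(True, False), (True, True)])` reports an overall "Failed" with a 50% success rate.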

  4. Click on the required iteration to view its detailed logs.

The iteration's test summary page contains the following components, identified by annotation number:

  1. The name of the test case being viewed.
  2. The indicator showing that the current test case is a data-driven test case.
  3. The drop-down that allows you to view the logs of other iterations.
  4. The status (Passed or Failed) of the selected iteration.
  5. The button that initiates a re-run of the selected iteration.
  6. The time when the test case started its execution.
  7. The time taken to execute the iteration.
  8. The version of the test case that was executed.
  9. The agent that was used to execute the test case.
  10. The environment chosen to execute the test case.
  11. The individual test steps of the selected iteration that were executed.
  12. The details required to track how test inputs were processed, helping you verify the correctness of the test case execution.
  13. The self-healed element locators.

Note: To use this feature, you need to enable Self-Healing in the Preferences section of the settings.

  14. The filter used to narrow the log to errors or warnings, with the options All, Show Errors, and Show Warnings.
  15. The option to export logs for the iteration being viewed.
  16. The step-by-step execution screenshots of the selected iteration, which can be viewed in full screen.
  17. The button that opens the contents of the viewed iteration's test case in a separate tab.
  18. The comprehensive execution history of the selected test case.
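The error/warning filter (All, Show Errors, Show Warnings) behaves like a simple severity filter over the log entries. The sketch below illustrates that behavior; the log-entry structure, level names, and function name are assumptions, not the product's actual API.

```python
from typing import Iterable

def filter_logs(entries: Iterable[dict], option: str) -> list:
    """Return the log entries matching the selected filter option.
    Each entry is assumed to carry a 'level' key ('error', 'warning', 'info')."""
    if option == "All":
        return list(entries)
    if option == "Show Errors":
        return [e for e in entries if e["level"] == "error"]
    if option == "Show Warnings":
        return [e for e in entries if e["level"] == "warning"]
    raise ValueError(f"Unknown filter option: {option!r}")
```

For example, with a mixed log, `filter_logs(entries, "Show Errors")` keeps only the error-level entries, while `"All"` leaves the log unchanged.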

3. Related Links