Improving video quality using an automated video testing approach

August 24, 2021

Co-authored by: Banumathi Palanichamy

Any software release must pass through a testing process to guarantee high-quality deliverables, and much of that process can be automated. Automated-testing methodologies differ depending on the system and the process under test. Domains such as automotive, broadcast verification, and gaming software development rely heavily on video, so test automation must go hand-in-hand with the video testing process. These real-time scenarios require testing against recorded video files or live video streams.


Moreover, video verification is difficult: finding distorted or erroneous frames in a large video by hand is time-consuming and costly. Automated testing emerged to remove this burden. Because it is carried out by software tools, it does not tire or fatigue the way a human tester does, and it is more reliable than manual inspection. For these reasons, automated video testing has become firmly established in its own right.

This blog post focuses on automating the video verification process. For example, if a video contains a black frame or a frozen frame, it cannot be passed on for further video quality testing. Here, video quality testing focuses on:

  • Black frame identification (of any solid color)
  • Frozen frame existence

A Detailed View

The picture below gives a general overview of video source processing for video quality testing:


Figure 1: Overview of Video Source Processing

Phase description in the above image:

Phase 1: Input video to test PC

Phase 2: Frame generation, compare images using the Compare Pixel tool, and data validation

Phase 3: Output (Black/frozen frames)

The components of black frame identification are as follows.

The first component is a set of configurable inputs: a recorded or live video, plus the pixel index (a value from 0 to 255) of the frame color to be identified. Since RGB pixels are typically stored with 8 bits per channel, each channel takes a value in the range 0 to 255.
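To make the pixel index concrete, here is a minimal sketch (the helper name is hypothetical, not part of the tool described here) showing how a packed 24-bit RGB value decomposes into three 8-bit channels, each in the 0–255 range:

```python
def channels_from_packed(rgb: int):
    """Split a packed 24-bit RGB value into its 8-bit R, G, B channels."""
    return ((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF)

# A pure black frame has all channels at index 0; pure white at 255.
print(channels_from_packed(0x000000))  # (0, 0, 0)
print(channels_from_packed(0xFFFFFF))  # (255, 255, 255)
```

A configured pixel index of 0 therefore means "every channel of every pixel sits at value 0", i.e. a black frame.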

The second component will have frame generation, the process tool (Histogram), RGB validation, and the resultant frame/report (Figure 1):

  • Frame generation: The video is split into frames according to the video framerate for processing.
  • Process tool (Histogram): This computes a histogram for each frame and compares the histogram values against the configured RGB pixel value. These values are the input to the validation phase.
  • RGB validation: This validates the output of the process tool, an array of values per RGB channel.
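The histogram check can be sketched as follows. This is an illustrative example, not the tool described in this post; it assumes frames are delivered as NumPy arrays (height × width × 3, 8-bit), and the `tolerance` threshold is a hypothetical parameter:

```python
import numpy as np

def is_solid_color_frame(frame: np.ndarray, target: int = 0,
                         tolerance: float = 0.99) -> bool:
    """Return True if, in every channel, at least `tolerance` of the
    pixels fall on the configured pixel index `target` (0 = black)."""
    for ch in range(frame.shape[2]):
        # 256-bin histogram over the 8-bit channel values
        hist, _ = np.histogram(frame[:, :, ch], bins=256, range=(0, 256))
        if hist[target] / frame[:, :, ch].size < tolerance:
            return False
    return True

black = np.zeros((120, 160, 3), dtype=np.uint8)          # all-black frame
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)  # normal content
print(is_solid_color_frame(black))   # True
print(is_solid_color_frame(noisy))   # False
```

Passing a different `target` (for example 255) detects solid frames of another color, matching the "black frame by any color" idea above.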

In summary, the second component reads the configurable items, analyzes the images for histogram values, performs data validation, and provides a consolidated result. The flow chart for identifying black frames in a video is shown below (Figure 2).

The components of frozen frame identification are as follows.

The first component is a set of configurable inputs: a recorded or live video, plus the frozen-frame threshold in seconds (for example, 2).

The second component will have frame generation, the process tool (Compare Pixel), validation, and the resultant frame/report:

  • Frame generation: The video is split into frames according to the video framerate for processing.
  • Process tool (Compare Pixel): This module compares the pixels of the current frame with those of the next frame. The comparison result is the input to the validation phase.
  • Validation: This module validates the comparison result produced by the process tool.
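The compare-and-validate steps above can be sketched as follows. Again this is an illustrative example rather than the actual tool; the frames are assumed to be NumPy arrays, and the `max_mean_diff` noise threshold is a hypothetical parameter:

```python
import numpy as np

def frozen_runs(frames, fps: float = 25.0, freeze_seconds: float = 2.0,
                max_mean_diff: float = 1.0):
    """Yield (start_index, length) of runs where consecutive frames are
    (near-)identical for at least `freeze_seconds` of video."""
    min_run = int(freeze_seconds * fps)
    run_start, run_len = 0, 1
    for i in range(1, len(frames)):
        # Mean absolute pixel difference between this frame and the last
        diff = np.abs(frames[i].astype(np.int16)
                      - frames[i - 1].astype(np.int16)).mean()
        if diff <= max_mean_diff:
            run_len += 1
        else:
            if run_len >= min_run:
                yield run_start, run_len
            run_start, run_len = i, 1
    if run_len >= min_run:
        yield run_start, run_len

# 60 identical frames at 25 fps = 2.4 s of frozen video
static = [np.full((10, 10, 3), 128, dtype=np.uint8)] * 60
print(list(frozen_runs(static)))  # [(0, 60)]
```

The small `max_mean_diff` tolerance allows for compression noise, so frames need not be bit-identical to count as frozen.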

In summary, the second component reads the configurable items, analyzes the images by comparing frame pixels, performs validation, and provides a consolidated result. The flow chart for identifying frozen frames in a video is shown below (Figure 3).

The Way Forward

This blog post has shown how video quality testing can be accelerated. A given video can be tested from the Device Under Test (DUT) using the following approaches:

Black Frame Identification


Figure 2: Flow Chart for Black Frame Identification

This approach helps a user find error or distorted frames in a video automatically: it can detect a black frame (or any solid-color frame whose pixels share the same RGB values) as well as a frozen frame. The black frame detection algorithm validates each frame using the histogram tool combined with data validation.

Frozen Frame Identification

The frozen frame detection algorithm compares consecutive frames to each other and validates the result. It efficiently guides the user by reporting the details of the error frames in a video.


Figure 3: Flow Chart for Frozen Frame Identification

Conclusion

This approach can perform test automation from a single computer, processing different videos in parallel. It is a cost-effective, reliable solution with low complexity in both setup and operation.

Now is the time to recognize the value of video testing from diverse angles.
