
Aerial Surveillance & Enhanced Vision System

Computer vision and image processing offer manifold opportunities for product development in the aerospace industry. Specific applications are built around core functions that extend across civilian and military domains. Using our proprietary technology, we provide image processing solutions for unmanned aerial vehicles (UAVs): real-time video capture and image exploitation on embedded platforms for surveillance and reconnaissance, enhanced fusion vision for situational awareness, and automatic vision inspection for rapid inspection of components in manufacturing. We also have expertise in processing high-resolution remote sensing imagery from satellites. These technologies translate into real-world applications that meet our clients’ individual needs and provide proven solutions.

  • Aerial Video Surveillance
  • Multi-sensor Fusion
  • Automatic Fuselage Vision Inspection
  • Satellite Image Processing

Aerial Video Surveillance

Airborne surveillance is widely used across a range of civilian and military applications, such as search and rescue missions, border security, resource exploration, wildfire and oil spill detection, target tracking, and general surveillance. An unmanned aerial vehicle (UAV) is equipped with day/night sensors to image objects on the ground; the actual recognition task is either assigned to the crew in real time or performed off-line on the ground from recorded image data. These pilotless, sensor-carrying platforms transmit data to a ground control station for analysis and interpretation.

Aerial Image Surveillance by HCL Tech

 

HCL utilizes its deep domain knowledge to offer a full suite of real-time aerial video surveillance capabilities, from turning blurred, noisy imagery into clear imagery to large-area visualization and video processing. We offer image processing technologies for real-time image acquisition, pre-processing for noise removal, and adaptive enhancement to create a comprehensive, panoramic field-of-view display. Beyond enhancement, our real-time capabilities include geo-registration against reference imagery to improve accuracy, search-area coverage, moving object tracking, region-of-interest processing, geo-location identification, image area measurement, and the creation of an image database of targets at different resolutions with feature details. A minimal sketch of the pre-processing step is given below.
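As an illustration of the pre-processing stage described above, the sketch below denoises a frame and applies adaptive contrast enhancement. It assumes OpenCV and NumPy; the function name and the denoising and CLAHE parameters are illustrative choices, not HCL's proprietary pipeline.

    import cv2
    import numpy as np

    def preprocess_frame(frame_bgr: np.ndarray) -> np.ndarray:
        """Denoise an aerial video frame and apply adaptive contrast enhancement."""
        # Edge-preserving noise removal on the color frame.
        denoised = cv2.fastNlMeansDenoisingColored(frame_bgr, None, h=7, hColor=7,
                                                   templateWindowSize=7, searchWindowSize=21)
        # Adaptive enhancement: CLAHE applied to the luminance channel only.
        lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)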

 

Embedded Vision Engine

We have designed and developed an embedded vision engine, the Ground Image Exploitation System (GIES), for an aeronautical agency. The system consists of multiple embedded vision processors on an industrial PC with a frame grabber and a graphics card driving multiple displays, together with GUI software built on image processing algorithms for aerial surveillance studies. Image exploitation is the acquisition and processing of sensory information about a scene or targets for surveillance applications. It involves processing large volumes of image data acquired by multiple sensors such as optical, infrared, and radar. For information extraction and quick analysis, the image data must be processed with fast computational image processing algorithms on an efficient embedded processor. The image exploitation system has built-in tools to acquire, store, retrieve, process, analyze, interpret, and display information from imagery during a vehicle mission. The data captured by the camera is transferred to a workstation and analyzed to produce imagery information; a simplified acquisition loop is sketched below.
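The hedged sketch below shows such an acquire, store, and display loop. It assumes OpenCV; the video source stands in for the frame-grabber input, and the archiving interval and file paths are placeholders rather than the actual GIES design.

    import os
    import cv2

    def exploit_stream(source: str = "downlink.avi") -> None:
        """Acquire mission video, archive frames for off-line analysis, and display them."""
        os.makedirs("archive", exist_ok=True)
        cap = cv2.VideoCapture(source)            # stands in for the frame-grabber input
        frame_id = 0
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            if frame_id % 25 == 0:                # store every 25th frame to the image database
                cv2.imwrite(os.path.join("archive", f"frame_{frame_id:06d}.png"), frame)
            cv2.imshow("Mission video", frame)    # operator display
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
            frame_id += 1
        cap.release()
        cv2.destroyAllWindows()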

Imagery information takes the form of video clips, video frames with the corresponding flight data, the calculated locations of targets, and related information. The extraction and exploitation of imagery intelligence from aerial surveillance improves understanding and interpretation of scene contents, allows the vehicle to see distant targets, and enhances surveillance capabilities. A snapshot of the image processing is given below.
 

Target area Image processing by HCL Tech

Mission Video Image Display by HCL Tech

 

Multi-Sensor Fusion

 

Multi-sensor data fusion combines information from multiple sensors and sources to achieve inferences that are not feasible from a single sensor or source. Fusing information from sensors with different physical characteristics enhances understanding of our surroundings and provides the basis for planning, decision-making, and control of autonomous and intelligent machines. Over the past decades it has been applied to fields such as pattern recognition, visual enhancement, object detection, and area surveillance. Image fusion is the process of combining images, obtained by sensors of different wavelengths simultaneously viewing the same scene, into a composite image. The composite image improves image content, makes it easier for the user to detect, recognize, and identify targets, and increases situational awareness.

HCL conducts research mainly on developing fusion algorithms that improve the information content of the composite imagery and make the system robust to variations in the scene, such as dust or smoke, and to environmental conditions such as day or night. The fused image carries enhanced information that is more understandable and decipherable for human perception and, preferably, for machine learning and computer vision. One of the multi-sensor applications developed for civilian study is the Enhanced Fusion Vision System (EFVS). A compact example of a pyramid-based fusion rule is sketched below.
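One common family of fusion algorithms combines registered visible and infrared images through a multi-resolution (Laplacian pyramid) decomposition. The sketch below is a compact example of that idea, assuming OpenCV and NumPy and two pre-registered grayscale images of the same size; the level count and the max-absolute fusion rule are illustrative choices, not HCL's algorithms.

    import cv2
    import numpy as np

    def laplacian_pyramid(img: np.ndarray, levels: int = 4) -> list:
        """Return a Laplacian pyramid ordered from coarse to fine."""
        gp = [img.astype(np.float32)]
        for _ in range(levels):
            gp.append(cv2.pyrDown(gp[-1]))
        lp = [gp[-1]]                             # coarsest approximation
        for i in range(levels, 0, -1):
            up = cv2.pyrUp(gp[i], dstsize=(gp[i - 1].shape[1], gp[i - 1].shape[0]))
            lp.append(gp[i - 1] - up)             # detail band at this level
        return lp

    def fuse(vis: np.ndarray, ir: np.ndarray, levels: int = 4) -> np.ndarray:
        """Fuse two registered grayscale images of identical size."""
        lp_v = laplacian_pyramid(vis, levels)
        lp_i = laplacian_pyramid(ir, levels)
        fused = [(lp_v[0] + lp_i[0]) / 2]         # average the coarse approximations
        fused += [np.where(np.abs(a) >= np.abs(b), a, b)   # keep the stronger detail
                  for a, b in zip(lp_v[1:], lp_i[1:])]
        out = fused[0]
        for band in fused[1:]:                    # rebuild from coarse to fine
            out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
        return np.clip(out, 0, 255).astype(np.uint8)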

Enhanced Fusion Vision System

In poor visibility such as rain, snow, or fog, or in other adverse weather conditions, it is difficult for an aircraft pilot to land or take off. To handle this situation, and as an aid to the pilot, we have developed an Enhanced Fusion Vision System that combines two sensor sources, visible and infrared images of the scene obtained with CCD cameras, processed by an embedded fusion vision processor. The core function of the system is to enhance and fuse the sensor data to increase the information content and quality of the displayed image. These operations are performed in real time so the pilot can use the imagery while flying. The embedded vision processor runs image processing algorithms for noise pre-processing of the input images, image enhancement, registration, and image fusion. A hedged sketch of this pipeline follows; the processing logic, a sample enhanced input image, and the image fusion results are shown below.
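The snippet below sketches such a pipeline: each channel is enhanced, the infrared frame is registered to the visible frame with ECC alignment, and the two are fused by simple weighting. It assumes OpenCV and grayscale 8-bit inputs; the parameters and the affine motion model are illustrative assumptions, not the deployed EFVS processor.

    import cv2
    import numpy as np

    def enhance(gray: np.ndarray) -> np.ndarray:
        """Mild denoising followed by adaptive contrast enhancement."""
        gray = cv2.GaussianBlur(gray, (3, 3), 0)
        return cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)

    def register_ir_to_visible(vis: np.ndarray, ir: np.ndarray) -> np.ndarray:
        """Estimate an affine warp aligning the IR frame to the visible frame (ECC)."""
        warp = np.eye(2, 3, dtype=np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
        _, warp = cv2.findTransformECC(vis, ir, warp, cv2.MOTION_AFFINE, criteria, None, 5)
        return cv2.warpAffine(ir, warp, (vis.shape[1], vis.shape[0]),
                              flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

    def efvs_frame(vis: np.ndarray, ir: np.ndarray) -> np.ndarray:
        """Enhance, register and fuse one visible/IR frame pair."""
        vis_e, ir_e = enhance(vis), enhance(ir)
        ir_reg = register_ir_to_visible(vis_e, ir_e)
        # Equal-weight fusion; a pyramid rule (see the earlier sketch) could be used instead.
        return cv2.addWeighted(vis_e, 0.5, ir_reg, 0.5, 0)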

Enhanced Fusion Vision System by HCL Tech

 

Automatic Fuselage Vision Inspection

 

Inspection of aircraft components during assembly or production is a tedious and time-consuming process when performed by humans. Visual inspections are carried out block-wise or compartment-wise to check the assembly process or during parts integration, and repetitive inspection of production lines is a labor-intensive activity. Automated inspection systems built around vision sensors are not only cost-effective but also bring consistency of judgment and documented traceability to the inspection process. One of the main body sections of an aircraft is the fuselage, which accommodates the crew and passengers or cargo. Most fuselages are long cylindrical tubes or rectangular bodies, and all components of the aircraft attach to the fuselage. During assembly, debris collects around many parts such as gaskets, nuts, and rivets; if it is not removed or inspected properly, it can short-circuit lines and lead to disaster. One solution for automatic inspection is a machine vision camera-based system that acquires visual imagery of the components and inspects the parts automatically by pattern matching against prior images of the components in the fuselage area, reducing the cost of removing debris with faster turnaround.

In the past three years, significant progress has been made toward new systems that use remote electronic sensors and cameras for nondestructive inspection (NDI) of aircraft, and functionality has been demonstrated for autonomous operation scenarios. These advances have come primarily from the civilian sector, notably the ANDI (Automated Nondestructive Inspector) program. Currently, such inspections are carried out manually by highly trained aircraft maintenance personnel: an airplane is taken out of service, scaffolding and other means of access to all parts of the airplane's surface are arranged, safety harnesses and other safety gear are deployed, and a direct visual inspection is performed. This is one of the most complex, difficult, unreliable, time-consuming, and non-optimal approaches. HCL has developed expertise in nondestructive inspection and surface defect detection using machine vision cameras and image processing techniques. Defective parts are detected automatically using machine vision image processing technology. The system consists of a CCD camera and optics, a frame grabber, lighting, a part sensor, a PC, and inspection image processing software with hardware interfaces. The inspection software automatically detects defects in parts manufactured in a production process in real time; its functions include detecting defects such as rust and scratches, checking part presence or absence, and performing measurement and gauging studies. A minimal presence-check sketch follows, and a sample fuselage inspection of aircraft parts is shown below.
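As a simple illustration of the presence/absence function mentioned above, the sketch below checks whether a reference part appears in an inspection image using normalized template matching. It assumes OpenCV; the image paths and the match threshold are placeholders, not the deployed inspection software.

    import cv2

    def part_present(inspection_img, template, threshold: float = 0.8) -> bool:
        """Return True if the reference part template is found in the inspection image."""
        scores = cv2.matchTemplate(inspection_img, template, cv2.TM_CCOEFF_NORMED)
        _, max_score, _, _ = cv2.minMaxLoc(scores)
        return max_score >= threshold         # a low score suggests a missing or defective part

    # Placeholder image paths for illustration only.
    frame = cv2.imread("fuselage_section.png", cv2.IMREAD_GRAYSCALE)   # inspection image
    rivet = cv2.imread("rivet_template.png", cv2.IMREAD_GRAYSCALE)     # reference ("golden") part
    print("rivet present:", part_present(frame, rivet))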

Partial View of the Fuselage Interior

Automatic Fuselage Vision by HCL Tech

 

A machine vision computer system design is presented for automatic camera-based inspection of the aircraft fuselage, improving both the efficiency and effectiveness of the inspection process by incorporating visible and infrared range information. Critical inspection tasks under investigation include missing parts, bearing component wear, incipient failure of electrical systems, and identification of missing equipment. In addition, a process is designed to detect foreign objects underneath the fuselage of the aircraft.

 

We have developed image processing software consisting of algorithms for image enhancement, edge detection, filtering, geometric pattern matching, blob detection, part positioning, measuring, barcode reading, object recognition and flaw detection, gauging tools, and color tools. An illustrative use of two of these tools follows; sample processed results with the GUI are shown below.
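Two of the listed tools, edge detection and blob detection, can be illustrated with the short sketch below. It assumes OpenCV; the placeholder image path and detector parameters are example values, not production settings.

    import cv2

    img = cv2.imread("part_under_inspection.png", cv2.IMREAD_GRAYSCALE)  # placeholder image

    edges = cv2.Canny(img, 50, 150)              # edge map for contour and flaw analysis

    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 30                          # ignore specks smaller than ~30 pixels
    detector = cv2.SimpleBlobDetector_create(params)
    blobs = detector.detect(img)                 # candidate flaws / foreign objects
    print(f"{len(blobs)} blob(s) detected")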

 

Radiator Cap Inspection by HCL Tech
 

 

Satellite Image Processing

During the last decade, remote sensing applications of satellite imagery have been investigated through an 'experimental' approach: a number of imaging satellites have been launched and exploited by national and international space agencies to demonstrate the feasibility of remote sensing applications in fields such as cartography and resource or disaster monitoring. Image processing is a key technology for the operational exploitation of satellite images. Satellites provide huge amounts of data that, in principle, can be processed into very useful information in areas such as agriculture. Frequent types of analysis of these images are classification (e.g., to identify roads, urban areas, or types of cultivation), rectification, and clustering. The image size to be processed by remote sensing end users is typically 20-40 Mbytes per spectral band. Digital image processing involves implementing computer algorithms for the acquisition, management, enhancement, and processing of images in digital format; with the widespread development of computer technology, it has become the subject of many useful computer applications with remarkable technological impact.

Digital image processing includes the detection, perception, interpretation, and enhancement of targets within images. HCL has developed expertise in image processing techniques, building a set of tools such as histogram correction and equalization, convolution and morphological filtering, spectral processing, segmentation, description, and classification for image interpretation and analysis. For a given application, the user must choose the most suitable technique and apply it with appropriate parameters; these choices (the processing tool and the corresponding parameters) are mostly made on a trial-and-error basis. In particular, we have implemented several advanced image processing techniques such as the Fourier transform, geometric segmentation, and classification. We have also developed data fusion techniques that take the best attributes from multiple sensors and merge them into one product; the most common form is fusing a high spatial resolution panchromatic image with a set of lower spatial resolution spectral bands. A minimal pan-sharpening sketch follows, and sample results of satellite processing are shown below.
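The pan-sharpening form of fusion mentioned above can be sketched with a Brovey-style ratio, as below. It assumes OpenCV and NumPy, an 8-bit three-band multispectral image, and a co-registered panchromatic band at higher spatial resolution; these band and bit-depth assumptions are illustrative, not a specific satellite product.

    import cv2
    import numpy as np

    def brovey_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
        """Fuse a low-resolution 3-band multispectral image with a high-resolution pan band."""
        ms_up = cv2.resize(ms, (pan.shape[1], pan.shape[0]),
                           interpolation=cv2.INTER_CUBIC).astype(np.float32)
        pan = pan.astype(np.float32)
        intensity = ms_up.mean(axis=2) + 1e-6            # avoid division by zero
        ratio = (pan / intensity)[..., np.newaxis]       # per-pixel sharpening ratio
        return np.clip(ms_up * ratio, 0, 255).astype(np.uint8)   # assumes 8-bit data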

Satellite Image Processing by HCL Tech
