Enhanced Vision System
The Situation Today
Computer vision and image processing offer multifold opportunities for product development in the aerospace industry. Specific applications are built on core functions that extend across civilian and military domains. Using proprietary technology, HCL provides image processing solutions for unmanned aerial vehicles (UAVs), from real-time video capture to image exploitation on embedded hardware, for aerial surveillance and reconnaissance, enhanced fusion vision for situational awareness, and automatic vision inspection systems for rapid inspection of manufactured components. HCL also has expertise in processing high-resolution remote sensing imagery from satellites. These technologies translate into real-world applications that meet our clients' individual needs and provide innovative solutions.
How HCL Can Help
HCL offers the following solutions in the area of enhanced vision systems:
- Aerial Video Surveillance
- Embedded Vision Engine
- Multi-sensor Fusion
- Enhanced Fusion Vision System
- Automatic Fuselage Vision Inspection
- Satellite Image Processing
Aerial Video Surveillance
Airborne surveillance is widely used in a range of civilian and military applications, such as search and rescue, border security, resource exploration, wildfire and oil spill detection, and target tracking. The unmanned aerial vehicle (UAV) is equipped with special day/night sensors that image objects on the ground; the recognition (surveillance) task is either assigned to the crew in real time, or the image data is recorded and interpreted off-line. Pilotless sensor-carrying platforms transmit the data to a ground control station for analysis and interpretation.
HCL offers a full suite of real-time aerial video surveillance capabilities, from turning blurred, noisy footage into clear imagery to large-area visualization and video processing. We offer image processing technologies for real-time image acquisition and pre-processing, noise removal, and adaptive enhancement, to create a comprehensive panoramic display. HCL has real-time enhancement and geo-registration capabilities. These improve accuracy in search, moving-object tracking, region-of-interest processing, geo-location identification, image area measurement, and the creation of target image databases at various resolutions with feature details.
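As a minimal sketch of the pre-processing stage described above, the following Python/NumPy fragment denoises a frame with a box-filter mean and applies a percentile-based contrast stretch. The function names and parameters are illustrative assumptions, not HCL's actual APIs, and assume an 8-bit grayscale frame stored as a NumPy array.

```python
import numpy as np

def mean_filter(img, k=3):
    """Box-filter denoising via a sliding k x k mean (edges handled by padding)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def contrast_stretch(img, lo_pct=2, hi_pct=98):
    """Percentile-based (adaptive) contrast stretch to the full 0-255 range."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    stretched = np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return (stretched * 255).astype(np.uint8)

def preprocess(frame):
    """Denoise, then adaptively enhance, one surveillance frame."""
    return contrast_stretch(mean_filter(frame))
```

A real pipeline would add geo-registration and panoramic stitching on top of this per-frame step; those are omitted here.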
Embedded Vision Engine
HCL has designed and developed an embedded vision engine, the Ground Image Exploitation System (GIES), for an aeronautical agency. The system consists of multiple embedded vision processors on an industrial PC with a frame grabber, a graphics card for multiple displays, and GUI software built on image processing algorithms for aerial surveillance studies. Image exploitation is the acquisition and processing of sensory information about a scene or targets for aerial surveillance applications. It involves processing large volumes of image data acquired by multiple sensors, such as optical, infrared, and radar. For rapid information extraction and analysis, the image data must be processed using fast computational image processing algorithms on an efficient embedded processor. The image exploitation system has built-in tools to acquire, store, retrieve, process, analyze, interpret, and display information from imagery during a vehicle mission. The data captured by the camera is transferred to a workstation and analyzed to produce imagery information.
Imagery information takes the form of video clips, video frames and the corresponding flight data, the calculated locations of targets, and related information. The extraction and exploitation of imagery intelligence from aerial surveillance enhances understanding and interpretation of scene contents, allows the vehicle to see distant targets, and enhances surveillance capabilities. A snapshot of the image processing is given below.
Multi-sensor Fusion
Multi-sensor data fusion seeks to combine information from multiple sensors and sources to achieve inferences that are not feasible from any single sensor or source. Fusing information from sensors with different physical characteristics enhances our understanding of the surroundings and provides the basis for planning, decision-making, and control of autonomous and intelligent machines. Over the past decades it has been applied to fields such as pattern recognition, visual enhancement, object detection, and area surveillance. Image fusion is the process of combining images, obtained by sensors of different wavelengths simultaneously viewing the same scene, into a composite image. The composite image improves image content, making it easier for the user to detect, recognize, and identify targets and increasing situational awareness.
HCL's research activities focus on developing fusion algorithms that improve the information content of the composite imagery and make the system robust to variations in the scene, such as dust or smoke, and in environmental conditions, such as day or night. The fused image carries enhanced vision information that is more understandable and decipherable for human perception and, preferably, for machine learning and computer vision. One multi-sensor application developed for a civilian study is the Enhanced Fusion Vision System (EFVS).
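A simple pixel-level fusion scheme along these lines can be sketched as follows: a weighted average of two co-registered sensor images, together with a histogram-entropy measure often used as a crude proxy for information content. This is an illustrative sketch, not HCL's proprietary fusion algorithm, and it assumes 8-bit grayscale inputs of identical size.

```python
import numpy as np

def fuse_weighted(visible, infrared, w_vis=0.5):
    """Pixel-level weighted-average fusion of two co-registered 8-bit images."""
    vis = visible.astype(np.float64)
    ir = infrared.astype(np.float64)
    fused = w_vis * vis + (1.0 - w_vis) * ir
    return np.clip(fused, 0, 255).astype(np.uint8)

def entropy(img):
    """Shannon entropy of the 8-bit histogram, a crude information measure."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Comparing the entropy of the fused image against that of each input is one quick way to sanity-check whether a fusion rule is adding information or merely averaging it away.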
Enhanced Fusion Vision System
In poor visibility, such as rain, snow, or fog, or in other adverse weather, it is difficult for a pilot to land or take off. To handle this situation, and as an aid to the pilot, we have developed the Enhanced Fusion Vision System, which combines two sensor sources: visible and infrared images of the scene obtained using CCD cameras, processed by an enhanced embedded fusion vision processor. The core function of the system is to enhance and fuse the sensor data to increase the information content and quality of the displayed image. These operations are performed in real time for the pilot to use while flying. The embedded vision processor runs image processing algorithms for pre-processing the input image (noise removal), image enhancement, registration, and image fusion. The processing logic, a sample enhanced input image, and the image fusion results are given below.
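One common fusion rule consistent with this description selects, per pixel, whichever sensor shows the greater local contrast, so that IR detail survives fog in which the visible channel is flat. The NumPy sketch below is illustrative only; the deployed embedded processor presumably uses more sophisticated registration and fusion logic, and all names here are assumptions.

```python
import numpy as np

def local_variance(img, k=3):
    """Local variance over a k x k window, a simple per-pixel activity measure."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    s = np.zeros(img.shape)
    s2 = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            w = p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            s += w
            s2 += w * w
    n = k * k
    return s2 / n - (s / n) ** 2

def fuse_select(visible, infrared, k=3):
    """Per pixel, keep the sensor with the greater local contrast."""
    mask = local_variance(visible, k) >= local_variance(infrared, k)
    return np.where(mask, visible, infrared)
```

Ties default to the visible channel; a production rule would typically blend near-tie pixels instead of switching hard between sensors.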
Automatic Fuselage Vision Inspection
Inspecting aircraft components during assembly or production is a tedious and time-consuming process when performed by humans. Visual inspections of components are carried out block-wise or compartment-wise to check the assembly process or during parts integration, and repetitive inspection of production lines is a labor-intensive activity. Automated inspection systems built around vision sensors are not only cost-effective but also bring consistency of judgment and documented traceability to the inspection process. One of the aircraft's main body sections is the fuselage, which accommodates the crew and passengers or cargo. Most fuselages are long cylindrical tubes or rectangular bodies, and all components of the aircraft are attached to the fuselage. During assembly, debris collects around many parts, such as gaskets, nuts, and rivets; if it is not removed or inspected properly, it can short-circuit lines and lead to disaster. One solution for automatic inspection is a machine-vision camera-based system that acquires visual imagery of the components and inspects the parts automatically by pattern matching against prior images of the components in the fuselage area, reducing the cost of debris removal and speeding turnaround.
In the past three years, significant progress has been made toward new systems that use remote electronic sensors and cameras for nondestructive inspection (NDI) of aircraft. Functionality has been demonstrated for "autonomous" operation scenarios. These advances have been made primarily in the civilian sector, toward ANDI (Automated Nondestructive Inspector). Currently, these inspections are carried out by highly trained aircraft maintenance personnel in a straightforward manual manner: an airplane is taken out of service, scaffolding and other means of access to all parts of the airplane's surface are arranged, safety harnesses and other safety gear are deployed, and a direct visual inspection is done. This is in fact one of the most complex, difficult, unreliable, and time-consuming approaches. HCL has developed expertise in nondestructive inspection and surface defect detection using machine vision cameras and image processing techniques. Defective parts are detected automatically using machine vision image processing technology. The system consists of a CCD camera and optics, a frame grabber, lighting, a part sensor, a PC, and inspection image processing software with hardware interfaces. The inspection software automatically detects, in real time, defects in parts manufactured in a production process, such as rust, scratches, part presence/absence, and measurement gauging. A sample fuselage inspection of aircraft parts is given below.
Partial View of the Fuselage Interior
A machine vision computer system design is presented for automatic camera-based inspection of the aircraft fuselage, improving both the efficiency and effectiveness of the inspection process by incorporating visible and infrared range information. Critical inspection tasks to be investigated include missing parts, bearing component wear, incipient failure of electrical systems, and identification of missing equipment. In addition, a process is designed to detect foreign objects underneath the fuselage.
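The pattern-matching check for part presence/absence can be illustrated with normalized cross-correlation against a reference image of the part. This is a simplified sketch, not the production inspection software; the function names, the threshold value, and the brute-force search are all illustrative assumptions.

```python
import numpy as np

def ncc_match(image, template):
    """Normalized cross-correlation at every valid offset.

    Returns (best_score, (row, col)); a score of 1.0 is a perfect match.
    """
    H, W = image.shape
    h, w = template.shape
    t = template.astype(np.float64)
    t = t - t.mean()
    t_norm = np.sqrt((t * t).sum())
    best, best_pos = -1.0, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            win = image[y:y + h, x:x + w].astype(np.float64)
            win = win - win.mean()
            denom = np.sqrt((win * win).sum()) * t_norm
            if denom < 1e-9:          # flat window: no correlation defined
                continue
            score = float((win * t).sum() / denom)
            if score > best:
                best, best_pos = score, (y, x)
    return best, best_pos

def part_present(image, template, threshold=0.9):
    """Declare the part present if any window correlates above the threshold."""
    score, _ = ncc_match(image, template)
    return score >= threshold
```

Real systems use FFT-accelerated or geometric pattern matching rather than this exhaustive scan, but the decision rule is the same.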
We have developed image processing software consisting of algorithms for image enhancement, edge detection, filtering, geometric pattern matching, blob detection, part positioning, measuring, barcode reading, object recognition and flaw detection, gauging tools, and color tools. Sample processed results with the GUI are given below.
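Blob detection, one of the tools listed above, can be sketched as connected-component labeling on a binary mask: each labeled component's area and centroid can then feed positioning or flaw-detection logic. The sketch below is illustrative, not the shipped software.

```python
import numpy as np
from collections import deque

def find_blobs(binary, min_area=1):
    """4-connected component labeling on a boolean mask via BFS flood fill.

    Returns a list of (area, centroid_row, centroid_col), one per blob.
    """
    H, W = binary.shape
    visited = np.zeros(binary.shape, dtype=bool)
    blobs = []
    for sy in range(H):
        for sx in range(W):
            if not binary[sy, sx] or visited[sy, sx]:
                continue
            queue = deque([(sy, sx)])
            visited[sy, sx] = True
            pixels = []
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < H and 0 <= nx < W and binary[ny, nx] and not visited[ny, nx]:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            if len(pixels) >= min_area:
                ys, xs = zip(*pixels)
                blobs.append((len(pixels), sum(ys) / len(ys), sum(xs) / len(xs)))
    return blobs
```

The binary mask would come from an earlier thresholding or edge-detection stage; min_area filters out single-pixel noise.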
Satellite Image Processing
During the last decade, remote sensing applications of satellite imagery have been investigated through an 'experimental' approach: a number of imaging satellites have been launched and exploited by national and international space agencies to demonstrate the feasibility of remote sensing applications in cartography, resource and disaster monitoring, and related fields. Image processing is a key technology for the operational exploitation of satellite images. Satellites can provide huge amounts of data that, in principle, can be processed into very useful information in areas such as agriculture. Frequent types of analysis of these images are classification (e.g., identifying roads, urban areas, or types of cultivation), rectification, and clustering. The image size to be processed by remote sensing end users is typically 20-40 Mbytes per spectral band. Digital image processing involves implementing computer algorithms for the acquisition, management, enhancement, and processing of images in digital format. With the widespread development of computer technology, it has become the subject of many useful computer applications, with a remarkable technological impact.
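Unsupervised classification (clustering) of pixel spectra, one of the analyses mentioned above, can be illustrated with plain k-means. This is a sketch assuming the image has been flattened to an (N, bands) array of spectra; it is a stand-in for, not a description of, any operational land-cover classifier.

```python
import numpy as np

def kmeans_pixels(pixels, k=3, iters=20, seed=0):
    """Plain k-means on an (N, bands) array of pixel spectra.

    Returns (labels, centroids). Each label assigns a pixel to a spectral
    cluster, e.g. a candidate land-cover class.
    """
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(np.float64)
    for _ in range(iters):
        # Distance from every pixel to every centroid, then nearest assignment.
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned pixels.
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return labels, centroids
```

In practice the cluster count k and the interpretation of each cluster (road, urban, cultivation, and so on) are chosen by the analyst after inspecting the result.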
Digital image processing includes the detection, perception, interpretation, and enhancement of targets within images. HCL has developed expertise in image processing techniques, including tools such as histogram correction and equalization, convolution and morphological filtering, spectral processing, segmentation, description, and classification for image interpretation and analysis. Among these techniques, the user of a given application must choose the most appropriate and apply it with suitable parameters; these choices (the processing tool and its parameters) are mostly made by trial and error. In particular, we have implemented several advanced image processing techniques such as the Fourier transform, geometric segmentation, and classification. We have also developed data fusion techniques that take the best attributes from multiple sensors and merge them into one product. The most common form fuses a high-spatial-resolution panchromatic image with a set of lower-spatial-resolution spectral images. Sample results of satellite processing are given below.
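The panchromatic/multispectral fusion just described can be illustrated with a simple Brovey-style ratio transform, one standard pan-sharpening scheme. This sketch assumes the multispectral bands have already been resampled to the panchromatic grid; it is an illustration of the general technique, not HCL's specific fusion product.

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-transform pan-sharpening.

    Each multispectral band is rescaled by the ratio of the high-resolution
    panchromatic image to the mean multispectral intensity, injecting
    spatial detail while preserving band ratios (spectral colour).

    ms:  (bands, H, W) multispectral stack, resampled to pan's grid.
    pan: (H, W) panchromatic image.
    """
    ms = ms.astype(np.float64)
    intensity = ms.mean(axis=0)
    ratio = pan.astype(np.float64) / np.maximum(intensity, 1e-6)
    return ms * ratio[None, :, :]
```

Because the bands are all scaled by the same per-pixel ratio, their relative proportions, and hence the apparent colour, are unchanged; only the spatial detail of the panchromatic channel is injected.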