International Conference on Aerospace Science and Technology
26-28 June 2008, Bangalore, India
APPLICATION OF ENHANCED FUSION VISION SYSTEM IN AVIONICS
Karunakaran .P, M.S.Mohanan, R.M. Bhoopathy, Dr T.K.Sateesh
Aerospace Practice, HCL Technologies Limited, Bangalore -560068, India, Karunakaran.p@hcl.in
ABSTRACT:
In poor visibility and bad weather conditions such as rain, snow and fog, and also at night, it is difficult for the pilot to perform operations such as takeoff, landing and taxiing. Scene content in poor-visibility images is improved by fusing multi-sensor images (visible and IR), which generates and presents combined image information content as a pilot's aid and for improved situational awareness. Multi-sensor image fusion is the process of combining data and information from different sources to maximize the useful information content. The images to be fused can come from different sensors of the same basic type or from different types of sensors. Three image fusion algorithms (principal component analysis, wavelet transforms and pyramid techniques) were developed and tested. Principal component analysis was found to be the most useful, as it compresses the multi-sensor image data and yields optimal fusion weights. The algorithms were tested with simulated video images obtained from the visible and infra-red cameras of an aircraft approaching a runway at night, and a prototype enhanced flight vision system is demonstrated.
1. INTRODUCTION:
In avionics development, increased situational awareness and overall safety are the main concerns for pilots. Yet modern, well-equipped aircraft are still involved in fatal accidents, particularly in poor visibility or at night. In bad weather conditions such as rain, snow and fog, and at night, it is inherently difficult for the pilot to navigate, and airlines are forced to cancel or divert services because safe landing, takeoff and taxiing become cumbersome. Though automated instrument landing (CAT III) allows landing, it requires both the airport and the aircraft to be equipped with special equipment. Scene content in poor-visibility images can be improved using multi-sensor images with a fusion display system as a pilot's aid, which carries more information content owing to better detection capability. Such a system is useful for seeing terrain, obstacles and other aircraft in total darkness or in poor visibility due to weather, improving situational awareness [2, 3, 4]. Image fusion denotes a process of generating a single image which contains a more accurate description of the scene than any of the individual source images. The fused image should be more useful for human visual or machine perception [6]. Fusion may serve several objectives such as detection, recognition, identification, tracking, change detection and decision making. These objectives arise in many application domains such as defence, robotics, medicine, space, etc. Using an efficient fusion scheme, one may expect significant advantages such as:
• Improved confidence in decisions due to the use of complementary information (e.g. silhouettes of objects from the visible image, active/non-active status from the infra-red image, speed and range from radar, etc.)
• Improved robustness to countermeasures (it is very hard to camouflage an object in all possible wave-bands)
• Improved performance in adverse environmental conditions: typically, smoke or fog causes poor visible contrast, while some weather conditions (rain) cause low thermal contrast in infra-red imaging, so combining both types of sensors should give better overall performance.
A system architecture is described which involves algorithms for image enhancement, image registration, and image fusion logic.
2. IMAGE FUSION ARCHITECTURE
Image processing using image fusion allows us to combine the complementary information from each sensor into a single superior image for interpretation and analysis. This provides the basis for planning, decision-making, and control of autonomous/intelligent machines. The architecture is a first step towards guiding and facilitating the construction and evaluation of the fusion system. Simulation tools are used to automate the software development and component integration process. Image fusion processing consists of tools for image acquisition by cameras and algorithms for pre-processing, image registration and image fusion. A sample prototype image fusion architecture is given below (figure 1).
Figure 1 Image Fusion Architecture
2.1 Image Acquisition
The image acquisition module consists of the acquisition buffers, communication channels and the cameras. After initialization is complete, the acquisition module starts grabbing video image data from the two sensors and sends it to the image pre-processing module.
2.2 Image Pre-processing
The image pre-processing module consists of pre-processing functions such as noise removal and gamma correction, since real-world images obtained from multiple sensors are degraded by atmospheric distortions such as fog, smoke and rain. The image enhancement module performs enhancement either adaptively or selectively; the selective method overrides the adaptive method in choosing the enhancement algorithm.
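To make the gamma-correction step concrete, here is a minimal sketch in Python/NumPy. The paper does not give its pre-processing parameters, so the 8-bit grayscale input and the example gamma value are our assumptions.

```python
import numpy as np

def gamma_correct(image, gamma=0.5):
    """Gamma-correct an 8-bit grayscale frame.

    gamma < 1 brightens dark regions (useful for night imagery);
    gamma > 1 darkens. The default of 0.5 is illustrative only,
    not a value taken from the paper.
    """
    normalized = image.astype(np.float64) / 255.0
    corrected = np.power(normalized, gamma)
    return np.round(corrected * 255.0).astype(np.uint8)
```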
2.3 Image Registration
Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. It geometrically aligns two images of the same scene acquired by different sensors: the reference and sensed images [1, 5]. Differences between the images are introduced by the different imaging conditions. Image registration is a crucial step in all image analysis tasks whose aim is to integrate information from different source streams into a more complete and detailed scene representation, i.e. fusion of information from sensors with different characteristics. In our study, two camera images (one of visible sensor data and the other of infra-red sensor data) are taken over the same area. Registration is required to remove the field-of-view (FOV) differences between the cameras and to correct bore-sighting inaccuracies. The visible data is used as the baseline; the IR data is registered to the visible data by applying an affine transform to the IR image data.
A general representation of an affine transform is [y1, y2, 1] = [x1, x2, 1] * T, where

        | a11  a12  0 |
    T = | a21  a22  0 |                                   (1)
        | a31  a32  1 |

x1 and x2 reference the input coordinate system, y1 and y2 reference the output coordinate system, and the aij are transform coefficients. The mapping functions are

    y1 = a11*x1 + a21*x2 + a31  and
    y2 = a12*x1 + a22*x2 + a32.                           (2)
Prior to flight, a set of control points is selected based on corresponding features in sample images acquired at the same time from both cameras. The control points are analyzed using multiple linear regression to approximate the transform coefficients aij, and the resulting transform is applied to the IR image. The transformed IR image is then resampled using bilinear interpolation to align it to the same grid as the visible image.
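As an illustration of these two registration steps, the sketch below (Python/NumPy) fits the affine coefficients of Eq. (2) by least squares from control-point pairs and then resamples with bilinear interpolation. It is a minimal sketch under our own assumptions: the function names are hypothetical, and the transform used for resampling is fitted in the visible-to-IR direction so it can be applied directly as the inverse mapping.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares estimate (the 'multiple linear regression' step) of
    the affine coefficients a_ij of Eq. (2). src_pts and dst_pts are
    (N, 2) arrays of corresponding (x, y) control points, N >= 3."""
    n = src_pts.shape[0]
    X = np.hstack([src_pts, np.ones((n, 1))])       # rows [x1, x2, 1]
    A, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)
    return A            # (3, 2): [[a11, a12], [a21, a22], [a31, a32]]

def warp_bilinear(ir_image, A, out_shape):
    """Resample the IR image onto the visible grid with bilinear
    interpolation. A maps visible (output) coordinates to IR (input)
    coordinates, i.e. it was fitted with the visible points as src_pts."""
    h, w = out_shape
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([xx.ravel(), yy.ravel(), np.ones(h * w)], axis=1)
    src = coords @ A                                # (N, 2) IR coordinates
    x = np.clip(src[:, 0], 0.0, ir_image.shape[1] - 1.0)
    y = np.clip(src[:, 1], 0.0, ir_image.shape[0] - 1.0)
    x0 = np.minimum(np.floor(x).astype(int), ir_image.shape[1] - 2)
    y0 = np.minimum(np.floor(y).astype(int), ir_image.shape[0] - 2)
    fx, fy = x - x0, y - y0                         # fractional offsets
    img = ir_image.astype(np.float64)
    out = (img[y0, x0] * (1 - fx) * (1 - fy) +
           img[y0, x0 + 1] * fx * (1 - fy) +
           img[y0 + 1, x0] * (1 - fx) * fy +
           img[y0 + 1, x0 + 1] * fx * fy)
    return out.reshape(h, w)
```

With five control points, as used in Section 4, the system of Eq. (2) is over-determined and the least-squares fit averages out localization error in the point picks.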
2.4 Image Fusion
Image fusion is done by taking a linear weighted average of the two source images:

    F(x, y) = wA * A(x, y) + wB * B(x, y),                (3)

where wA and wB are scalar weights. The pixel-averaging method is easily implemented, fast to execute, and has the advantage of suppressing noise present in the source imagery. The weights are arrived at by selecting 'optimal' weights using the eigenvalues in principal component analysis. Principal component analysis (PCA) is a popular approach to finding weights that maximize the intensity variance in the fused image (subject to a constraint on the size of the weights). The first step is to calculate the covariance matrix C of the intensities of the two images. The eigenvalues of C are then found by solving the characteristic equation det(C − λI) = 0, where I is the identity matrix. The 'optimal' weights wA, wB are the elements of the normalized eigenvector corresponding to the largest eigenvalue (with wA + wB = 1 being the usual choice of normalization constraint). In general, the fused image will have a different dynamic range than the source images, so a linear re-scaling is done. In terms of performance, the PCA method is easy to implement and suitable for on-board applications.
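A minimal Python/NumPy sketch of this PCA weighting is given below. The absolute value taken on the eigenvector (to guard against its sign ambiguity) and the min-max re-scaling back to 8 bits are our own implementation choices; the paper specifies only the covariance/eigenvalue construction and a linear re-scaling.

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Fuse two co-registered grayscale images by the weighted average
    of Eq. (3), with weights from the principal eigenvector of the 2x2
    covariance matrix of pixel intensities."""
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    C = np.cov(np.stack([a, b]))        # 2x2 covariance matrix
    _, eigvecs = np.linalg.eigh(C)      # eigenvalues in ascending order
    v = np.abs(eigvecs[:, -1])          # principal eigenvector, sign-fixed
    w = v / v.sum()                     # normalize so wA + wB = 1
    fused = (w[0] * a + w[1] * b).reshape(img_a.shape)
    # The fused dynamic range differs from the sources; re-scale
    # linearly back to the 8-bit display range.
    fused = (fused - fused.min()) / (np.ptp(fused) + 1e-12) * 255.0
    return fused.astype(np.uint8), (w[0], w[1])
```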
3. SIMULATION OF ENHANCED FLIGHT VISION SYSTEM
In an aircraft, the head-up display (HUD) is often used by the pilot to view the terrain and airport environment during landing and to recognize runways, landing lights, etc. In poor visibility or adverse weather conditions, images acquired by a camera are blurred and objects on the ground are not clearly visible. Enhanced flight fusion vision (EFFV) is a display system in which two sensor images are fused using image fusion algorithms and the resultant image is displayed on the HUD or a cockpit display. This facility is of great use to the pilot for enhanced visual perception of the runway during aircraft landing/takeoff and for situational awareness.
MATLAB and Simulink are used as the simulation tools: two video streams of visible and infra-red sensor image data are acquired and processed. The image data passes through the various modules (pre-processing, registration, image fusion and image display) as given below (figure 2).
Figure 2 Image Fusion Simulations
4. RESULTS & DISCUSSION
The input to the software is two streams of video images captured by visible CCD and infra-red cameras. These are first pre-processed by image processing algorithms to remove noise, and the images are enhanced using gamma correction and an image enhancement algorithm. Initially, five ground control points are selected from both images and passed to the image registration module, which generates the warping coefficients and resamples the IR image to the visible image for co-registration. Infra-red sensor images are very useful as they can detect fractional differences in the temperatures of distant objects, almost regardless of intervening fog or precipitation, especially in the long-wave region. The enhanced and registered images were fused using three fusion algorithms: principal component analysis, wavelet transform and pyramid algorithms. The simple averaging technique is used for image fusion, with optimal weights determined using principal component analysis, since the computational load of the other methods is large. The advantage of this technique is that the data is compressed and its dimension is reduced without loss of information. The optimal weights are derived from the eigenvalues of the covariance matrix. The eigenvectors carry information about the patterns among the given variables; the eigenvector associated with the highest eigenvalue represents the direction in the data with the strongest pattern or relationship in the original data set. Further, the image information content (entropy) of the fused image obtained with the principal component technique gives the best result (a minimal sketch of this entropy measure is given after figure 3). Two video images from the visible and IR sensors, taken at night of an aircraft runway, and the resulting fused image are given in figure 3.
Figure 3 Image fusion of two-sensor data
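The entropy comparison mentioned above can be computed as in the sketch below. This assumes an 8-bit grayscale fused output and the standard Shannon entropy over the grey-level histogram; the paper does not state its exact entropy formula.

```python
import numpy as np

def image_entropy(image, bins=256):
    """Shannon entropy (bits per pixel) of an 8-bit grayscale image,
    used here as the information-content measure for comparing
    fusion results."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                 # drop empty bins (0 * log 0 -> 0)
    return float(-np.sum(p * np.log2(p)))
```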
5. CONCLUSIONS
In this paper, we have demonstrated multi-sensor image fusion by fusing video streams of visible and infra-red monochrome sensor images using PCA-based fusion logic for the development of a prototype enhanced fusion vision system. The PCA algorithm is used for image fusion in preference to other techniques such as wavelet transforms and pyramid techniques because it is cost effective and easy to run in a real-time environment. Efforts at integrating the enhanced fusion vision system with synthetic vision (computer-generated imagery of external scene topography and terrain information) and well-equipped satellite navigation systems will improve the positional accuracy required for landing. A further challenge in the development of an enhanced flight vision system is real-time video registration of multi-sensor images and the dynamic warping algorithms which determine the positional accuracy. Research is in progress to develop an integrated enhanced vision system with additional features such as obstacle detection, runway object classification and an airport database with synthetic features, which may be a useful aid to the pilot for aircraft landing in poor visibility.
REFERENCES
[1] D. I. Barnea and H. F. Silverman, "A class of algorithms for fast digital registration," IEEE Trans. Computers, vol. C-21, pp. 179-186, 1972.
[2] G. D. Hines, Z. Rahman, D. J. Jobson, and G. A. Woodell, "Real-time enhancement, registration, and fusion for a multi-sensor enhanced vision system," Proceedings of SPIE 6226, 2006.
[3] G. D. Hines, Z. Rahman, D. J. Jobson, G. A. Woodell, and S. D. Harrah, "Real-time enhanced vision system," in Enhanced and Synthetic Vision, Proceedings of SPIE 5802, J. G. Verly, ed., March 2005.
[4] G. D. Hines, Z. Rahman, D. J. Jobson, and G. A. Woodell, "Multi-sensor image registration for an enhanced vision system," in Visual Information Processing XII, Proceedings of SPIE 5108, 2003.
[5] L. G. Brown, "A survey of image registration techniques," ACM Computing Surveys, 24(4):325-376, December 1992.
[6] C. L. M. Tiana, J. R. Kerr, and S. D. Harrah, "Multispectral uncooled infrared enhanced vision system for flight test," Proceedings of SPIE 4363, April 2000.