ACEEE Int. J. on Information Technology, Vol. 02, No. 01, March 2012



            Feature Tracking of Objects in Underwater Video Sequences

                      Prabhakar C J & Praveen Kumar P U
           Department of P.G. Studies and Research in Computer Science
          Kuvempu University, Shankaraghatta - 577451, Karnataka, India
                  psajjan@yahoo.com, praveen577302@gmail.com


Abstract—Feature tracking is a key, underlying component in many approaches to 3D reconstruction, detection, localization and recognition of underwater objects. In this paper, we propose to adapt the SIFT technique for feature tracking in underwater video sequences. Over the past few years, underwater vision has been attracting researchers to investigate suitable feature tracking techniques for underwater applications. Researchers have developed many feature tracking techniques, such as KLT, SIFT and SURF, to track features in video sequences for general applications. The literature survey reveals that there is no standard feature tracker suitable for the underwater environment. We propose to adapt the SIFT technique for tracking features of objects in underwater video sequences. SIFT extracts features that are invariant to scale, rotation and affine transformations. We have compared and evaluated SIFT with popular techniques such as KLT and SURF on a captured video sequence of underwater objects. The experimental results show that the adapted SIFT works well for underwater video sequences.

Index Terms—Feature tracking, SIFT, 3D reconstruction, Underwater Video

                         I. INTRODUCTION

    Feature tracking is an important step in 3D surface reconstruction of underwater objects using video sequences. Researchers have developed many techniques for feature tracking, as many algorithms rely on accurate computation of correspondences through a sequence of images. These techniques work well for clean, out-of-water images; however, when imaging underwater, even an image of the same object can be drastically different due to varying water conditions. Moreover, many parameters can modify the optical properties of the water, and underwater image sequences show large spatial and temporal variations. As a result, descriptors of the same point on an object may be completely different between underwater video images taken under varying imaging conditions. This makes feature tracking between such images a very challenging problem.
    The 3D reconstruction of underwater objects such as underwater mines, ship wreckage, coral reefs, pipes and telecommunication cables is an essential requirement for the recognition and localization of these objects using video sequences. Feature tracking is applied to recover the 3D structure and motion of objects in a video sequence. To recover the 3D structure and motion, it is required to find the same features in different frames and match them. The efficiency of feature tracking techniques depends upon the invariant nature of the extracted features. Further, the extracted distinctive invariant features can be used to perform reliable matching between different views of an object or scene. The extracted features should be invariant to image scale and rotation, and should provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. In the underwater environment, extracting invariant features from a sequence of images is challenging due to the optical properties of the water, which vary the appearance of the same feature across the sequence.
    3D reconstruction for underwater applications is a relatively recent research area with a higher complexity than 3D reconstruction for general applications. It is a very active and growing field of research that brings together challenging problems from the underwater environment and powerful 3D reconstruction techniques from computer vision. The many articles published during the last years on feature tracking from video sequences for 3D reconstruction of underwater objects demonstrate the increasing interest in this topic and its wide range of applications. E. Trucco et al. [1] have developed a robust tracker based on an efficient outlier rejection rule suitable for subsea video sequences. They extended the Shi-Tomasi-Kanade tracker in several ways: first, by introducing an automatic scheme for rejecting bad features, using a simple, efficient, model-free outlier rejection rule called X84; second, by developing a system that can run for an indefinite length of time, with the tracker running in real time on non-dedicated hardware. The Shi-Tomasi-Kanade [10][9] feature tracker is based on Sum-of-Squared-Difference (SSD) matching and assumes affine frame-to-frame warping. The system tracks small image regions and classifies them as good (reliable) or bad (unreliable) according to the residual of the match between the associated image regions in the first and current frames. K. Plakas et al. [2] have adapted the Shi-Tomasi-Kanade tracker for 3D shape reconstruction from uncalibrated underwater video sequences.
    Anne Sedlazeck et al. [3] have adapted the KLT (Kanade-Lucas-Tomasi) feature tracker to reconstruct the 3D surface of a ship wreck using an underwater monocular video sequence.


    Andrew Hogue and Michael Jenkin [4] also adapted KLT as a feature tracker for 3D shape reconstruction of coral reefs using stereo image sequences. The KLT tracker is a good solution to the tracking problem in the case where the input image sequence consists of dense video, i.e. the displacement between two consecutive images is very small. The main drawback of KLT is that it cannot be applied to stereo image sequences with a large baseline. The 3D reconstruction of coral reefs is developed by V. Brandou et al. [5] using the Scale-Invariant Feature Transform (SIFT) technique [6] for extracting and matching features in stereo video sequences. They have captured coral reefs using two video cameras, which are aligned to capture stereo video sequences. P. Jasiobedzki et al. [8] have adapted the SIFT feature tracker for extracting features from images and recognizing the object in subsequent images.
    In this paper, we propose to adapt the SIFT method for extraction of features from underwater monocular video sequences. SIFT is a computer vision algorithm to detect and describe local features in images. The SIFT method is very suitable in the case where the interest points are invariant to image scaling and rotation, and partially invariant to changes in illumination and 3D camera viewpoint. For every keypoint a descriptor is defined, which is a vector with 128 dimensions based on local image gradient directions in the keypoint neighborhood. SIFT is more robust than other feature tracking techniques: even for images with large translation and rotation it can still track the features.
    The remaining sections of the paper are organized as follows: section II describes the adapted SIFT feature tracker in detail. The experimental results are presented in section III. Finally, section IV concludes the paper.

                              II. SIFT

    This algorithm was first proposed by David Lowe in 1999, and then further developed and improved [7]. SIFT features have many advantages: they are natural features of images; they are favorably invariant to image translation, scaling, rotation, illumination, viewpoint, noise, etc.; they are distinctive and rich in information, suitable for fast and exact matching against a large feature database; and they can be computed relatively fast. The extraction of SIFT features from video sequences is done by sequentially applying the following steps: scale-space extrema detection, keypoint localization, orientation assignment and, finally, generation of the keypoint descriptors.

A. Scale-space extrema detection
    The first stage of computation searches over all scales and image locations. It is implemented efficiently by means of a Difference-of-Gaussian function to identify potential interest points that are invariant to orientation and scale. Interest points for SIFT features correspond to local extrema of Difference-of-Gaussian filters at different scales. A Gaussian-blurred image is described by the formula

    L(x, y, σ) = G(x, y, σ) * I(x, y),                                    (1)

where * is the convolution operation in x and y, and

    G(x, y, σ) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²)).

To efficiently detect stable keypoint locations in scale space, extrema are sought in the difference-of-Gaussian function convolved with the image, D(x, y, σ), which can be computed from the difference of two nearby scales separated by a constant multiplicative factor k:

    D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y)
               = L(x, y, kσ) − L(x, y, σ),                                (2)

which is simply the difference of the Gaussian-blurred images at scales σ and kσ.
    The first step toward the detection of interest points is therefore the convolution of the image with Gaussian filters at different scales, and the generation of difference-of-Gaussian images from the difference of adjacent blurred images. The blurred images are grouped by octave (an octave corresponds to doubling the value of σ), and the value of k is selected so that we obtain a fixed number of blurred images per octave. This also ensures that we obtain the same number of Difference-of-Gaussian images per octave (Fig. 1).

    Figure 1. The blurred images at different scales, and the computation of the difference-of-Gaussian images

    Interest points (called keypoints in the SIFT framework) are identified as local maxima or minima of the DoG images across scales. Each pixel in the DoG images is compared to its 8 neighbors at the same scale, plus the 9 corresponding neighbors in each of the two neighboring scales (26 neighbors in total). If the pixel is a local maximum or minimum, it is selected as a candidate keypoint (Fig. 2). For each candidate keypoint:
1) Interpolation of nearby data is used to accurately determine its position;
2) Keypoints with low contrast are removed;
3) Responses along edges are eliminated;
4) The keypoint is assigned an orientation.
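As a concrete illustration of Eqs. (1) and (2), the short Python sketch below builds the Gaussian-blurred images L(x, y, σ) of one octave and their pairwise differences D(x, y, σ) with OpenCV. The base scale σ = 1.6, the number of intervals s = 3 and the input file name are illustrative assumptions rather than values taken from the paper.

import cv2
import numpy as np

def dog_octave(image, sigma=1.6, s=3):
    """Return the difference-of-Gaussian images of one octave (Eqs. (1)-(2))."""
    k = 2.0 ** (1.0 / s)                        # constant multiplicative factor k
    blurred = []
    for i in range(s + 3):                      # s + 3 blurred images per octave
        sig = sigma * (k ** i)
        # L(x, y, sigma) = G(x, y, sigma) * I(x, y)   -- Eq. (1)
        blurred.append(cv2.GaussianBlur(image, (0, 0), sigmaX=sig, sigmaY=sig))
    # D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)   -- Eq. (2)
    return [b2 - b1 for b1, b2 in zip(blurred[:-1], blurred[1:])]

frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
dog = dog_octave(frame)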
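The candidate-keypoint test described above, in which a pixel is compared against its 26 neighbors in the 3 x 3 x 3 neighborhood spanning the adjacent DoG scales, can be sketched as follows; this is a simplified illustration, not the authors' implementation.

import numpy as np

def is_scale_space_extremum(dog, s, y, x):
    """dog: DoG images of one octave; (s, y, x): scale index and pixel location."""
    value = dog[s][y, x]
    # 3 x 3 x 3 cube around the pixel, spanning scales s-1, s and s+1
    cube = np.stack([d[y - 1:y + 2, x - 1:x + 2] for d in dog[s - 1:s + 2]])
    if value > 0:
        return value >= cube.max()              # local maximum
    return value <= cube.min()                  # local minimum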




    Figure 2. Local extrema detection: the pixel marked X is compared against its 26 neighbors in a 3 × 3 × 3 neighborhood that spans adjacent DoG images

To determine the keypoint orientation, a gradient orientation histogram is computed in the neighborhood of the keypoint (using the Gaussian image at the scale closest to the keypoint's scale). The contribution of each neighboring pixel is weighted by the gradient magnitude and by a Gaussian window with a σ that is 1.5 times the scale of the keypoint. Peaks in the histogram correspond to dominant orientations. A separate keypoint is created for the direction corresponding to the histogram maximum, and for any other direction within 80% of the maximum value. All the properties of the keypoint are measured relative to the keypoint orientation; this provides invariance to rotation.

B. Locating Keypoints
    At each candidate location, a detailed model is fit to determine scale and location. Keypoints are selected on the basis of measures of their stability. Fig. 3 shows the whole process of finding and describing the SIFT feature points.

    Figure 3. The diagram of the keypoint location process

To find and describe the SIFT feature points, the following steps are applied:
1) Input an image with intensity values in the range [0, 1].
2) Use a variable-scale Gaussian kernel G(x, y, σ) to create the scale space L(x, y, σ).
3) Calculate the Difference-of-Gaussian function as an approximation to the scale-normalized Laplacian, which is invariant to scale change.
4) Find the maxima or minima of the Difference-of-Gaussian function by comparing each pixel to its 3 × 3 neighborhoods at the current, above and below scales.
5) Refine the keypoint locations and discard points whose contrast is below a predetermined value, using

    D(X̂) = D + (1/2) (∂D/∂X)ᵀ X̂,                                        (3)

where X̂ is calculated by setting the derivative of D(x, y, σ) to zero.
6) The extrema of the Difference-of-Gaussian function have large principal curvatures along edges; such edge responses can be rejected by checking

    Tr(H)² / Det(H) < (r + 1)² / r,                                      (4)

where H in (4) is a 2 × 2 Hessian matrix and r is the ratio between the largest eigenvalue and the smallest one.
7) To achieve invariance to rotation, the gradient magnitude m(x, y) and orientation θ(x, y) are pre-computed with the following equations:

    m(x, y) = [(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]^(1/2)    (5)

    θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))       (6)

8) Take a feature point and the 16 × 16 neighborhood around it, divide it into 4 × 4 subregions, and compute for every subregion a histogram with 8 bins.

C. SIFT feature representation
    Once a keypoint orientation has been selected, the feature descriptor is computed as a set of orientation histograms over 4 × 4 pixel neighborhoods. The orientation histograms are relative to the keypoint orientation, and the orientation data comes from the Gaussian image closest in scale to the keypoint's scale. The contribution of each pixel is weighted by the gradient magnitude, and by a Gaussian with σ equal to 1.5 times the scale of the keypoint. Histograms contain 8 bins each, and each descriptor contains a 4 × 4 array of histograms around the keypoint (Fig. 4). This leads to a SIFT feature vector with 4 × 4 × 8 = 128 elements. This vector is normalized to enhance invariance to changes in illumination.
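A minimal sketch of the edge-rejection test in Eq. (4) above, with the 2 x 2 Hessian of the DoG image approximated by finite differences; the threshold r = 10 is a commonly used value and is assumed here, not taken from the paper.

def passes_edge_test(D, y, x, r=10.0):
    """D: a DoG image (2D NumPy array); returns False for edge-like keypoints."""
    dxx = D[y, x + 1] + D[y, x - 1] - 2.0 * D[y, x]
    dyy = D[y + 1, x] + D[y - 1, x] - 2.0 * D[y, x]
    dxy = (D[y + 1, x + 1] - D[y + 1, x - 1]
           - D[y - 1, x + 1] + D[y - 1, x - 1]) / 4.0
    tr = dxx + dyy                              # Tr(H)
    det = dxx * dyy - dxy * dxy                 # Det(H)
    return det > 0 and (tr * tr) / det < ((r + 1.0) ** 2) / r   # Eq. (4)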

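Eqs. (5) and (6) translate directly into code. The sketch below computes the gradient magnitude and orientation at one pixel of a Gaussian-blurred image L, stored as a NumPy array indexed as L[y, x].

import numpy as np

def gradient(L, y, x):
    dx = L[y, x + 1] - L[y, x - 1]              # L(x+1, y) - L(x-1, y)
    dy = L[y + 1, x] - L[y - 1, x]              # L(x, y+1) - L(x, y-1)
    m = np.sqrt(dx ** 2 + dy ** 2)              # Eq. (5)
    theta = np.arctan2(dy, dx)                  # Eq. (6), in radians
    return m, theta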
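The 4 x 4 x 8 = 128 descriptor layout of subsection C can be sketched as follows. For brevity the sketch omits rotation of the patch to the keypoint orientation, the Gaussian weighting window and interpolation between bins, so it illustrates only the histogram layout and the final normalization, not a complete SIFT descriptor.

import numpy as np

def descriptor(mag, ori):
    """mag, ori: 16x16 arrays of gradient magnitude and orientation (radians)."""
    hist = np.zeros((4, 4, 8), dtype=np.float32)
    for y in range(16):
        for x in range(16):
            b = int(((ori[y, x] % (2 * np.pi)) / (2 * np.pi)) * 8) % 8
            hist[y // 4, x // 4, b] += mag[y, x]    # 4x4 subregions, 8 bins each
    vec = hist.ravel()                              # 4 * 4 * 8 = 128 elements
    return vec / (np.linalg.norm(vec) + 1e-7)       # normalize for illumination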


    Figure 4. The keypoint descriptor is generated and weighted by a Gaussian window (blue circle)

D. Orientation assignment
    Direction parameters of the keypoints are determined in order to quantize the description. Lowe formulated this determination with the norm and the angle in Euclidean space, the direction of the keypoint being used to normalize the gradient directions around the keypoint in the following step. After a rotation of the image, the same required directions can therefore still be recovered.

E. Keypoint matching
    The next step is to apply the SIFT method to video frame sequences for object tracking. SIFT features are extracted from the input video frames and stored by their keypoint descriptors. Each keypoint is assigned 4 parameters: the 2D location (x coordinate and y coordinate), the orientation and the scale. Each object is tracked in a new video frame by separately comparing each feature point found in the new frame to those on the target object. The Euclidean distance is introduced as a similarity measurement between feature descriptors. A candidate match is preserved when the Euclidean distance between the two features is smaller than a previously specified threshold. The best matches can then be picked out from the parameter values, in other words, from the consistency of their location, orientation and scale. Each cluster of three or more features that agree on an object and its pose is then subjected to further detailed model verification, and subsequently outliers are thrown away. Finally, the probability that a particular set of features indicates the presence of an object is computed, considering the accuracy of fit and the number of probable false matches. Object matches that pass all of the above tests can be recognized as correct with high confidence.

                     III. EXPERIMENTAL RESULTS

    The adapted SIFT approach is compared with other popular feature tracking techniques, such as KLT [9][10] and SURF [12], for tracking features of objects in an underwater video sequence. The comparison is based on the evaluation of the number of feature points matched in a pair of successive frames. The video sequence of underwater objects is captured in a small water body in natural light. The scene includes several objects at distances in the range [1 m, 2 m], near the corner of the water body. The monocular video sequence of underwater objects is captured using a Canon D10 waterproof camera at a depth of 2 m from the surface level of the water. The captured monocular underwater video sequence is degraded due to the optical properties of light in the underwater environment. The captured video images suffer from non-uniform illumination, low contrast, blurring and diminished colors. Fig. 5 shows one pair of video frames suffering from these effects.

    Figure 5. Successive video frames of the captured underwater video sequence

These effects are treated by sequentially applying homomorphic filtering, wavelet denoising and an anisotropic filtering technique [11]. Fig. 6 shows the result of this preprocessing on a pair of video frames.

    Figure 6. The result of pre-processing on diminished video images

The proposed SIFT technique is employed on the preprocessed pair of video frames to track the same features. Fig. 7 shows the features extracted using the SIFT method. The features extracted from each frame are stored as feature descriptors; the Euclidean distance is employed as a similarity measurement between feature descriptors, using a suitable threshold found empirically. Finally, the features are matched based on the Best-Bin-First nearest neighbor approach. Fig. 8 shows the matched features between a pair of corresponding consecutive images. Table I gives the comparison of matched feature points obtained using SIFT, KLT and SURF. From Table I we can see that the SIFT method performs best compared to KLT and SURF, because the number of matched feature points is 419 using SIFT, compared to 397 and 403 matched feature points obtained using KLT and SURF, respectively. This is due to the fact that SIFT extracts the most invariant features from the video sequence, in which the disparity between the frames is very high. The underwater video sequence has the property that the lighting variations between frames are very high due to the optical properties of the water. Therefore, SIFT is the most suitable feature tracker for underwater video sequences.
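As a rough illustration of this evaluation protocol, the sketch below uses OpenCV's SIFT implementation (cv2.SIFT_create, available in recent OpenCV releases) to extract and match features between two consecutive, already preprocessed frames and to count the matches. It uses a brute-force Euclidean-distance matcher with a nearest-neighbor ratio test rather than the Best-Bin-First search, and the file names and the 0.75 ratio are illustrative assumptions.

import cv2

def count_sift_matches(path1, path2, ratio=0.75):
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)        # Euclidean distance
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return len(good)                            # number of matched feature points

print(count_sift_matches("frame_010.png", "frame_011.png"))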
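For the KLT side of the comparison, a comparable count could be obtained with OpenCV's pyramidal Lucas-Kanade tracker, sketched below; the corner-detection and tracking parameters are illustrative, since the paper does not report the settings used.

import cv2

def count_klt_tracks(path1, path2):
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)
    pts = cv2.goodFeaturesToTrack(img1, maxCorners=1000,
                                  qualityLevel=0.01, minDistance=5)
    nxt, status, err = cv2.calcOpticalFlowPyrLK(img1, img2, pts, None,
                                                winSize=(21, 21), maxLevel=3)
    return int(status.sum())                    # number of features tracked

print(count_klt_tracks("frame_010.png", "frame_011.png"))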


    Figure 7. Extracted features using SIFT on a pair of successive video frames

    Figure 8. The result of SIFT feature matching on a pair of video frames

    TABLE I. COMPARISON OF FEATURE TRACKING TECHNIQUES BASED ON NUMBER OF FEATURES MATCHED

    Technique    Number of matched feature points
    KLT          397
    SURF         403
    SIFT         419

                          IV. CONCLUSIONS

    In this paper, we proposed to adapt the SIFT technique as a feature tracker in video sequences of underwater objects for the purpose of 3D surface reconstruction. Since there is no benchmark video data available for comparing the proposed technique with other popular techniques, we have conducted experiments using a captured video sequence obtained from a monocular video camera. To evaluate the efficacy of the proposed method, we have compared it with popular trackers such as the KLT and SURF techniques, based on the number of feature points matched. The experimental results show that the SIFT method gives comparably better performance than the KLT and SURF trackers. SIFT features are invariant to rotation, scale changes and affine transformations. KLT is fast and copes well with illumination changes, but is not suitable for affine transformations. SURF is fast and has performance close to that of SIFT, but it is not stable to rotation and illumination changes, which are common in the underwater environment.

                          ACKNOWLEDGMENT

    This work has been supported by the Grant NRB/SC/158/2008-2009, Naval Research Board, DRDO, New Delhi, India.

                            REFERENCES

[1] E. Trucco, Y. R. Petillot, I. Tena Ruiz, K. Plakas, and D. M. Lane, "Feature Tracking in Video and Sonar Subsea Sequences with Applications", Computer Vision and Image Understanding, vol. 79, pp. 92-122, February 2000.
[2] K. Plakas, E. Trucco, and A. Fusiello, "Uncalibrated vision for 3-D underwater applications", In Proceedings of OCEANS '98, vol. 1, pp. 272-276, October 1998.
[3] Anne Sedlazeck, Kevin Koser, and Reinhard Koch, "3D Reconstruction Based on Underwater Video from ROV Kiel 6000 Considering Underwater Imaging Conditions", OCEANS 2009 - EUROPE, pp. 1-10, 2009.
[4] Andrew Hogue and Michael Jenkin, "Development of an Underwater Vision Sensor for 3D Reef Mapping", In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5351-5356, October 2006.
[5] V. Brandou, A. G. Allais, M. Perrier, E. Malis, P. Rives, J. Sarrazin, and P. M. Sarradin, "3D Reconstruction of Natural Underwater Scenes Using the Stereovision System IRIS", OCEANS 2007 - Europe, pp. 1-6, 2007.
[6] David G. Lowe, "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, vol. 60(2), pp. 91-110, 2004.
[7] David G. Lowe, "Object recognition from local scale-invariant features", Proceedings of the International Conference on Computer Vision, vol. 2, pp. 1150-1157, 1999.
[8] Piotr Jasiobedzki, Stephen Se, Michel Bondy, and Roy Jakola, "Underwater 3D mapping and pose estimation for ROV operations", OCEANS 2008, pp. 1-6, September 2008.
[9] C. Tomasi and T. Kanade, "Detection and tracking of point features", Technical Report CMU-CS-91-132, Carnegie Mellon University, Pittsburgh, PA, April 1991.
[10] J. Shi and C. Tomasi, "Good features to track", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 593-600, June 1994.
[11] Prabhakar C. J. and Praveen Kumar P. U., "Underwater image denoising using adaptive wavelet subband thresholding", In Proceedings of the IEEE International Conference on Signal and Image Processing (ICSIP), pp. 322-327, December 2010.
[12] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool, "SURF: Speeded Up Robust Features", Computer Vision and Image Understanding, vol. 110(3), pp. 346-359, 2008.
© 2012 ACEEE. DOI: 01.IJIT.02.01.529

Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionDilum Bandara
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 

Último (20)

Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 

Feature Tracking of Objects in Underwater Video Sequences

I. INTRODUCTION

Feature tracking is an important step in 3D surface reconstruction of underwater objects using video sequences. Researchers have developed many techniques for feature tracking, as many algorithms rely on accurate computation of correspondences through a sequence of images. These techniques work well for clean, out-of-water images; however, when imaging underwater, even an image of the same object can be drastically different due to varying water conditions. Moreover, many parameters can modify the optical properties of the water, and underwater image sequences show large spatial and temporal variations. As a result, descriptors of the same point on an object may be completely different between underwater video images taken under varying imaging conditions. This makes feature tracking between such images a very challenging problem.

The 3D reconstruction of underwater objects such as underwater mines, ship wreckage, coral reefs, pipes and telecommunication cables is an essential requirement for recognition and localization of these objects using video sequences. Feature tracking is applied to recover the 3D structure and motion of objects in a video sequence. To recover the 3D structure and motion, it is required to find the same features in different frames and match them.

The many articles published during the last years involving feature tracking from video sequences for 3D reconstruction of underwater objects demonstrate the increasing interest in this topic and its wide range of applications. E. Trucco et al. [1] have developed a robust tracker based on an efficient outlier rejection rule suitable for subsea video sequences. They extended the Shi-Tomasi-Kanade tracker in several ways: first, by introducing an automatic scheme for rejecting bad features, using a simple, efficient, model-free outlier rejection rule called X84; secondly, by developing a system that can run for an indefinite length of time, with the tracker running in real time on non-dedicated hardware. The Shi-Tomasi-Kanade feature tracker [10][9] is based on Sum-of-Squared-Difference (SSD) matching and assumes affine frame-to-frame warping. The system tracks small image regions and classifies them as good (reliable) or bad (unreliable) according to the residual of the match between the associated image regions in the first and current frames. K. Plakas et al. [2] have adapted the Shi-Tomasi-Kanade tracker for 3D shape reconstruction from uncalibrated underwater video sequences. Anne Sedlazeck et al. [3] have adapted the KLT (Kanade-Lucas-Tomasi) feature tracker to reconstruct the 3D surface of a ship wreck from an underwater monocular video sequence.
Andrew Hogue and Michael Jenkin [4] also adapted KLT as a feature tracker for 3D shape reconstruction of coral reefs using stereo image sequences. The KLT tracker is a good solution to the tracking problem when the input image sequence consists of dense video, i.e. the displacement between two consecutive images is very small. The main drawback of KLT is that it cannot be applied to stereo image sequences with a large baseline. The 3D reconstruction of coral reefs was developed by V. Brandou et al. [5] using the Scale-Invariant Feature Transform (SIFT) technique [6] for extracting and matching features in stereo video sequences. They captured coral reefs using two video cameras, which are aligned to capture stereo video sequences. P. Jasiobedzki et al. [8] have adapted the SIFT feature tracker for extracting features from images and recognizing the object in subsequent images.

In this paper, we proposed to adapt the SIFT method for extraction of features from underwater monocular video sequences. SIFT is an algorithm in computer vision to detect and describe local features in images. The SIFT method is very suitable when the interest points should be invariant to image scaling and rotation, and partially invariant to changes in illumination and 3D camera viewpoint. For every keypoint a descriptor is defined, which is a vector with 128 dimensions based on local image gradient directions in the keypoint neighborhood. SIFT is more robust than other feature tracking techniques: even for images with large translation and rotation it can still track the features.

The remaining sections of the paper are organized as follows: section II describes the adapted SIFT feature tracker in detail. The experimental results are presented in section III. Finally, section IV concludes the paper.

II. SIFT

This algorithm was first proposed by David Lowe in 1999, and then further developed and improved [7]. SIFT features have many advantages: they are natural features of images; they are favorably invariant to image translation, scaling, rotation, illumination, viewpoint, noise, etc.; they are distinctive and rich in information, suitable for fast and exact matching against a large database of features; and they are relatively fast to compute. The extraction of SIFT features from video sequences is done by sequentially applying the following steps: scale-space extrema detection, keypoint localization, orientation assignment and, finally, generation of keypoint descriptors.

A. Scale-space extrema detection

The first stage of computation searches over all scales and image locations. It is implemented efficiently by means of a Difference-of-Gaussian function to identify potential interest points that are invariant to orientation and scale. Interest points for SIFT features correspond to local extrema of Difference-of-Gaussian filters at different scales. A Gaussian-blurred image is described by the formula

L(x, y, \sigma) = G(x, y, \sigma) * I(x, y),   (1)

where * is the convolution operation in x and y, and

G(x, y, \sigma) = \frac{1}{2\pi\sigma^{2}} e^{-(x^{2} + y^{2})/2\sigma^{2}}.

To efficiently detect stable keypoint locations, scale-space extrema are found in the difference-of-Gaussian function convolved with the image, D(x, y, \sigma), which can be computed from the difference of two nearby scales separated by a constant multiplicative factor k:

D(x, y, \sigma) = (G(x, y, k\sigma) - G(x, y, \sigma)) * I(x, y) = L(x, y, k\sigma) - L(x, y, \sigma),   (2)

which is just the difference of the Gaussian-blurred images at scales σ and kσ.

The first step toward the detection of interest points is the convolution of the image with Gaussian filters at different scales, and the generation of difference-of-Gaussian images from the differences of adjacent blurred images. The convolved images are grouped by octave (an octave corresponds to doubling the value of σ), and the value of k is selected so that we obtain a fixed number of blurred images per octave. This also ensures that we obtain the same number of Difference-of-Gaussian images per octave (Fig. 1).

Figure 1. The blurred images at different scales, and the computation of the difference-of-Gaussian images

Interest points (called keypoints in the SIFT framework) are identified as local maxima or minima of the DoG images across scales. Each pixel in the DoG images is compared to its 8 neighbors at the same scale, plus the 9 corresponding neighbors at each of the two neighboring scales. If the pixel is a local maximum or minimum, it is selected as a candidate keypoint (Fig. 2). For each candidate keypoint:
1) Interpolation of nearby data is used to accurately determine its position;
2) Keypoints with low contrast are removed;
3) Responses along edges are eliminated;
4) The keypoint is assigned an orientation.
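As an illustration of equations (1) and (2) and of the 26-neighbor extremum test, the following is a minimal sketch, not the paper's implementation: it assumes a grayscale floating-point image and SciPy's Gaussian filter, and the constants (a base σ of 1.6 and three extra blur levels per octave) are common SIFT defaults rather than values stated in the paper.

# A minimal sketch: one octave of the Gaussian scale space L(x, y, sigma),
# its difference-of-Gaussian images D(x, y, sigma), and the 26-neighbor
# extremum test used to pick candidate keypoints.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(image, sigma=1.6, scales_per_octave=3):
    k = 2.0 ** (1.0 / scales_per_octave)                 # constant multiplicative factor k
    sigmas = [sigma * k ** i for i in range(scales_per_octave + 3)]
    gaussians = [gaussian_filter(image, s) for s in sigmas]          # L(x, y, sigma)
    dogs = [g2 - g1 for g1, g2 in zip(gaussians, gaussians[1:])]     # D = L(k*sigma) - L(sigma)
    return np.stack(dogs)

def is_candidate_keypoint(dogs, s, y, x):
    # A pixel is a candidate keypoint if it is the maximum or minimum of its
    # 3 x 3 x 3 scale-space neighborhood, i.e. of its 26 neighbors.
    patch = dogs[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
    value = dogs[s, y, x]
    return value == patch.max() or value == patch.min()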
Figure 2. Local extrema detection: the pixel marked X is compared against its 26 neighbors in a 3 × 3 × 3 neighborhood that spans adjacent DoG images

To determine the keypoint orientation, a gradient orientation histogram is computed in the neighborhood of the keypoint (using the Gaussian image at the closest scale to the keypoint's scale). The contribution of each neighboring pixel is weighted by the gradient magnitude and by a Gaussian window with a σ that is 1.5 times the scale of the keypoint. Peaks in the histogram correspond to dominant orientations. A separate keypoint is created for the direction corresponding to the histogram maximum, and for any other direction within 80% of the maximum value. All the properties of the keypoint are measured relative to the keypoint orientation; this provides invariance to rotation.

B. Locating Keypoints

At each candidate location, a detailed model is fit to determine scale and location. Keypoints are selected on the basis of measures of their stability. Fig. 3 shows the whole process of finding and describing the SIFT feature points. As shown in Fig. 3, to find and describe the SIFT feature points we follow these steps:

Figure 3. The diagram of the keypoint location process

1) Input an image with values in the range [0, 1].
2) Use a variable-scale Gaussian kernel G(x, y, σ) to create the scale space L(x, y, σ).
3) Calculate the Difference-of-Gaussian function as an approximation to the scale-normalized Laplacian, which is invariant to scale change.
4) Find the maxima or minima of the Difference-of-Gaussian function by comparing each pixel to its neighbors in 3 × 3 regions at the current, above and below scales.
5) Refine the keypoint locations and discard points whose contrast falls below a predetermined value, using

D(\hat{X}) = D + \frac{1}{2} \frac{\partial D^{T}}{\partial X} \hat{X},   (3)

where \hat{X} is calculated by setting the derivative of D(x, y, σ) to zero.
6) Extrema of the Difference-of-Gaussian that have large principal curvatures along edges are rejected by checking

\frac{\mathrm{Tr}(H)^{2}}{\mathrm{Det}(H)} < \frac{(r + 1)^{2}}{r},   (4)

where H in (4) is a 2 × 2 Hessian matrix and r is the ratio between the largest magnitude and the smallest one.
7) To achieve invariance to rotation, the gradient magnitude m(x, y) and orientation θ(x, y) are pre-computed with the following equations:

m(x, y) = \left[ (L(x+1, y) - L(x-1, y))^{2} + (L(x, y+1) - L(x, y-1))^{2} \right]^{1/2}   (5)

\theta(x, y) = \tan^{-1}\left( \frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)} \right)   (6)

8) Take a feature point and its 16 × 16 neighbors around it, divide them into 4 × 4 subregions, and histogram every subregion with 8 bins.

C. SIFT feature representation

Once a keypoint orientation has been selected, the feature descriptor is computed as a set of orientation histograms on 4 × 4 pixel neighborhoods. The orientation histograms are relative to the keypoint orientation, and the orientation data comes from the Gaussian image closest in scale to the keypoint's scale. The contribution of each pixel is weighted by the gradient magnitude, and by a Gaussian with σ 1.5 times the scale of the keypoint. Histograms contain 8 bins each, and each descriptor contains an array of 4 × 4 histograms around the keypoint (Fig. 4). This leads to a SIFT feature vector with 4 × 4 × 8 = 128 elements. This vector is normalized to enhance invariance to changes in illumination.
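The sketch below illustrates equations (5) and (6) and the 4 × 4 × 8 = 128-element descriptor layout of Section II-C. It is a simplified illustration only: it omits the Gaussian weighting, the rotation of the patch to the keypoint orientation and the trilinear interpolation used by full SIFT, and it assumes the keypoint lies at least 8 pixels away from the image border.

# A simplified sketch of the gradient computation (eqs. 5-6) and of the
# 4 x 4 subregion / 8-bin histogram layout of the 128-dimensional descriptor.
import numpy as np

def gradients(L):
    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[:, 1:-1] = L[:, 2:] - L[:, :-2]            # L(x+1, y) - L(x-1, y)
    dy[1:-1, :] = L[2:, :] - L[:-2, :]            # L(x, y+1) - L(x, y-1)
    m = np.sqrt(dx ** 2 + dy ** 2)                # eq. (5)
    theta = np.arctan2(dy, dx)                    # eq. (6), full [-pi, pi] range
    return m, theta

def raw_descriptor(L, x, y):
    # 16 x 16 patch around the keypoint, split into 4 x 4 subregions of
    # 4 x 4 pixels, each summarized by an 8-bin orientation histogram.
    m, theta = gradients(L)
    patch_m = m[y - 8:y + 8, x - 8:x + 8]
    patch_t = theta[y - 8:y + 8, x - 8:x + 8]
    bins = ((patch_t + np.pi) / (2 * np.pi) * 8).astype(int) % 8
    desc = np.zeros((4, 4, 8))
    for r in range(16):
        for c in range(16):
            desc[r // 4, c // 4, bins[r, c]] += patch_m[r, c]
    desc = desc.ravel()                           # 4 * 4 * 8 = 128 elements
    return desc / (np.linalg.norm(desc) + 1e-12)  # normalize for illumination invariance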
Figure 4. The keypoint descriptor is generated and weighted by a Gaussian window (blue circle)

D. Orientation assignment

Direction parameters for the keypoints are determined in order to quantize the description. Lowe formulated this determination with the norm and the angle in Euclidean space, with the direction of the keypoint used to normalize the gradient direction of the keypoint operator in the following step. After an image rotation, the corresponding directions can still be recovered.

E. Keypoint matching

The next step is to apply these SIFT methods to video frame sequences for object tracking. SIFT features are extracted from the input video frames and stored by their keypoint descriptors. Each keypoint is assigned 4 parameters: 2D location (x coordinate and y coordinate), orientation and scale. Each object is tracked in new video frames by separately comparing each feature point found in the new frames to those on the target object. The Euclidean distance is introduced as a similarity measurement between feature descriptors; a candidate match is preserved when the Euclidean distance between the two features is within the previously specified threshold. The best matches can then be picked out by the parameter values, in other words, by the consistency of their location, orientation and scale. Each cluster of three or more features that agree on an object and its pose is then subject to further detailed model verification, and subsequently outliers are thrown away. Finally, the probability that a particular set of features indicates the presence of an object is computed, considering the accuracy of fit and the number of probable false matches. Object matches that pass all of the above tests can be recognized as correct with high confidence.
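As a concrete illustration of the matching step described above, the sketch below uses OpenCV's SIFT implementation (cv2.SIFT_create), which is an assumption of this example rather than the authors' code, to detect keypoints in two consecutive frames and match the 128-dimensional descriptors by Euclidean distance with an approximate nearest-neighbor (FLANN k-d tree) search; the ratio value of 0.75 is a conventional choice, not the empirically found threshold mentioned in Section III.

# A minimal sketch, assuming OpenCV: SIFT keypoints are detected in two
# consecutive frames and a match is kept only if its nearest descriptor is
# clearly closer than the second nearest (ratio test).
import cv2

sift = cv2.SIFT_create()
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),  # 1 = FLANN_INDEX_KDTREE
                              dict(checks=50))

def match_consecutive_frames(frame1, frame2, ratio=0.75):
    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    kp1, des1 = sift.detectAndCompute(gray1, None)
    kp2, des2 = sift.detectAndCompute(gray2, None)
    knn = flann.knnMatch(des1, des2, k=2)                  # two nearest neighbors per feature
    good = [m[0] for m in knn
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return kp1, kp2, good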
III. EXPERIMENTAL RESULTS

The adapted SIFT approach is compared with other popular feature tracking techniques, such as KLT [9][10] and SURF [12], for tracking features of objects in an underwater video sequence. The comparison is based on evaluating the number of feature points matched in pairs of successive frames. The video sequence of underwater objects is captured in a small water body in natural light. The scene includes several objects at distances of 1m to 2m, near the corner of the water body. The monocular video sequence of underwater objects is captured using a Canon D10 waterproof camera at a depth of 2m from the surface level of the water. The captured monocular underwater video sequence is degraded due to the optical properties of light in the underwater environment: the captured video images suffer from non-uniform illumination, low contrast, blurring and diminished colors. Fig. 5 shows one pair of video frames suffering from these effects.

Figure 5. Successive video frames of the captured underwater video sequence

These effects are preprocessed sequentially by applying homomorphic filtering, wavelet denoising and an anisotropic filtering technique [11]. Fig. 6 shows the result of preprocessing on a pair of video frames.

Figure 6. The result of pre-processing on diminished video images

The proposed SIFT technique is employed on the preprocessed pair of video frames to track the same features. Fig. 7 shows the extracted features using the SIFT method. The features extracted from each frame are stored in a feature descriptor, and the Euclidean distance is employed as a similarity measurement for each feature descriptor, using a suitable threshold found empirically. Finally, the features are matched based on a Best-Bin-First nearest neighbor approach. Fig. 8 shows matched features between a pair of consecutive images. Table I gives the comparison of matched feature points obtained using SIFT, KLT and SURF. From Table I we can see that the SIFT method is the best compared to KLT and SURF, because the number of matched feature points is 419 using SIFT, compared to 397 and 403 matched feature points obtained using KLT and SURF respectively. This is due to the fact that SIFT extracts the most invariant features from the video sequence, where the disparity between the frames is very high. The underwater video sequence has the property that the lighting variations between frames are very high due to the optical properties of the water. Therefore SIFT is the most suitable feature tracker for underwater video sequences.
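The preprocessing chain itself is described in [11]; purely as an illustration of its first stage, the sketch below shows one plausible form of homomorphic filtering under the usual illumination-reflectance assumption (slowly varying illumination is attenuated in the log domain). It is not the authors' implementation, and the gains and cutoff are arbitrary example values.

# A rough sketch of homomorphic filtering (an assumption, not the method of [11]):
# take the log of the image, attenuate low frequencies (slowly varying illumination)
# with a Gaussian high-emphasis filter in the Fourier domain, and exponentiate back.
import numpy as np

def homomorphic_filter(gray, sigma=30.0, low_gain=0.5, high_gain=1.5):
    log_img = np.log1p(gray.astype(np.float64))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = gray.shape
    y, x = np.ogrid[:rows, :cols]
    dist2 = (y - rows / 2.0) ** 2 + (x - cols / 2.0) ** 2
    # High-emphasis transfer function: low_gain at DC, high_gain far from DC.
    H = (high_gain - low_gain) * (1.0 - np.exp(-dist2 / (2.0 * sigma ** 2))) + low_gain
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * H)))
    out = np.expm1(filtered)
    return np.clip(out / out.max(), 0.0, 1.0)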
Figure 7. Extracted features using SIFT on a pair of successive video frames

Figure 8. The result of SIFT feature matching on a pair of video frames

TABLE I. COMPARISON OF FEATURE TRACKING TECHNIQUES BASED ON NUMBER OF FEATURES MATCHED
Technique     Number of matched feature points
SIFT          419
KLT           397
SURF          403

IV. CONCLUSIONS

In this paper, we proposed to adapt the SIFT technique as a feature tracker in video sequences of underwater objects for the purpose of 3D surface reconstruction. Since there is no benchmark video data available for comparing the proposed technique with other popular techniques, we have conducted experiments using a captured video sequence obtained from a monocular video camera. To evaluate the efficacy of the proposed method, we have compared it with popular trackers such as the KLT and SURF techniques, based on the number of feature points matched. The experimental results show that the SIFT method gives comparably better performance than the KLT and SURF trackers. SIFT features are invariant to rotation, scale changes and affine transformations. KLT is fast and good under illumination changes, but not suitable for affine transformations. SURF is fast and its performance is close to that of SIFT, but it is not stable to rotation and illumination changes, which are common in the underwater environment.

ACKNOWLEDGMENT

This work has been supported by Grant NRB/SC/158/2008-2009, Naval Research Board, DRDO, New Delhi, India.

REFERENCES

[1] E. Trucco, Y. R. Petillot, I. Tena Ruiz, K. Plakas, and D. M. Lane, "Feature Tracking in Video and Sonar Subsea Sequences with Applications", Computer Vision and Image Understanding, vol. 79, pp. 92-122, February 2000.
[2] K. Plakas, E. Trucco, and A. Fusiello, "Uncalibrated vision for 3-D underwater applications", in Proceedings of OCEANS '98, vol. 1, pp. 272-276, October 1998.
[3] Anne Sedlazeck, Kevin Koser, and Reinhard Koch, "3D Reconstruction Based on Underwater Video from ROV Kiel 6000 Considering Underwater Imaging Conditions", OCEANS 2009 - EUROPE, pp. 1-10, 2009.
[4] Andrew Hogue and Michael Jenkin, "Development of an Underwater Vision Sensor for 3D Reef Mapping", in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5351-5356, October 2006.
[5] V. Brandou, A. G. Allais, M. Perrier, E. Malis, P. Rives, J. Sarrazin, and P. M. Sarradin, "3D Reconstruction of Natural Underwater Scenes Using the Stereovision System IRIS", OCEANS 2007 - Europe, pp. 1-6, 2007.
[6] David G. Lowe, "Distinctive image features from scale-invariant keypoints", International Journal of Computer Vision, vol. 60(2), pp. 91-110, 2004.
[7] David G. Lowe, "Object recognition from local scale-invariant features", in Proceedings of the International Conference on Computer Vision, vol. 2, pp. 1150-1157, 1999.
[8] Piotr Jasiobedzki, Stephen Se, Michel Bondy, and Roy Jakola, "Underwater 3D mapping and pose estimation for ROV operations", OCEANS 2008, pp. 1-6, September 2008.
[9] C. Tomasi and T. Kanade, "Detection and tracking of point features", Technical Report CMU-CS-91-132, Carnegie Mellon University, Pittsburgh, PA, April 1991.
[10] J. Shi and C. Tomasi, "Good features to track", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 593-600, June 1994.
[11] Prabhakar C. J. and Praveen Kumar P. U., "Underwater image denoising using adaptive wavelet subband thresholding", in Proceedings of the IEEE International Conference on Signal and Image Processing (ICSIP), pp. 322-327, December 2010.
[12] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool, "SURF: Speeded Up Robust Features", Computer Vision and Image Understanding, vol. 110(3), pp. 346-359, 2008.