The document proposes a lip print recognition method based on the dynamic time warping (DTW) algorithm. Lip images are preprocessed through normalization, separation of the upper and lower lips, and rotation. The lip pattern is then extracted through smoothing, top-hat transformation, and binarization. Projections of the lip pattern are computed and matched using DTW to determine similarity. The method achieved good results and could be improved by enhancing image quality and comparing projections at more angles for forensic identification applications.
Lip Recognition Based on DTW Algorithm
Presented by:
Piyush Mittal (211CS2281)
Information Security
Computer Science and Engineering Department
06/24/12
National Institute of Technology, Rourkela 06/24/12
Overview
French criminologist, Edmond Locard, first recommended
the use of lip prints for criminal identification in 1932.
Lip prints are impressions of human lips left on objects such as
drinking glasses, cigarettes, drink containers, aluminium foil, etc.
The study of human lips as a means of personal identification
was started in the 1970s by two Japanese scientists, Yasuo and
Kazuo Suzuki.
The uniqueness of lip prints makes cheiloscopy especially
effective when evidence such as lipstick blot marks, cups,
glasses or even envelopes is discovered at the crime scene.
Similarly to fingerprint patterns, lip prints have the following
particular properties: permanence, indestructibility and
uniqueness.
Lip prints are genotypically determined and are therefore
unique and stable throughout a person's life.
Additionally, lip prints are not only unique to an individual
but also offer the potential for recognition of an individual’s
gender.
Lip imprints can be captured with special police materials
(paper, special cream and magnetic powder). The imprint
pictures obtained this way are then scanned.
FEATURE EXTRACTION
1 Image normalization
1.1 Detection of lip area
It consists of several steps:
In the first step, normalization of the image histogram is carried out.
Then, pixels whose value is greater than the accepted threshold (180) are
converted to white.
Next, a median filter with a 7×7 mask is used to blur the image.
In the last step, binarization is conducted according to the following formula:
I_BIN(x, y) = 1 − round(0.516 · I(x, y) / I_AVG)
where:
I(x, y) – value of the pixel at coordinates (x, y) before binarization,
I_AVG – average value of all image pixels before binarization,
I_BIN(x, y) – value of the pixel at coordinates (x, y) after binarization.
The coefficient 0.516 was determined experimentally.
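These steps can be sketched in Python with NumPy. This is a minimal illustration, not the authors' implementation: a contrast stretch stands in for the histogram normalization, the 7×7 median blur is omitted for brevity, and the function and parameter names are made up.

```python
import numpy as np

def binarize_lip_area(img, threshold=180, coeff=0.516):
    """Sketch of the lip-area detection steps (illustrative names)."""
    img = img.astype(float)
    # 1. histogram normalization (approximated by a contrast stretch)
    img = (img - img.min()) / max(img.max() - img.min(), 1e-9) * 255.0
    # 2. pixels above the accepted threshold become white
    img[img > threshold] = 255.0
    # 3. (a 7x7 median blur would be applied here)
    # 4. binarize: I_BIN = 1 - round(0.516 * I(x, y) / I_AVG)
    i_avg = img.mean()
    return np.clip(1.0 - np.round(coeff * img / i_avg), 0.0, 1.0)
```

With this convention, dark pixels (the lip area) map to 1 and bright background pixels map to 0.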
1.2 Separation of Upper and Lower Lip
Separation is determined by a curve that runs through the
centre of the space between the lips. The designated curve
divides the lip print into an upper and a lower lip.
1.3 Lip Print Rotation
The curve obtained in the previous stage is then approximated
by a straight line (Fig. 3a). From the straight line equation, a
rotation angle towards the X axis can be determined. This
allows obtaining a separation line parallel to the Cartesian OX
axis. The rotated lip print image is shown in Fig. 3b.
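This step can be sketched as follows, assuming the separation curve is available as point coordinates (NumPy least-squares fit; names are illustrative):

```python
import numpy as np

def rotation_angle(xs, ys):
    """Approximate the separation curve by a straight line y = a*x + b
    and return the angle (degrees) it makes with the X axis; rotating
    the lip print by the negative of this angle makes the separation
    line parallel to the Cartesian OX axis."""
    a, _b = np.polyfit(xs, ys, 1)  # least-squares line fit
    return float(np.degrees(np.arctan(a)))
```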
Based on the data obtained in the steps (1)-(3)
we get a lip print image rotated and divided
into upper and lower lip (Fig. 4).
2. Lip pattern extraction
2.1 Lip pattern smoothing
This process aims to improve the quality of the lines
forming the lip pattern. The 5×5 smoothing masks are
depicted in Fig. 5.
The procedure is repeated for all the masks depicted in
Fig. 5, and the mask with the largest cumulative sum is
selected. For the mask selected in this way, the average value
of the pixels lying on the mask elements is calculated and
copied to the central point of the analyzed source image. The
effect of smoothing inside the region of interest is shown in
Fig. 6.
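The mask-selection rule can be sketched for a single pixel as below. The exact mask shapes are those of Fig. 5, which is not reproduced here, so four simple directional 5×5 masks are assumed purely for illustration.

```python
import numpy as np

# Illustrative 5x5 directional masks, each listing the (row, col)
# elements lying on one direction through the window centre.
MASKS = {
    "horizontal": [(2, j) for j in range(5)],
    "vertical":   [(i, 2) for i in range(5)],
    "diag_45":    [(i, 4 - i) for i in range(5)],
    "diag_135":   [(i, i) for i in range(5)],
}

def smooth_pixel(window):
    """For one 5x5 neighbourhood: pick the mask with the largest
    cumulative pixel sum and return the average along its elements,
    which is copied to the central point of the source image."""
    best = max(MASKS.values(),
               key=lambda m: sum(window[i, j] for i, j in m))
    return float(np.mean([window[i, j] for i, j in best]))
```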
2.2 Top-hat transformation
The purpose of this procedure is to emphasize the lines of the
lip pattern and separate them from the background. To increase
the effectiveness of the algorithm, the transformation is applied
twice with different mask sizes: 2×2 to highlight thin lines (up
to 3 pixels) and 6×6 to highlight thick lines (more than 3
pixels). The results of the top-hat transformation are depicted
in Fig. 7.
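A white top-hat (image minus its morphological opening) with flat square structuring elements can be sketched in plain NumPy; the loops are deliberately simple rather than fast:

```python
import numpy as np

def _gray_erode(img, k):
    """Grayscale erosion with a k x k flat structuring element."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def _gray_dilate(img, k):
    """Grayscale dilation with a k x k flat structuring element."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def top_hat(img, k):
    """White top-hat: image minus its opening (erosion then dilation).
    Run twice, e.g. with a small k for thin lines and a larger k for
    thick lines, as in the slides (2x2 and 6x6)."""
    return img - _gray_dilate(_gray_erode(img, k), k)
```

Bright thin structures narrower than the structuring element survive the subtraction, while the smooth background is removed.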
2.3 Binarization
This procedure is applied, according to the formula below, to
both images resulting from the top-hat transformation. For
the thin lines the binarization threshold was set to t=15,
while for the thick lines it was set to t=100.
I_BIN(x, y) = 1 for I(x, y) > t
I_BIN(x, y) = 0 for I(x, y) <= t
where:
I(x, y) – value of the pixel at coordinates (x, y) before binarization,
t – binarization threshold,
I_BIN(x, y) – value of the pixel at coordinates (x, y) after binarization.
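In code this thresholding is a one-liner; a small sketch using the two thresholds from the slides:

```python
import numpy as np

def binarize(img, t):
    """I_BIN(x, y) = 1 for I(x, y) > t, else 0."""
    return (img > t).astype(int)

# Usage with the thresholds from the slides (input names illustrative):
#   thin  = binarize(top_hat_thin_result, 15)
#   thick = binarize(top_hat_thick_result, 100)
```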
The effect of the lip print image binarization is
shown in Fig. 8.
In the last stage, the sub-images for the thin and thick lines are
combined into a single image, and the resulting global image
is denoised. For the noise reduction, appropriate 7×7 masks
have been designed; they are depicted in Fig. 9.
For each mask, the number of black pixels in the highlighted
area of the mask is counted. If the number of black pixels is
less than 5, the central pixel of the mask is converted to white.
Additionally, an area of 11×11 pixels around the central point
of the mask is searched. If there are fewer than 11 black pixels
inside the defined area, the value of the central point of the
mask is converted to white. An example of the noise reduction
is shown in Fig. 10.
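The two counting rules can be sketched for a single pixel as follows, assuming black pixels are stored as 1. The mask's "highlighted area" (Fig. 9 is not reproduced here) is approximated by the full 7×7 neighbourhood.

```python
import numpy as np

def denoise_pixel(img, i, j):
    """Return the denoised value of the central pixel (i, j): it is
    turned white (0) when fewer than 5 black pixels fall inside the
    7x7 neighbourhood, or fewer than 11 inside the 11x11 one."""
    def black_count(r):
        # count black pixels in the (2r+1) x (2r+1) window, clipped
        # to the image borders
        i0, i1 = max(i - r, 0), min(i + r + 1, img.shape[0])
        j0, j1 = max(j - r, 0), min(j + r + 1, img.shape[1])
        return int(img[i0:i1, j0:j1].sum())
    if black_count(3) < 5 or black_count(5) < 11:
        return 0
    return int(img[i, j])
```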
3 Feature extraction
The feature extraction algorithm is carried out for both the
upper and lower lip. The process relies on determining the
vertical, horizontal and diagonal projections of the lip pattern
image. Exemplary projections of the lip print image pixels
towards the appropriate axes are presented in Fig. 11.
Projections are one-dimensional vectors represented as
specialized histograms. Each projection shows the number of
black pixels lying along the appropriate direction: horizontal,
vertical, or oblique at the 45° and 135° angles.
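Such projection histograms can be computed directly with NumPy (a sketch: the oblique projections sum the image's anti-diagonals and diagonals):

```python
import numpy as np

def projections(binary):
    """Four projections of a binary lip pattern (black pixels = 1):
    horizontal (per row), vertical (per column) and the two oblique
    directions (45 and 135 degrees), each a 1-D histogram vector."""
    n, m = binary.shape
    offsets = range(-n + 1, m)
    return {
        "horizontal": binary.sum(axis=1),
        "vertical": binary.sum(axis=0),
        # anti-diagonals (45 deg): traces of the vertically flipped image
        "diag_45": np.array([np.trace(binary[::-1], k) for k in offsets]),
        # main diagonals (135 deg)
        "diag_135": np.array([np.trace(binary, k) for k in offsets]),
    }
```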
THE DTW METHOD
Given two sequences Q={q1, …, qn} and U={u1, …, um} to be
compared, a matrix D of size n×m is built in the first stage; it
allows the two sequences Q and U to be aligned. The matrix
element D(i, j) contains the distance between the points qi and
uj, so D(i, j)=d(qi, uj).
In this study, the Euclidean distance was applied.
On the basis of the elements D(i, j), the so-called sequence
matching cost has to be determined. The lower the matching
cost, the more similar the sequences Q and U are.
In the next stage, the warping path W is determined. The path
W consists of a set of elements of the matrix D, which defines
a mapping between the sequences Q and U.
The warping path is defined as:
W = w1, w2, …, wl, where max(n, m) <= l <= n + m − 1
The element wh of the path W is defined as:
wh = D(i, j), h = 1, …, l, i = 1, …, n, j = 1, …, m
A correctly determined path W has to fulfil a few conditions:
The first element of the sequence Q must be matched to
the first element of the sequence U:
w1 = w(1, 1) = D(1, 1)
The last element of the sequence Q must be matched to
the last element of the sequence U:
wl = w(n, m) = D(n, m)
Consecutive assignments in the path cannot concern elements
of the sequences that are more than one instant t apart:
i_t − i_(t−1) <= 1 and j_t − j_(t−1) <= 1
Points of the warping path W must be arranged
monotonically in time:
i_t − i_(t−1) >= 0 and j_t − j_(t−1) >= 0
The D matrix together with the warping path for
two sample sequences is shown in Fig. 12.
The elements wh of the path W can be found very efficiently
using dynamic programming. Determination of the path W
starts from the upper right corner of the populated matrix D.
In the first step i = n and j = m, so wl = D(n, m). Then the next
cell of the matrix D is chosen as the neighbour (i−1, j),
(i, j−1) or (i−1, j−1) with the minimum cumulative cost.
Now, on the basis of all the elements w1, w2, …, wl of the
path W, the total (cumulative) matching cost γ can be
calculated as the sum of the costs along the path.
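A compact implementation of this DTW matching cost; for the 1-D projection vectors used here, the Euclidean distance between two points reduces to the absolute difference, and the cumulative cost is accumulated by dynamic programming under the boundary, continuity and monotonicity conditions:

```python
import numpy as np

def dtw_cost(q, u):
    """Total DTW matching cost between sequences q and u.
    D(i, j) = d(q_i, u_j); acc(i, j) is the cheapest cumulative cost
    of any warping path reaching (i, j); the result is acc(n, m)."""
    q, u = np.asarray(q, float), np.asarray(u, float)
    D = np.abs(np.subtract.outer(q, u))  # pairwise point distances
    n, m = D.shape
    acc = np.full((n, m), np.inf)
    acc[0, 0] = D[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            # continuity + monotonicity: a path may step in from
            # (i-1, j), (i, j-1) or (i-1, j-1) only
            prev = min(acc[i - 1, j] if i > 0 else np.inf,
                       acc[i, j - 1] if j > 0 else np.inf,
                       acc[i - 1, j - 1] if (i > 0 and j > 0) else np.inf)
            acc[i, j] = D[i, j] + prev
    return float(acc[n - 1, m - 1])
```

A lower cost means the two projection histograms are more similar.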
Comparison of the lip print projections was done using
the following algorithm:
1. Matching of the horizontal, vertical and oblique (45° and
135°) projections from the tested and template lip prints using
the DTW algorithm (separately for the upper and lower lip).
2. Computation of the matching cost of all corresponding
projections by means of the cumulative cost γ, and averaging
of the result.
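The averaging step can be sketched generically; `cost_fn` would be the DTW matching cost, and the same call is made once for the upper and once for the lower lip (the names here are illustrative):

```python
import numpy as np

def average_matching_cost(test_proj, template_proj, cost_fn):
    """Step 2 of the algorithm: apply cost_fn (e.g. a DTW matching
    cost) to every pair of corresponding projections - horizontal,
    vertical, 45 and 135 degrees - and average the results."""
    return float(np.mean([cost_fn(test_proj[k], template_proj[k])
                          for k in test_proj]))
```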
DTW paths for the projections of two different sample lip
prints are shown in Fig. 13.
CONCLUSIONS AND FUTURE WORKS
The results obtained by the proposed method are good and
indicate the possibility of using this approach in forensic
identification systems.
In future studies, further improvement of the lip print image
quality will be performed. It is also planned to compare a
larger number of projections generated for different angles.
Additionally, studies are planned in which only a part of the
lip print will be analyzed.
REFERENCES
• Lukasz Smacki, Krzysztof Wrobel, Piotr Porwik, “Lip Print
Recognition Based on DTW Algorithm,” Department of
Computer Systems, University of Silesia, Katowice, Poland,
2011.
• E.J. Keogh and M.J. Pazzani, “Derivative Dynamic Time
Warping,” Proc. First SIAM International Conference on
Data Mining, Chicago, USA, 2001, pp. 1-11.
Any Suggestions?
For more information visit www.piyushmittal.in