CHAPTER-1
INTRODUCTION
A smart environment is one that is able to identify people, interpret
their actions, and react appropriately. Thus, one of the most important building
blocks of smart environments is a person identification system. Face recognition
devices are ideal for such systems, since they have recently become fast, cheap,
unobtrusive, and, when combined with voice-recognition, are very robust against
changes in the environment. Moreover, since humans primarily recognize each
other by their faces and voices, they feel comfortable interacting with an
environment that does the same.
Facial recognition systems are built on computer programs that
analyze images of human faces for the purpose of identifying them. The programs
take a facial image, measure characteristics such as the distance between the eyes,
the length of the nose, and the angle of the jaw, and create a unique file called a
"template." Using templates, the software then compares that image with another
image and produces a score that measures how similar the images are to each
other. Typical sources of images for use in facial recognition include video camera
signals and pre-existing photos such as those in driver's license databases.
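For illustration, comparing two such templates can reduce to a distance-based similarity score along the following lines (a hypothetical sketch; the feature vector contents and the threshold are assumptions, not details of any particular product):

import java.lang.Math;

public class TemplateMatcher
{
    // a template is modeled here as a vector of measured facial features
    // (e.g., eye distance, nose length, jaw angle), all in comparable units
    public static double similarityScore(double[] template1, double[] template2)
    {
        double sumSq = 0.0;
        for (int i = 0; i < template1.length; i++)
        {
            double diff = template1[i] - template2[i];
            sumSq += diff * diff;
        }
        double distance = Math.sqrt(sumSq);
        // map distance to a score in (0, 1]: identical templates score 1.0
        return 1.0 / (1.0 + distance);
    }

    public static boolean isMatch(double[] t1, double[] t2, double threshold)
    {
        return similarityScore(t1, t2) >= threshold;   // threshold is application-specific
    }
}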
Facial recognition systems are computer-based security systems that
are able to automatically detect and identify human faces. These systems depend
on a recognition algorithm, such as the eigenface method or the hidden Markov model. The
first step for a facial recognition system is to detect a human face and extract it
from the rest of the scene. Next, the system measures nodal points on the face, such
as the distance between the eyes, the shape of the cheekbones and other
distinguishable features.
These nodal points are then compared to the nodal points computed
from a database of pictures in order to find a match. Obviously, such a system is
limited based on the angle of the face captured and the lighting conditions present.
New technologies are currently in development to create three-dimensional models
of a person's face based on a digital photograph in order to create more nodal
points for comparison. However, such technology is inherently susceptible to error
given that the computer is extrapolating a three-dimensional model from a two-
dimensional photograph.
Principal Component Analysis (PCA) is an eigenvector method designed to
model linear variation in high-dimensional data. PCA performs dimensionality
reduction by projecting the original n-dimensional data onto the k << n
-dimensional linear subspace spanned by the leading eigenvectors of the data’s
covariance matrix. Its goal is to find a set of mutually orthogonal basis functions
that capture the directions of maximum variance in the data and for which the
coefficients are pairwise decorrelated. For linearly embedded manifolds, PCA is
guaranteed to discover the dimensionality of the manifold and produces a compact
representation.
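As a brief formal sketch (standard PCA notation added here for clarity; the symbols are not taken from this report), the covariance matrix of the N training vectors and the projection of a face vector x can be written as

\[
\Sigma = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^{T}, \qquad
y = W_{PCA}^{T}(x - \mu),
\]

where \(\mu\) is the mean of the training vectors and the columns of \(W_{PCA}\) are the k leading eigenvectors of \(\Sigma\).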
Facial Recognition Applications:
Facial recognition is deployed in large-scale citizen identification
applications, surveillance applications, law enforcement applications such as
booking stations, and kiosks.
1.1 Problem Definition
Facial recognition systems are computer-based security systems that are able
to automatically detect and identify human faces. These systems depend on a
recognition algorithm. However, most algorithms consider only global data patterns
during the recognition process, which does not yield an accurate recognition
system. We therefore propose a face recognition system that can recognize faces
with the maximum accuracy possible.
1.2 System Environment
The front end is designed and executed with J2SDK 1.4.0, handling
the core Java part with Swing user-interface components. Java is a robust, object-
oriented, multi-threaded, distributed, secure and platform-independent language.
It has a wide variety of packages to implement our requirements, and a number of classes
and methods can be utilized for programming purposes. These features make it much
easier for the programmer to implement the required concepts and algorithms in
Java.
The features of Java are as follows:
Core Java concepts such as exception handling, multithreading and
streams are well utilized in the project environment.
Exception handling can be done with the predefined exceptions, and
there is provision for writing custom exceptions for our application.
Garbage collection is done automatically, so memory management is
very safe.
The user interface can be built with the Abstract Window Toolkit and
also the Swing classes. These provide a variety of classes for components and containers. We
can make instances of these classes, and each instance denotes a particular object that
can be utilized in our program.
Event handling is performed with the delegation event model. Objects
are assigned to listeners that watch for events; when an event takes
place, the corresponding method to handle that event is called by the listener,
which is declared in the form of an interface, and executed.
This application makes use of the ActionListener interface, and button
click events are handled by it. The separate actionPerformed() method contains the
details of the response to each event.
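As an illustration, a Swing button in this style of application might be wired up as follows (a minimal sketch; the button label and handler body are hypothetical, not taken from the project code):

import javax.swing.*;
import java.awt.event.*;

public class TrainButtonDemo
{
    public static void main(String[] args)
    {
        JFrame frame = new JFrame("Face Recognition");
        JButton trainButton = new JButton("Train");

        // register an ActionListener; actionPerformed() runs on every click
        trainButton.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e)
            {
                System.out.println("Training started...");
            }
        });

        frame.getContentPane().add(trainButton);
        frame.setSize(200, 100);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}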
Java also provides concepts such as Remote Method Invocation and
networking, which are useful in a distributed environment.
CHAPTER-2
SYSTEM ANALYSIS
2.1 Existing System:
Many face recognition techniques have been developed over the past
few decades. One of the most successful and well-studied approaches to face
recognition is the appearance-based method. When using appearance-based
methods, we usually represent an image of size n × m pixels by a vector in an n × m-
dimensional space. In practice, however, these n × m-dimensional spaces are too
large to allow robust and fast face recognition. A common way to attempt to
resolve this problem is to use dimensionality reduction techniques.
Two of the most popular techniques for this purpose are,
2.1.1 Principal Component Analysis (PCA).
2.1.2 Linear Discriminant Analysis (LDA).
2.1.1 Principal Component Analysis (PCA):
The purpose of PCA is to reduce the large dimensionality of the data
space (observed variables) to the smaller intrinsic dimensionality of the feature space
(independent variables), which is needed to describe the data economically. This
is the case when there is a strong correlation between observed variables. The jobs
which PCA can do include prediction, redundancy removal, feature extraction, data
compression, and so on. Because PCA is a powerful technique for the linear domain,
applications having linear models are suitable,
such as signal processing, image processing, system and control theory,
and communications.
The main idea of using PCA for face recognition is to express the large 1-D
vector of pixels constructed from a 2-D face image in terms of the compact principal
components of the feature space. This is called eigenspace projection. The eigenspace
is calculated by identifying the eigenvectors of the covariance matrix derived from
a set of face images (vectors).
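For example, the flattening step that turns a 2-D grayscale image into the 1-D vector used for eigenspace projection could look like the following (an illustrative sketch only; the class and method names are hypothetical, not part of the project code):

public class ImageVectorizer
{
    // flatten a rows x cols grayscale image into a single vector, row by row
    public static double[] toVector(int[][] pixels)
    {
        int rows = pixels.length;
        int cols = pixels[0].length;
        double[] v = new double[rows * cols];
        for (int r = 0; r < rows; r++)
        {
            for (int c = 0; c < cols; c++)
            {
                v[r * cols + c] = pixels[r][c];
            }
        }
        return v;
    }
}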
2.1.2 Linear Discriminant Analysis (LDA):
LDA is a supervised learning algorithm. LDA searches for the
projection axes on which the data points of different classes are far from each other
while requiring data points of the same class to be close to each other. Unlike
PCA, which encodes information in an orthogonal linear space, LDA encodes
discriminating information in a linearly separable space using bases that are not
necessarily orthogonal. It is generally believed that algorithms based on LDA
are superior to those based on PCA.
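For reference, the standard Fisher criterion behind LDA (stated here for clarity; the notation is not taken from this report) chooses the projection w that maximizes the ratio of between-class to within-class scatter:

\[
w^{*} = \arg\max_{w} \frac{w^{T} S_{B}\, w}{w^{T} S_{W}\, w},
\]

where \(S_B\) is the between-class scatter matrix and \(S_W\) is the within-class scatter matrix of the labeled training data.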
However, most of these algorithms consider only global data patterns during the
recognition process, which does not yield an accurate recognition system. In
particular, the existing approaches are:
 Less accurate
 Unable to deal with the manifold structure of the data
 Unable to deal with biometric characteristics.
2.2 Proposed System:
PCA and LDA aim to preserve the global structure. However, in
many real-world applications, the local structure is more important. In this section,
we describe Locality Preserving Projection (LPP), a new algorithm for learning a
locality preserving subspace.
The objective function of LPP is as follows:

\[
\min \sum_{ij} (y_i - y_j)^2\, S_{ij},
\]

where y_i is the low-dimensional projection of the face vector x_i and S_ij is the
weight on the edge joining nodes i and j of the adjacency graph described below.
The manifold structure is modeled by a nearest-neighbor graph which
preserves the local structure of the image space. A face subspace is obtained by
Locality Preserving Projections (LPP). Each face image in the image space is
mapped to a low-dimensional face subspace, which is characterized by a set of
feature images, called Laplacianfaces. The face subspace preserves local
structure and seems to have more discriminating power than the PCA approach for
classification purposes. We also provide a theoretical analysis to show that PCA,
LDA, and LPP can be obtained from different graph models. Central to this is a
graph structure that is inferred on the data points. LPP finds a projection that
respects this graph structure. In our theoretical analysis, we show how PCA, LDA,
and LPP arise from the same principle applied to different choices of this graph
structure.
It is worthwhile to highlight several aspects of the proposed approach here:
1. While the Eigenfaces method aims to preserve the global structure of the
image space, and the Fisherfaces method aims to preserve the discriminating
information, our Laplacianfaces method aims to preserve the local structure of the
image space, which real-world applications mostly need.
2. An efficient subspace learning algorithm for face recognition should be
able to discover the nonlinear manifold structure of the face space. Our proposed
Laplacianfaces method explicitly considers the manifold structure, which is
modeled by an adjacency graph and reflects the intrinsic structure of the face
manifold.
3. LPP shares some similar properties with LLE. LPP is linear, while LLE is
nonlinear. Moreover, LPP is defined everywhere, while LLE is defined only on the
training data points and it is unclear how to evaluate the maps for new test points.
In contrast, LPP may simply be applied to any new data point to locate it in the
reduced representation space.
The algorithmic procedure of Laplacianfaces is formally stated below:
1. PCA projection.
We project the image set into the PCA subspace by throwing away
the smallest principal components. In our experiments, we kept 98 percent of the
information in the sense of reconstruction error. For the sake of simplicity, we still
use x to denote the images in the PCA subspace in the following steps. We denote
by WPCA the transformation matrix of PCA.
2. Constructing the nearest-neighbor graph.
Let G denote a graph with n nodes. The ith node corresponds to the
face image xi . We put an edge between nodes i and j if xi and xi are “close,” i.e., xi
is among k nearest neighbors of xi, or xi is among k nearest neighbors of xj. The
constructed nearest neighbor graph is an approximation of the local manifold
structure. Note that here we do not use the neighborhood to construct the graph.
This is simply because it is often difficult to choose the optimal " in the real-world
applications, while k nearest-neighbor graph can be constructed more stably. The
disadvantage is that the k nearest-neighbor search will increase the computational
complexity of our algorithm. When the computational complexity is a major
concern, one can switch to the "-neighborhood.
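A minimal sketch of this graph-construction step is shown below (illustrative only; it assumes the face images have already been flattened into double[] vectors, and the class and method names are hypothetical):

public class KnnGraph
{
    // build a symmetric k-nearest-neighbor adjacency matrix from n face vectors
    public static boolean[][] build(double[][] faces, int k)
    {
        int n = faces.length;
        boolean[][] adj = new boolean[n][n];
        for (int i = 0; i < n; i++)
        {
            // squared Euclidean distances from face i to every other face
            double[] dist = new double[n];
            for (int j = 0; j < n; j++)
            {
                double d = 0.0;
                for (int p = 0; p < faces[i].length; p++)
                {
                    double diff = faces[i][p] - faces[j][p];
                    d += diff * diff;
                }
                dist[j] = d;
            }
            // pick the k nearest neighbors of i (excluding i itself)
            boolean[] picked = new boolean[n];
            for (int m = 0; m < k && m < n - 1; m++)
            {
                int best = -1;
                for (int j = 0; j < n; j++)
                {
                    if (j == i || picked[j]) continue;
                    if (best == -1 || dist[j] < dist[best]) best = j;
                }
                picked[best] = true;
                // an edge is added if either node is among the other's k nearest neighbors
                adj[i][best] = true;
                adj[best][i] = true;
            }
        }
        return adj;
    }
}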
3. Choosing the weights. If nodes i and j are connected, put

\[
S_{ij} = e^{-\frac{\lVert x_i - x_j \rVert^{2}}{t}},
\]

where t is a suitable constant. Otherwise, put Sij = 0. The weight matrix S of
graph G models the face manifold structure by preserving local structure. The
justification for this choice of weights can be traced back to the work on Laplacian
Eigenmaps.
4. Eigenmap. Compute the eigenvectors and eigenvalues of the generalized
eigenvector problem

\[
X L X^{T} w = \lambda\, X D X^{T} w,
\]

where D is a diagonal matrix whose entries are column (or row, since S is
symmetric) sums of S, Dii = ∑j Sji, and L = D - S is the Laplacian matrix. The
ith column of matrix X is xi.
These eigenvalues are equal to or greater than zero because the matrices XLXT
and XDXT are both symmetric and positive semidefinite. Let w0, w1, ..., wk-1 be the
solutions of the problem above, ordered by increasing eigenvalue. Thus, the
embedding is as follows:

\[
x \rightarrow y = W^{T} x, \qquad W = W_{PCA} W_{LPP}, \qquad
W_{LPP} = [w_0, w_1, \ldots, w_{k-1}],
\]

where y is a k-dimensional vector and W is the transformation matrix. This linear
mapping best preserves the manifold’s estimated intrinsic geometry in a linear
sense. The column vectors of W are the so-called Laplacianfaces.
This principle is implemented as an unsupervised learning procedure with
training and test data.
The system must implement Principal Component Analysis
to reduce each image to a dimension less than n using the covariance of the data.
The system uses an unsupervised learning algorithm, so it must be
trained properly with relevant data sets. Based on this training, the input data is tested
by the application and the result is displayed to the user.
2.3 System Requirement
Hardware specifications:
Processor : Intel Processor IV
RAM : 128 MB
Hard disk : 20 GB
CD drive : 40 x Samsung
Floppy drive : 1.44 MB
Monitor : 15’ Samtron color
Keyboard : 108 mercury keyboard
Mouse : Logitech mouse
Software Specification:
Operating System – Windows XP/2000
Language used – J2sdk1.4.0
2.4 System Analysis Methods
System analysis can be defined as a method of determining how best to use
the available resources and machines and how to perform tasks to meet the
information needs of an organization. It is also a management technique that helps
us in designing a new system or improving an existing one. The four basic
elements in system analysis are
• Output
• Input
• Files
• Process
The above-mentioned are the four bases of system analysis.
2.5 Feasibility Study
Feasibility is the study of whether or not the project is worth doing.
The process that follows this determination is called a feasibility study. This study
is undertaken within the given time constraints and normally culminates in a written and oral
feasibility report. This feasibility study is categorized into six different types.
They are
• Technical Analysis
• Economical Analysis
• Performance Analysis
• Control and Security Analysis
• Efficiency Analysis
• Service Analysis
2.5.1 Technical Analysis
This analysis is concerned with specifying the software that will
successfully satisfy the user requirements. The technical needs of a system include
the ability to produce the outputs in a given time and to meet the response time under
certain conditions.
2.5.2 Economic Analysis
Economic analysis is the most frequently used technique for
evaluating the effectiveness of a proposed system. This is also called cost/benefit
analysis. It is used to determine the benefits and savings that are expected from a
proposed system and compare them with the costs. If the benefits outweigh the costs,
then the decision is made to proceed to the design phase and implement the system.
2.5.3 Performance Analysis
The analysis of the performance of a system is also very important.
This analysis examines the performance of the system both before
and after the proposed system is introduced. If the analysis proves satisfactory from the
company’s side, then the result is moved to the next analysis phase.
Performance analysis is nothing but looking at program execution to pinpoint
where bottlenecks or other performance problems, such as memory leaks, might
occur. If a problem is spotted, it can then be rectified.
2.5.4 Efficiency Analysis
This analysis mainly deals with the efficiency of the system based on
this project. The resources required by the program to perform a particular function
are analyzed in this phase. It also checks how efficient the project is on the
system in spite of any changes to the system. The efficiency of the system should
be analyzed in such a way that the user does not feel any difference in the way of
working. Besides, it should be taken into consideration that the project on the
system should last for a long time.
CHAPTER-3
SYSTEM DESIGN
Design is concerned with identifying software components, specifying
relationships among components, specifying the software structure and providing a
blueprint for the development phase.
Modularity is one of the desirable properties of large systems. It
implies that the system is divided into several parts in such a manner that the
interaction between parts is minimal and clearly specified.
The design explains the software components in detail. This helps the
implementation of the system. Moreover, it guides further changes in the
system to satisfy future requirements.
3.1 Project modules:
3.1.1 Read/Write Module:
This module provides the basic operations for loading input images and saving
the resultant images produced by the algorithms. The image files are read, processed,
and new images are written as the output images.
3.1.2 Resizing Module:
Here, the faces are converted to equal size using a linear scaling algorithm
for calculation and comparison. In this module, larger or smaller images
are converted to a standard size.
3.1.3 Image Manipulation:
Here, the face recognition algorithm using Locality Preserving
Projections (LPP) is developed for the various faces enrolled in the database.
3.1.4 Testing Module:
Here, the input images are resized and then compared with the
intermediate image to find the tested image, which is then compared with the
Laplacianfaces to find the accurate match.
FIGURE: 1 Design flow diagram

3.2 System Development
This system is developed to implement Principal Component Analysis.
Image manipulation: this module is designed to view all the faces that are
considered in our training set. As described in the introduction, PCA is an
eigenvector method that performs dimensionality reduction by projecting the
original n-dimensional data onto the k << n-dimensional linear subspace spanned
by the leading eigenvectors of the data’s covariance matrix, producing a compact
representation of linearly embedded manifolds.
1) Training module:
Unsupervised learning is learning from observation and
discovery. The data mining system is supplied with objects but no classes are
defined, so it has to observe the examples and recognize patterns (i.e., class
descriptions) by itself. This process requires a training data set. This system provides
a training set of 17 faces, each with three different poses. The training is an
iterative process that stores the required details in a two-dimensional face template
array.
2) Test module:
After the training process is over, the module processes the input face image
through the eigenface procedure and then reports whether it is recognized or not.
CHAPTER-4
IMPLEMENTATION
Implementation includes all those activities that take place to convert
from the old system to the new. The new system may be totally new, replacing an
existing system, or it may be a major modification to the system currently in
use.
This system, “Face Recognition”, is a new system. Implementation as a
whole involves all the tasks performed to successfully replace an existing system or
introduce new software to satisfy the requirements.
The entire work can be described as retrieval of faces from the database,
processing for the eigenface training method, execution of the test cases, and finally
display of the result to the user.
The test cases were performed in all aspects and the system gave the
correct result in all cases.
4.1. Implementation Details:
4.1.1 Form design
A form is a tool with a message; it is the physical carrier of data or
information. It can also constitute authority for actions. In the form design, separate
files are used for each module. The following is the list of forms used in this project:
1) Main Form
This form contains an option for viewing faces from the database. The system
retrieves the images stored in the folders called train and test, which are available in
the bin folder of the application.
2) View database Form:
This form retrieves the faces available in the train folder. It is provided just for
viewing purposes for the user.
3) Recognition Form :
This form provides an option for loading an input image from the test folder.
The user then has to click the Train button, which leads the application to train and
gain knowledge, as it follows an unsupervised learning algorithm.
Unsupervised learning - This is learning from observation and discovery. The
data mining system is supplied with objects but no classes are defined, so it has to
observe the examples and recognize patterns (i.e., class descriptions) by itself. This
results in a set of class descriptions, one for each class discovered in the
environment. Again, this is similar to cluster analysis in statistics.
The user can then click the Test button to see the matching for
the faces. The matched face will be displayed in the place provided for the matched
face. In case of any difference, the information will be displayed in the place
provided in the form.
4.1.2 Input design
Inaccurate input data is the most common cause of errors in data
processing. Errors entered by data entry operators can be controlled by input design.
Input design is the process of converting user-originated inputs to a computer-
based format. Input data are collected and organized into groups of similar data.
4.1.3 Menu Design
The menu in this application is organized into an MDI form that organizes
the viewing of image files from the folders. It also has options for loading an image as
input, performing the training method and testing whether the face is recognized or not.
4.1.4 Data base design:
A database is a collection of related data. A database has the following
properties:
i) A database reflects changes to the information.
ii) A database is a logically coherent collection of data with some
inherent meaning.
This application takes the images from the default folders set for this
application, the train and test folders. The files use the .jpeg extension.
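For example, the application could enumerate the enrolled images in these folders as follows (a small sketch; the folder names match those mentioned above, while the class and method names are hypothetical):

import java.io.File;
import java.io.FilenameFilter;

public class FaceFolderReader
{
    // list all .jpeg face images in a folder such as "train" or "test"
    public static File[] listFaceImages(String folderPath)
    {
        File folder = new File(folderPath);
        return folder.listFiles(new FilenameFilter() {
            public boolean accept(File dir, String name)
            {
                return name.toLowerCase().endsWith(".jpeg");
            }
        });
    }
}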
4.1.5 Code Design
o Face Enrollment
 - a new face can be added by the user into the facespace database
o Face Verification
 - verifies a person's face in the database with reference to his/her
identity
o Face Recognition
 - compares a person's face with all the images in the database and chooses
the closest match. Here, Principal Component Analysis is performed with the training
data set, and the result is obtained from the test data set.
o Face Retrieval
 - displays all the faces and their templates in the database
o Statistics
 - stores a list of recognition accuracies for analyzing the FRR (False
Rejection Rate) and FAR (False Acceptance Rate), as sketched below
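A minimal sketch of how such statistics could be computed is shown below (illustrative only; the counters are assumed to be accumulated while running the verification tests and are not part of the project code):

public class RecognitionStats
{
    private int genuineAttempts = 0;   // comparisons of a face against its true identity
    private int falseRejects = 0;      // genuine faces that were wrongly rejected
    private int impostorAttempts = 0;  // comparisons of a face against a different identity
    private int falseAccepts = 0;      // impostor faces that were wrongly accepted

    public void recordGenuine(boolean accepted)
    {
        genuineAttempts++;
        if (!accepted) falseRejects++;
    }

    public void recordImpostor(boolean accepted)
    {
        impostorAttempts++;
        if (accepted) falseAccepts++;
    }

    // FRR = false rejections / genuine attempts
    public double frr()
    {
        return genuineAttempts == 0 ? 0.0 : (double) falseRejects / genuineAttempts;
    }

    // FAR = false acceptances / impostor attempts
    public double far()
    {
        return impostorAttempts == 0 ? 0.0 : (double) falseAccepts / impostorAttempts;
    }
}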
4.2 Coding:
import java.io.*;

public class PGM_ImageFilter
{
    // image file paths and a flag controlling progress output
    private String inFilePath;
    private String outFilePath;
    private boolean printStatus = true;   // assumed default: show progress messages

    // constructor
    public PGM_ImageFilter()
    {
        inFilePath = "";
        outFilePath = "";
    }

    // get functions
    public String get_inFilePath()
    {
        return inFilePath;
    }

    public String get_outFilePath()
    {
        return outFilePath;
    }

    // set functions
    public void set_inFilePath(String tFilePath)
    {
        inFilePath = tFilePath;
    }

    public void set_outFilePath(String tFilePath)
    {
        outFilePath = tFilePath;
    }

    // methods
    public void resize(int wout, int hout)
    {
        PGM imgin = new PGM();
        PGM imgout = new PGM();
        if (printStatus)
        {
            System.out.print("\nResizing...");
        }
        int r, c, inval, outval;

        // read input image
        imgin.setFilePath(inFilePath);
        imgin.readImage();

        // set output-image header
        imgout.setFilePath(outFilePath);
        imgout.setType("P5");
        imgout.setComment("#resized image");
        imgout.setDimension(wout, hout);
        imgout.setMaxGray(imgin.getMaxGray());

        // resize algorithm (linear coordinate scaling: each output pixel is
        // sampled from the proportionally mapped position in the input image)
        double win, hin;
        int xi, ci, yi, ri;
        win = imgin.getCols();
        hin = imgin.getRows();
        for (r = 0; r < imgout.getRows(); r++)
        {
            for (c = 0; c < imgout.getCols(); c++)
            {
                xi = c;
                yi = r;
                ci = (int) (xi * (win / (double) wout));
                ri = (int) (yi * (hin / (double) hout));
                inval = imgin.getPixel(ri, ci);
                outval = inval;
                imgout.setPixel(yi, xi, outval);
            }
        }
        if (printStatus)
        {
            System.out.println("done.");
        }

        // write output image
        imgout.writeImage();
    }
}
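For instance, the filter might be invoked as follows (a usage sketch only; the file names are hypothetical and the PGM helper class used by the filter is assumed to be defined elsewhere in the project):

public class ResizeDemo
{
    public static void main(String[] args)
    {
        PGM_ImageFilter filter = new PGM_ImageFilter();
        filter.set_inFilePath("train/face01.pgm");      // hypothetical input face image
        filter.set_outFilePath("train/face01_std.pgm"); // hypothetical output path
        filter.resize(20, 28);                          // resize to a common standard size
    }
}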
CHAPTER-5
SYSTEM TESTING
5.1 Software Testing
Software Testing is the process of confirming the functionality and
correctness of software by running it. Software testing is usually performed for one
of two reasons:
i) Defect detection
ii) Reliability estimation.
Software testing includes two types of testing. They are
1) White Box Testing
2) Black Box Testing
1) White Box Testing
White box testing is concerned only with testing the software product itself;
it cannot guarantee that the complete specification has been implemented. White
box testing is testing against the implementation and will discover
faults of commission, indicating that part of the implementation is faulty.
2) Black Box Testing
Black box testing is concerned only with testing the specification;
it cannot guarantee that all parts of the implementation have been tested. Thus, black
box testing is testing against the specification and will discover faults of omission,
indicating that part of the specification has not been fulfilled.
Functional testing is a testing process that is black box in nature. It is
aimed at examining the overall functionality of the product. It usually includes
testing of all the interfaces and should therefore involve the clients in the process.
The key to software testing is trying to find the myriad failure
modes – something that requires exhaustively testing the code on all possible
inputs. For most programs, this is computationally infeasible. Techniques that
attempt to test as many of the syntactic features of the code as possible (within
some set of resource constraints) are called white box software testing techniques.
Techniques that do not consider the code’s structure when test cases are selected
are called black box techniques.
In order to fully test a software product, both black and white box
testing are required. The problem with applying software testing to defect detection is
that testing can only suggest the presence of flaws, not their absence (unless the
testing is exhaustive). The problem with applying software testing to reliability
estimation is that the input distribution used for selecting test cases may be flawed.
In both of these cases, the mechanism used to determine whether program output is
correct is often impossible to develop. Obviously, the benefit of the entire software
testing process is highly dependent on many different pieces. If any of these parts
is faulty, the entire process is compromised.
Software is unique, unlike other physical processes where inputs
are received and outputs are produced. Where software differs is in the manner in
which it fails. Most physical systems fail in a fixed (and reasonably small) set of
ways. By contrast, software can fail in many bizarre ways. Detecting all of the
different failure modes for software is generally infeasible.
The final stage of the testing process should be system testing. This type
of test involves examination of the whole computer system: all the software
components, all the hardware components and any interfaces. The whole computer-
based system is checked not only for validity but also for meeting the objectives.
5.2 Efficiency of Laplacian Algorithm
Now, consider a simple example of image variability. Imagine that a
set of face images is generated while the human face rotates slowly. Thus, we can
say that the set of face images is intrinsically one dimensional.
Many recent works show that face images reside on a low-dimensional
submanifold of the image space. Therefore, an effective subspace learning algorithm
should be able to detect the nonlinear manifold structure. PCA and LDA
effectively see only the Euclidean structure; thus, they fail to detect the intrinsic
low dimensionality. With its neighborhood-preserving character, the Laplacianfaces
capture the intrinsic face manifold structure.
FIGURE:2
Two-dimensional linear embedding of face images by Laplacianfaces
Fig. 1 shows an example in which the face images with various poses and
expressions of a person are mapped into a two-dimensional subspace. The data set
contains these face images, each of size 20 × 28 pixels with 256 gray levels
per pixel. Thus, each face image is represented by a point in the 560-dimensional
ambient space. However, these images are believed to come from a submanifold
with few degrees of freedom.
The face images are mapped into a two-dimensional space with
continuous changes in pose and expression. Representative face images are
shown in different parts of the space. The face images are divided into two
parts: the left part includes the face images with open mouths, and the right part
includes the face images with closed mouths. This is because the mapping tries to
preserve local structure; specifically, it makes points that are neighbors in the image
space nearer in the face subspace. The 10 testing samples can simply be located in the
reduced representation space by the Laplacianfaces (column vectors of the matrix
W).
FIGURE:3
Fig. 2. Distribution of the 10 testing samples in the reduced representation subspace. As can be seen, these testing samples
optimally find their coordinates which reflect their intrinsic properties, i.e., pose and expression.
As can be seen, these testing samples optimally find their coordinates which
reflect their intrinsic properties, i.e., pose and expression. This observation tells us
that the Laplacianfaces are capable of capturing the intrinsic face manifold
structure.
FIGURE:4
The eigenvalues of LPP and LaplacianEigenmap.
Fig. 3 shows the eigenvalues computed by the two methods. As can be seen,
the eigenvalues of LPP are consistently greater than those of the Laplacian Eigenmaps.
5.2. 1 Experimental Results
A face image can be represented as a point in image space. However, due to
the unwanted variations resulting from changes in lighting, facial expression, and
pose, the image space might not be an optimal space for visual representation.
We can display the eigenvectors as images. These images may be
called Laplacianfaces. Using the Yale face database as the training set, we present
the first 10 Laplacianfaces in Fig. 4, together with Eigenfaces and Fisherfaces. A
face image can be mapped into the locality-preserving subspace by using the
Laplacianfaces.
FIGURE:5
Fig (a) Eigenfaces, (b) Fisher faces, and (c) Laplacianfaces calculated from the face images in the YALE database.
5.2.2 Face Recognition Using Laplacianfaces
In this section, we investigate the performance of our proposed
Laplacianfaces method for face recognition. The system’s performance is compared
with the Eigenfaces method and the Fisherfaces method.
In this study, three face databases were tested. The first one is the PIE
(pose, illumination, and expression) database, the second one is the Yale database and the
third one is the MSRA database.
In short, the recognition process has three steps. First, we calculate the
Laplacianfaces from the training set of face images; then, the new face image to be
identified is projected into the face subspace spanned by the Laplacianfaces;
finally, the new face image is identified by a nearest-neighbor classifier.
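A minimal sketch of the final nearest-neighbor step is shown below (illustrative only; it assumes each gallery face has already been projected into the Laplacianface subspace, and the class and field names are hypothetical):

public class NearestNeighborClassifier
{
    private double[][] gallery;   // projected training faces, one k-dimensional vector per row
    private String[] labels;      // identity label for each gallery entry

    public NearestNeighborClassifier(double[][] gallery, String[] labels)
    {
        this.gallery = gallery;
        this.labels = labels;
    }

    // return the label of the gallery face closest (in Euclidean distance) to the probe
    public String classify(double[] probe)
    {
        int best = -1;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < gallery.length; i++)
        {
            double d = 0.0;
            for (int j = 0; j < probe.length; j++)
            {
                double diff = probe[j] - gallery[i][j];
                d += diff * diff;
            }
            if (d < bestDist)
            {
                bestDist = d;
                best = i;
            }
        }
        return labels[best];
    }
}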
FIGURE:6
Fig. 5. The original face image and the cropped image
5.2.3 Yale Database
The Yale face database was constructed at the Yale Center for
Computational Vision and Control. It contains 165 grayscale images of 15
individuals. The images demonstrate variations in lighting condition (left-light,
center-light, right-light), facial expression (normal, happy, sad, sleepy, surprised,
and wink), and with/without glasses.
A random subset of six images was taken for the training set. The rest
was taken for testing. The testing samples were then projected into the low-
dimensional representation, and recognition was performed using a nearest-neighbor
classifier.
In general, the performance of the Eigenfaces method and the
Laplacianfaces method varies with the number of dimensions. We show the best
results obtained by Fisherfaces, Eigenfaces, and Laplacianfaces. The recognition
results are shown in Table 1. It is found that the Laplacianfaces method
significantly outperforms both the Eigenfaces and Fisherfaces methods.
TABLE :1
Performance Comparison on the Yale Database
FIGURE:7
Fig. 6 shows the plots of error rate versus dimensionality reduction.
5.2.4 PIE Database
Fig. 7 shows some of the faces with pose, illumination and
expression variations in the PIE database. Table 2 shows the recognition results.
As can be seen Fisher faces performs comparably to our algorithm on this
Database, while Eigenfaces performs poorly. The error rate for Laplacian faces,
Fisher faces, and Eigen faces .As can be seen, the error rate of our Laplacianfaces
method decreases fast as the dimensionality of the face subspace.
FIGURE:8
Fig. 7. The sample cropped face images of one individual from PIE database. The original face images are taken
under varying pose, illumination, and expression.
TABLE :2
Performance Comparison on the PIE Database
5.2.5 MSRA Database
This database was collected at Microsoft Research Asia. Sixty-four to
eighty face images were collected for each individual in each session. All
the faces are frontal. Fig. 9 shows sample cropped face images from this
database. In this test, one session was used for training and the other was used for
testing.
FIGURE:9
Fig. 8. Sample cropped face images of one individual from the MSRA database. The original face images are taken
under varying pose, illumination, and expression.
TABLE: 3
Performance comparison on the MSRA database with different numbers of training samples
Table 3 shows the recognition results. The Laplacianfaces method has a lower error
rate than those of the Eigenfaces and Fisherfaces methods.
CHAPTER-6
CONCLUSION
Our system is proposed to use Locality Preserving Projections for face
recognition, which eliminates the flaws in the existing system. The system reduces
the faces to lower dimensions and performs the LPP algorithm for
recognition. The application has been developed successfully and implemented as
mentioned above.
The system appears to be working correctly. It can be provided with a proper
training set of data and a test input for recognition. If the face is matched, the result
is given as a picture image; otherwise, a text message reports the difference.
SNAPSHOTS
OUTPUT PAGE: 1
OUTPUT PAGE: 2
OUTPUT PAGE: 3
OUTPUT PAGE: 4
OUTPUT PAGE: 5
OUTPUT PAGE: 6
OUTPUT PAGE: 7
REFERENCES
1. X. He and P. Niyogi, “Locality Preserving Projections,” Proc. Conf.
Advances in Neural Information Processing Systems, 2003.
2. A.U. Batur and M.H. Hayes, “Linear Subspace for Illumination
Robust Face Recognition,” Dec. 2001.
3. M. Belkin and P. Niyogi, “Laplacian Eigenmaps and Spectral
Techniques for Embedding and Clustering.”
4. P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, “Eigenfaces
vs. Fisherfaces: Recognition Using Class Specific Linear Projection,” IEEE Trans.
Pattern Analysis and Machine Intelligence, July 1997.
5. M. Belkin and P. Niyogi, “Using Manifold Structure for Partially
Labeled Classification,” 2002.
6. M. Brand, “Charting a Manifold,” Proc. Conf. Advances in Neural
Information Processing Systems, 2002.
7. F.R.K. Chung, “Spectral Graph Theory,” Proc. Regional Conf. Series
in Math., no. 92, 1997.
8. Y. Chang, C. Hu, and M. Turk, “Manifold of Facial Expression,”
Proc. IEEE Int’l Workshop Analysis and Modeling of Faces and
Gestures, Oct. 2003.
9. R. Gross, J. Shi, and J. Cohn, “Where to Go with Face
Recognition,” Proc. Third Workshop Empirical Evaluation Methods
in Computer Vision, Dec. 2001.
10. A.M. Martinez and A.C. Kak, “PCA versus LDA,” IEEE Trans.
Pattern Analysis and Machine Intelligence, Feb. 2001.

Mais conteúdo relacionado

Mais procurados

Image–based face-detection-and-recognition-using-matlab
Image–based face-detection-and-recognition-using-matlabImage–based face-detection-and-recognition-using-matlab
Image–based face-detection-and-recognition-using-matlabIjcem Journal
 
Emotion detection using cnn.pptx
Emotion detection using cnn.pptxEmotion detection using cnn.pptx
Emotion detection using cnn.pptxRADO7900
 
Scene recognition using Convolutional Neural Network
Scene recognition using Convolutional Neural NetworkScene recognition using Convolutional Neural Network
Scene recognition using Convolutional Neural NetworkDhirajGidde
 
Face recognition using neural network
Face recognition using neural networkFace recognition using neural network
Face recognition using neural networkIndira Nayak
 
Three case studies deploying cluster analysis
Three case studies deploying cluster analysisThree case studies deploying cluster analysis
Three case studies deploying cluster analysisGreg Makowski
 
Image attendance system
Image attendance systemImage attendance system
Image attendance systemMayank Garg
 
ML All Chapter PDF.pdf
ML All Chapter PDF.pdfML All Chapter PDF.pdf
ML All Chapter PDF.pdfexample43
 
GPU Compute in Medical and Print Imaging
GPU Compute in Medical and Print ImagingGPU Compute in Medical and Print Imaging
GPU Compute in Medical and Print ImagingAMD
 
Automated attendance system based on facial recognition
Automated attendance system based on facial recognitionAutomated attendance system based on facial recognition
Automated attendance system based on facial recognitionDhanush Kasargod
 
Heart disease prediction
Heart disease predictionHeart disease prediction
Heart disease predictionAriful Haque
 
Machine Learning Strategies for Time Series Prediction
Machine Learning Strategies for Time Series PredictionMachine Learning Strategies for Time Series Prediction
Machine Learning Strategies for Time Series PredictionGianluca Bontempi
 
HANDWRITTEN DIGIT RECOGNITION USING k-NN CLASSIFIER
HANDWRITTEN DIGIT RECOGNITION USING k-NN CLASSIFIERHANDWRITTEN DIGIT RECOGNITION USING k-NN CLASSIFIER
HANDWRITTEN DIGIT RECOGNITION USING k-NN CLASSIFIERvineet raj
 

Mais procurados (20)

Eigenfaces
EigenfacesEigenfaces
Eigenfaces
 
Image–based face-detection-and-recognition-using-matlab
Image–based face-detection-and-recognition-using-matlabImage–based face-detection-and-recognition-using-matlab
Image–based face-detection-and-recognition-using-matlab
 
Face Recognition
Face RecognitionFace Recognition
Face Recognition
 
Emotion detection using cnn.pptx
Emotion detection using cnn.pptxEmotion detection using cnn.pptx
Emotion detection using cnn.pptx
 
Scene recognition using Convolutional Neural Network
Scene recognition using Convolutional Neural NetworkScene recognition using Convolutional Neural Network
Scene recognition using Convolutional Neural Network
 
Face recognition using neural network
Face recognition using neural networkFace recognition using neural network
Face recognition using neural network
 
Human Emotion Recognition
Human Emotion RecognitionHuman Emotion Recognition
Human Emotion Recognition
 
Three case studies deploying cluster analysis
Three case studies deploying cluster analysisThree case studies deploying cluster analysis
Three case studies deploying cluster analysis
 
image classification
image classificationimage classification
image classification
 
Image attendance system
Image attendance systemImage attendance system
Image attendance system
 
Final ppt
Final pptFinal ppt
Final ppt
 
ML All Chapter PDF.pdf
ML All Chapter PDF.pdfML All Chapter PDF.pdf
ML All Chapter PDF.pdf
 
GPU Compute in Medical and Print Imaging
GPU Compute in Medical and Print ImagingGPU Compute in Medical and Print Imaging
GPU Compute in Medical and Print Imaging
 
Automated attendance system based on facial recognition
Automated attendance system based on facial recognitionAutomated attendance system based on facial recognition
Automated attendance system based on facial recognition
 
Heart disease prediction
Heart disease predictionHeart disease prediction
Heart disease prediction
 
PhD Defense
PhD DefensePhD Defense
PhD Defense
 
Face recognisation system
Face recognisation systemFace recognisation system
Face recognisation system
 
K Nearest Neighbors
K Nearest NeighborsK Nearest Neighbors
K Nearest Neighbors
 
Machine Learning Strategies for Time Series Prediction
Machine Learning Strategies for Time Series PredictionMachine Learning Strategies for Time Series Prediction
Machine Learning Strategies for Time Series Prediction
 
HANDWRITTEN DIGIT RECOGNITION USING k-NN CLASSIFIER
HANDWRITTEN DIGIT RECOGNITION USING k-NN CLASSIFIERHANDWRITTEN DIGIT RECOGNITION USING k-NN CLASSIFIER
HANDWRITTEN DIGIT RECOGNITION USING k-NN CLASSIFIER
 

Semelhante a Face recognition using laplacianfaces

Face Identification Project Abstract 2017
Face Identification Project Abstract 2017Face Identification Project Abstract 2017
Face Identification Project Abstract 2017ioshean
 
IRJET- Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Re...
IRJET-  	  Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Re...IRJET-  	  Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Re...
IRJET- Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Re...IRJET Journal
 
Review A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptx
Review A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptxReview A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptx
Review A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptxAravindHari22
 
Survey on Supervised Method for Face Image Retrieval Based on Euclidean Dist...
Survey on Supervised Method for Face Image Retrieval  Based on Euclidean Dist...Survey on Supervised Method for Face Image Retrieval  Based on Euclidean Dist...
Survey on Supervised Method for Face Image Retrieval Based on Euclidean Dist...Editor IJCATR
 
IRJET- Face Detection and Recognition using OpenCV
IRJET- Face Detection and Recognition using OpenCVIRJET- Face Detection and Recognition using OpenCV
IRJET- Face Detection and Recognition using OpenCVIRJET Journal
 
Automatic Attendance Management System Using Face Recognition
Automatic Attendance Management System Using Face RecognitionAutomatic Attendance Management System Using Face Recognition
Automatic Attendance Management System Using Face RecognitionKathryn Patel
 
Face recogntion using PCA algorithm
Face recogntion using PCA algorithmFace recogntion using PCA algorithm
Face recogntion using PCA algorithmAshwini Awatare
 
Criminal Detection System
Criminal Detection SystemCriminal Detection System
Criminal Detection SystemIntrader Amit
 
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPAN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPIRJET Journal
 
Comparative Study of Enchancement of Automated Student Attendance System Usin...
Comparative Study of Enchancement of Automated Student Attendance System Usin...Comparative Study of Enchancement of Automated Student Attendance System Usin...
Comparative Study of Enchancement of Automated Student Attendance System Usin...IRJET Journal
 
Image recognition
Image recognitionImage recognition
Image recognitionJoel Jose
 
Integrated Hidden Markov Model and Kalman Filter for Online Object Tracking
Integrated Hidden Markov Model and Kalman Filter for Online Object TrackingIntegrated Hidden Markov Model and Kalman Filter for Online Object Tracking
Integrated Hidden Markov Model and Kalman Filter for Online Object Trackingijsrd.com
 
IRJET- Face Recognition of Criminals for Security using Principal Component A...
IRJET- Face Recognition of Criminals for Security using Principal Component A...IRJET- Face Recognition of Criminals for Security using Principal Component A...
IRJET- Face Recognition of Criminals for Security using Principal Component A...IRJET Journal
 
Full biometric eye tracking
Full biometric eye trackingFull biometric eye tracking
Full biometric eye trackingVinoth Barithi
 
Real time multi face detection using deep learning
Real time multi face detection using deep learningReal time multi face detection using deep learning
Real time multi face detection using deep learningReallykul Kuul
 
IRJET- Spot Me - A Smart Attendance System based on Face Recognition
IRJET- Spot Me - A Smart Attendance System based on Face RecognitionIRJET- Spot Me - A Smart Attendance System based on Face Recognition
IRJET- Spot Me - A Smart Attendance System based on Face RecognitionIRJET Journal
 
IRJET - A Review on: Face Recognition using Laplacianface
IRJET - A Review on: Face Recognition using LaplacianfaceIRJET - A Review on: Face Recognition using Laplacianface
IRJET - A Review on: Face Recognition using LaplacianfaceIRJET Journal
 
Virtual Contact Discovery using Facial Recognition
Virtual Contact Discovery using Facial RecognitionVirtual Contact Discovery using Facial Recognition
Virtual Contact Discovery using Facial RecognitionIRJET Journal
 
Matlab image processing_2013_ieee
Matlab image processing_2013_ieeeMatlab image processing_2013_ieee
Matlab image processing_2013_ieeeIgslabs Malleswaram
 

Semelhante a Face recognition using laplacianfaces (20)

Face Identification Project Abstract 2017
Face Identification Project Abstract 2017Face Identification Project Abstract 2017
Face Identification Project Abstract 2017
 
IRJET- Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Re...
IRJET-  	  Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Re...IRJET-  	  Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Re...
IRJET- Comparative Study of PCA, KPCA, KFA and LDA Algorithms for Face Re...
 
Review A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptx
Review A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptxReview A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptx
Review A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptx
 
Survey on Supervised Method for Face Image Retrieval Based on Euclidean Dist...
Survey on Supervised Method for Face Image Retrieval  Based on Euclidean Dist...Survey on Supervised Method for Face Image Retrieval  Based on Euclidean Dist...
Survey on Supervised Method for Face Image Retrieval Based on Euclidean Dist...
 
IRJET- Face Detection and Recognition using OpenCV
IRJET- Face Detection and Recognition using OpenCVIRJET- Face Detection and Recognition using OpenCV
IRJET- Face Detection and Recognition using OpenCV
 
Automatic Attendance Management System Using Face Recognition
Automatic Attendance Management System Using Face RecognitionAutomatic Attendance Management System Using Face Recognition
Automatic Attendance Management System Using Face Recognition
 
Face recogntion using PCA algorithm
Face recogntion using PCA algorithmFace recogntion using PCA algorithm
Face recogntion using PCA algorithm
 
Criminal Detection System
Criminal Detection SystemCriminal Detection System
Criminal Detection System
 
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPAN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
 
Comparative Study of Enchancement of Automated Student Attendance System Usin...
Comparative Study of Enchancement of Automated Student Attendance System Usin...Comparative Study of Enchancement of Automated Student Attendance System Usin...
Comparative Study of Enchancement of Automated Student Attendance System Usin...
 
Image recognition
Image recognitionImage recognition
Image recognition
 
Integrated Hidden Markov Model and Kalman Filter for Online Object Tracking
Integrated Hidden Markov Model and Kalman Filter for Online Object TrackingIntegrated Hidden Markov Model and Kalman Filter for Online Object Tracking
Integrated Hidden Markov Model and Kalman Filter for Online Object Tracking
 
IRJET- Face Recognition of Criminals for Security using Principal Component A...
IRJET- Face Recognition of Criminals for Security using Principal Component A...IRJET- Face Recognition of Criminals for Security using Principal Component A...
IRJET- Face Recognition of Criminals for Security using Principal Component A...
 
Real time facial expression analysis using pca
Real time facial expression analysis using pcaReal time facial expression analysis using pca
Real time facial expression analysis using pca
 
Full biometric eye tracking
Full biometric eye trackingFull biometric eye tracking
Full biometric eye tracking
 
Real time multi face detection using deep learning
Real time multi face detection using deep learningReal time multi face detection using deep learning
Real time multi face detection using deep learning
 
IRJET- Spot Me - A Smart Attendance System based on Face Recognition
IRJET- Spot Me - A Smart Attendance System based on Face RecognitionIRJET- Spot Me - A Smart Attendance System based on Face Recognition
IRJET- Spot Me - A Smart Attendance System based on Face Recognition
 
IRJET - A Review on: Face Recognition using Laplacianface
IRJET - A Review on: Face Recognition using LaplacianfaceIRJET - A Review on: Face Recognition using Laplacianface
IRJET - A Review on: Face Recognition using Laplacianface
 
Virtual Contact Discovery using Facial Recognition
Virtual Contact Discovery using Facial RecognitionVirtual Contact Discovery using Facial Recognition
Virtual Contact Discovery using Facial Recognition
 
Matlab image processing_2013_ieee
Matlab image processing_2013_ieeeMatlab image processing_2013_ieee
Matlab image processing_2013_ieee
 

Mais de StudsPlanet.com

Hardware enhanced association rule mining
Hardware enhanced association rule miningHardware enhanced association rule mining
Hardware enhanced association rule miningStudsPlanet.com
 
Hardware enhanced association rule mining
Hardware enhanced association rule miningHardware enhanced association rule mining
Hardware enhanced association rule miningStudsPlanet.com
 
Face recognition using laplacianfaces
Face recognition using laplacianfaces Face recognition using laplacianfaces
Face recognition using laplacianfaces StudsPlanet.com
 
Worldwide market and trends for electronic manufacturing services
Worldwide market and trends for electronic manufacturing servicesWorldwide market and trends for electronic manufacturing services
Worldwide market and trends for electronic manufacturing servicesStudsPlanet.com
 
World electronic industry 2008
World electronic industry 2008World electronic industry 2008
World electronic industry 2008StudsPlanet.com
 
Trompenaars cultural dimensions
Trompenaars cultural dimensionsTrompenaars cultural dimensions
Trompenaars cultural dimensionsStudsPlanet.com
 
The building of the toyota car factory
The building of the toyota car factoryThe building of the toyota car factory
The building of the toyota car factoryStudsPlanet.com
 
The International legal environment of business
The International legal environment of businessThe International legal environment of business
The International legal environment of businessStudsPlanet.com
 
Roles of strategic leaders
Roles  of  strategic  leadersRoles  of  strategic  leaders
Roles of strategic leadersStudsPlanet.com
 
Resolution of intl commr disputes
Resolution of intl commr disputesResolution of intl commr disputes
Resolution of intl commr disputesStudsPlanet.com
 
Presentation on india's ftp
Presentation on india's ftpPresentation on india's ftp
Presentation on india's ftpStudsPlanet.com
 
Philips case study analysis
Philips case study analysisPhilips case study analysis
Philips case study analysisStudsPlanet.com
 

Mais de StudsPlanet.com (20)

Hardware enhanced association rule mining
Hardware enhanced association rule miningHardware enhanced association rule mining
Hardware enhanced association rule mining
 
Hardware enhanced association rule mining
Hardware enhanced association rule miningHardware enhanced association rule mining
Hardware enhanced association rule mining
 
Face recognition using laplacianfaces
Face recognition using laplacianfaces Face recognition using laplacianfaces
Face recognition using laplacianfaces
 
Worldwide market and trends for electronic manufacturing services
Worldwide market and trends for electronic manufacturing servicesWorldwide market and trends for electronic manufacturing services
Worldwide market and trends for electronic manufacturing services
 
World electronic industry 2008
World electronic industry 2008World electronic industry 2008
World electronic industry 2008
 
Weberian model
Weberian modelWeberian model
Weberian model
 
Value orientation model
Value orientation modelValue orientation model
Value orientation model
 
Value orientation model
Value orientation modelValue orientation model
Value orientation model
 
Uk intellectual model
Uk intellectual modelUk intellectual model
Uk intellectual model
 
Trompenaars cultural dimensions
Trompenaars cultural dimensionsTrompenaars cultural dimensions
Trompenaars cultural dimensions
 
The building of the toyota car factory
The building of the toyota car factoryThe building of the toyota car factory
The building of the toyota car factory
 
The International legal environment of business
The International legal environment of businessThe International legal environment of business
The International legal environment of business
 
Textile Industry
Textile IndustryTextile Industry
Textile Industry
 
Sales
SalesSales
Sales
 
Roles of strategic leaders
Roles  of  strategic  leadersRoles  of  strategic  leaders
Roles of strategic leaders
 
Role of ecgc
Role of ecgcRole of ecgc
Role of ecgc
 
Resolution of intl commr disputes
Resolution of intl commr disputesResolution of intl commr disputes
Resolution of intl commr disputes
 
Presentation on india's ftp
Presentation on india's ftpPresentation on india's ftp
Presentation on india's ftp
 
Players in ib
Players in ibPlayers in ib
Players in ib
 
Philips case study analysis
Philips case study analysisPhilips case study analysis
Philips case study analysis
 

Face recognition using laplacianfaces

  • 1. CHAPTER-1 INTRODUCTION A smart environment is one that is able to identify people, interpret their actions, and react appropriately. Thus, one of the most important building blocks of smart environments is a person identification system. Face recognition devices are ideal for such systems, since they have recently become fast, cheap, unobtrusive, and, when combined with voice-recognition, are very robust against changes in the environment. Moreover, since humans primarily recognize each other by their faces and voices, they feel comfortable interacting with an environment that does the same. Facial recognition systems are built on computer programs that analyze images of human faces for the purpose of identifying them. The programs take a facial image, measure characteristics such as the distance between the eyes, the length of the nose, and the angle of the jaw, and create a unique file called a "template." Using templates, the software then compares that image with another image and produces a score that measures how similar the images are to each other. Typical sources of images for use in facial recognition include video camera signals and pre-existing photos such as those in driver's license databases. Facial recognition systems are computer-based security systems that are able to automatically detect and identify human faces. These systems depend on a recognition algorithm, such as eigenface or the hidden Markov model. The first step for a facial recognition system is to recognize a human face and extract it for the rest of the scene. Next, the system measures nodal points on the face, such as the distance between the eyes, the shape of the cheekbones and other distinguishable features. 1
  • 2. These nodal points are then compared to the nodal points computed from a database of pictures in order to find a match. Obviously, such a system is limited based on the angle of the face captured and the lighting conditions present. New technologies are currently in development to create three-dimensional models of a person's face based on a digital photograph in order to create more nodal points for comparison. However, such technology is inherently susceptible to error given that the computer is extrapolating a three-dimensional model from a two- dimensional photograph. Principle Component Analysis is an eigenvector method designed to model linear variation in high-dimensional data. PCA performs dimensionality reduction by projecting the original n-dimensional data onto the k << n -dimensional linear subspace spanned by the leading eigenvectors of the data’s covariance matrix. Its goal is to find a set of mutually orthogonal basis functions that capture the directions of maximum variance in the data and for which the coefficients are pair wise decorrelated. For linearly embedded manifolds, PCA is guaranteed to discover the dimensionality of the manifold and produces a compact representation. Facial Recognition Applications: Facial recognition is deployed in large-scale citizen identification applications, surveillance applications, law enforcement applications such as booking stations, and kiosks. 2
faces with the maximum possible accuracy.

1.2 System Environment

The front end is designed and executed with J2SDK 1.4.0, using core Java together with the Swing user-interface components. Java is a robust, object-oriented, multi-threaded, distributed, secure, and platform-independent language. It provides a wide variety of packages, classes, and methods, which makes it much easier to implement the concepts and algorithms this project requires.

The relevant features of Java are as follows. Core Java concepts such as exception handling, multithreading, and streams are well utilized in the project environment. Exception handling can be done with the predefined exceptions, and custom exceptions can also be written for application-specific errors.
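As a small illustration of the custom-exception facility mentioned above, the following sketch defines an application-specific exception and handles it; the class name and message are illustrative only and are not taken from the project code.

// Minimal sketch of an application-specific (custom) exception.
// The class name and message are illustrative assumptions.
public class FaceNotFoundException extends Exception {
    public FaceNotFoundException(String message) {
        super(message);
    }
}

class ExceptionDemo {
    static void recognize(boolean matched) throws FaceNotFoundException {
        if (!matched) {
            throw new FaceNotFoundException("No matching face in the database");
        }
    }

    public static void main(String[] args) {
        try {
            recognize(false);
        } catch (FaceNotFoundException e) {
            // the custom exception is caught and handled here
            System.out.println("Handled: " + e.getMessage());
        }
    }
}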
Garbage collection is performed automatically, which makes memory management safe. The user interface can be built with the Abstract Window Toolkit (AWT) and the Swing classes, which provide a variety of components and containers; instances of these classes are the objects manipulated in the program. Event handling follows the delegation event model: listener objects are registered on components and observe for events, and when an event occurs the corresponding handler method of the listener interface is invoked. This application uses the ActionListener interface to handle button-click events; the actionPerformed() method contains the response to each event. Java also offers Remote Method Invocation and networking facilities that are useful in a distributed environment.
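The following is a minimal sketch of the delegation event model described above, assuming a simple frame with one button; the component names and label text are illustrative and are not taken from the project forms.

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;

// Minimal illustration of the delegation event model: a listener object is
// registered on a component and its actionPerformed() method is called
// when the click event occurs.
public class EventDemo {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Event handling demo");
        final JLabel status = new JLabel("Waiting for a click...");
        JButton train = new JButton("Train");

        // The ActionListener observes the button; on a click the event is
        // delegated to actionPerformed(), which holds the response code.
        train.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                status.setText("Train button clicked");
            }
        });

        frame.getContentPane().add(train, BorderLayout.NORTH);
        frame.getContentPane().add(status, BorderLayout.SOUTH);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }
}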
CHAPTER-2
SYSTEM ANALYSIS

2.1 Existing System:

Many face recognition techniques have been developed over the past few decades. One of the most successful and well-studied approaches to face recognition is the appearance-based method. When using appearance-based methods, an image of size n x m pixels is usually represented by a vector in an (n x m)-dimensional space. In practice, however, these (n x m)-dimensional spaces are too large to allow robust and fast face recognition. A common way to resolve this problem is to use dimensionality reduction techniques. Two of the most popular techniques for this purpose are:

2.1.1 Principal Component Analysis (PCA)
2.1.2 Linear Discriminant Analysis (LDA)

2.1.1 Principal Component Analysis (PCA):

The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which is all that is needed to describe the data economically.
This is the case when there is a strong correlation between the observed variables. PCA can be used for prediction, redundancy removal, feature extraction, data compression, and so on. Because PCA is a powerful technique for the linear domain, it suits applications with linear models, such as signal processing, image processing, system and control theory, and communications. The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D face image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of face images (vectors).

2.1.2 Linear Discriminant Analysis (LDA):

LDA is a supervised learning algorithm. LDA searches for the projection axes on which the data points of different classes are far from each other, while requiring data points of the same class to be close to each other. Unlike PCA, which encodes information in an orthogonal linear space, LDA encodes discriminating information in a linearly separable space using bases that are not necessarily orthogonal. It is generally believed that algorithms based on LDA are superior to those based on PCA. However, most of these algorithms consider only global data patterns during the recognition process, which does not yield an accurate recognition system.
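As a concrete illustration of the eigenspace projection described in Section 2.1.1, here is a minimal sketch that projects a mean-centred face vector onto k principal components. It assumes the mean face and the leading eigenvectors of the covariance matrix have already been computed (the eigen-decomposition itself would be delegated to a numerical routine); all names are illustrative assumptions, not the project's actual code.

// Sketch of the PCA (eigenspace) projection step. Assumes the mean face and
// the k leading eigenvectors of the covariance matrix are already available.
public class PcaProjection {

    /** Projects one n-dimensional face vector onto k principal components. */
    public static double[] project(double[] face, double[] meanFace, double[][] eigenvectors) {
        int k = eigenvectors.length;      // number of leading eigenvectors kept
        int n = face.length;              // original dimensionality (pixels)
        double[] coefficients = new double[k];
        for (int i = 0; i < k; i++) {
            double dot = 0.0;
            for (int j = 0; j < n; j++) {
                // subtract the mean, then take the dot product with eigenvector i
                dot += (face[j] - meanFace[j]) * eigenvectors[i][j];
            }
            coefficients[i] = dot;
        }
        return coefficients;              // compact k-dimensional representation
    }
}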
The drawbacks of the existing system are:

• Less accurate.
• Does not deal with the manifold structure.
• Does not deal with biometric characteristics.

2.2 Proposed System:

PCA and LDA aim to preserve the global structure. However, in many real-world applications, the local structure is more important. In this section, we describe Locality Preserving Projection (LPP), a new algorithm for learning a locality preserving subspace. The objective function of LPP is to minimize

    ∑ij (yi − yj)² Sij,   where yi = wT xi,

so that images that are close in the original image space (large weight Sij) remain close after projection. The manifold structure is modeled by a nearest-neighbor graph which preserves the local structure of the image space. A face subspace is obtained by Locality Preserving Projections (LPP). Each face image in the image space is mapped to a low-dimensional face subspace, which is characterized by a set of feature images, called Laplacianfaces. The face subspace preserves local structure and has more discriminating power than the PCA approach for classification purposes. We also provide a theoretical analysis to show that PCA, LDA, and LPP can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. In our
theoretical analysis, we show how PCA, LDA, and LPP arise from the same principle applied to different choices of this graph structure. It is worthwhile to highlight several aspects of the proposed approach here:

1. While the Eigenfaces method aims to preserve the global structure of the image space, and the Fisherfaces method aims to preserve the discriminating information, our Laplacianfaces method aims to preserve the local structure of the image space, which is what real-world applications mostly need.

2. An efficient subspace learning algorithm for face recognition should be able to discover the nonlinear manifold structure of the face space. Our proposed Laplacianfaces method explicitly considers the manifold structure, which is modeled by an adjacency graph that reflects the intrinsic structure of the face manifold.

3. LPP shares some properties with Locally Linear Embedding (LLE). LPP is linear, while LLE is nonlinear. Moreover, LPP is defined everywhere, while LLE is defined only on the training data points, and it is unclear how to evaluate the maps for new test points. In contrast, LPP may simply be applied to any new data point to locate it in the reduced space.

The algorithmic procedure of Laplacianfaces is formally stated below:
1. PCA projection. We project the image set into the PCA subspace by throwing away the smallest principal components. In our experiments, we kept 98 percent of the information in the sense of reconstruction error. For the sake of simplicity, we still use x to denote the images in the PCA subspace in the following steps. We denote by WPCA the transformation matrix of PCA.

2. Constructing the nearest-neighbor graph. Let G denote a graph with n nodes. The ith node corresponds to the face image xi. We put an edge between nodes i and j if xi and xj are "close," i.e., xi is among the k nearest neighbors of xj, or xj is among the k nearest neighbors of xi. The constructed nearest-neighbor graph is an approximation of the local manifold structure. Note that here we do not use the ε-neighborhood to construct the graph. This is simply because it is often difficult to choose an optimal ε in real-world applications, while the k-nearest-neighbor graph can be constructed more stably. The disadvantage is that the k-nearest-neighbor search increases the computational complexity of the algorithm. When computational complexity is a major concern, one can switch to the ε-neighborhood.

3. Choosing the weights. If nodes i and j are connected, put
    Sij = exp(−||xi − xj||² / t),

where t is a suitable constant. Otherwise, put Sij = 0. The weight matrix S of graph G models the face manifold structure by preserving local structure. The justification for this choice of weights can be traced to the Laplacian Eigenmaps method.

4. Eigenmap. Compute the eigenvectors and eigenvalues of the generalized eigenvector problem

    X L XT w = λ X D XT w,

where D is a diagonal matrix whose entries are the column (or row, since S is symmetric) sums of S, Dii = ∑j Sji, and L = D − S is the Laplacian matrix. The ith row of matrix X is xi. The eigenvalues are equal to or greater than zero because the matrices XLXT and XDXT are both symmetric and positive semidefinite. Thus, with w0, ..., wk−1 the resulting eigenvectors, the embedding is as follows:

    x → y = WT x,   W = [w0, w1, ..., wk−1],

where y is a k-dimensional vector and W is the transformation matrix. This linear mapping best preserves the manifold's estimated intrinsic geometry in a linear sense. The column vectors of W are the so-called Laplacianfaces. This principle is implemented as an unsupervised learning procedure with training and test data.
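To make steps 2-4 concrete, here is a minimal, illustrative Java sketch (not the project's actual code) that builds the symmetric k-nearest-neighbor graph, fills in the heat-kernel weights S, and forms the Laplacian L = D − S. Solving the generalized eigenvalue problem X L XT w = λ X D XT w is left to a numerical eigen-solver and is not shown; all class and method names are assumptions.

import java.util.Arrays;
import java.util.Comparator;

// Illustrative sketch of steps 2-4 of the Laplacianfaces procedure:
// k-nearest-neighbor graph, heat-kernel weights, and Laplacian L = D - S.
// x[i] is the ith face image, already projected into the PCA subspace.
public class LaplacianGraph {

    /** Step 2: symmetric k-NN adjacency (edge if i is among j's k nearest or vice versa). */
    public static boolean[][] neighborGraph(double[][] x, int k) {
        int n = x.length;
        boolean[][] adjacent = new boolean[n][n];
        for (int i = 0; i < n; i++) {
            final double[] dist = new double[n];
            Integer[] order = new Integer[n];
            for (int j = 0; j < n; j++) {
                dist[j] = squaredDistance(x[i], x[j]);
                order[j] = j;
            }
            Arrays.sort(order, new Comparator<Integer>() {
                public int compare(Integer a, Integer b) {
                    return Double.compare(dist[a], dist[b]);
                }
            });
            // order[0] is node i itself (distance zero); connect the next k nodes
            for (int m = 1; m <= k && m < n; m++) {
                adjacent[i][order[m]] = true;
                adjacent[order[m]][i] = true;
            }
        }
        return adjacent;
    }

    /** Step 3: heat-kernel weights Sij = exp(-||xi - xj||^2 / t) on connected pairs. */
    public static double[][] weights(double[][] x, boolean[][] adjacent, double t) {
        int n = x.length;
        double[][] s = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (adjacent[i][j]) {
                    s[i][j] = Math.exp(-squaredDistance(x[i], x[j]) / t);
                }
            }
        }
        return s;
    }

    /** Step 4 (partial): L = D - S, with Dii the column sums of the symmetric S. */
    public static double[][] laplacian(double[][] s) {
        int n = s.length;
        double[][] l = new double[n][n];
        for (int i = 0; i < n; i++) {
            double degree = 0.0;
            for (int j = 0; j < n; j++) {
                degree += s[j][i];
                l[i][j] = -s[i][j];
            }
            l[i][i] += degree;
        }
        return l;
    }

    private static double squaredDistance(double[] a, double[] b) {
        double sum = 0.0;
        for (int d = 0; d < a.length; d++) {
            double diff = a[d] - b[d];
            sum += diff * diff;
        }
        return sum;
    }
}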
The system must implement Principal Component Analysis to reduce each image to a dimension less than n and to compute the covariance of the data. The system uses an unsupervised learning algorithm, so it must be trained properly with relevant data sets. Based on this training, the input data is tested by the application and the result is displayed to the user.

2.3 System Requirement

Hardware specifications:
Processor     : Intel Processor IV
RAM           : 128 MB
Hard disk     : 20 GB
CD drive      : 40x Samsung
Floppy drive  : 1.44 MB
Monitor       : 15" Samtron color
Keyboard      : 108-key Mercury keyboard
Mouse         : Logitech mouse

Software specifications:
Operating system : Windows XP/2000
Language used    : J2SDK 1.4.0
2.4 System Analysis Methods

System analysis can be defined as a method of using resources and machines in the best possible manner and performing tasks to meet the information needs of an organization. It is also a management technique that helps in designing a new system or improving an existing one. The four basic elements of system analysis are:

• Output
• Input
• Files
• Process

These are the four bases of system analysis.

2.5 Feasibility Study

Feasibility is the study of whether or not the project is worth doing. The process that follows this determination is called a feasibility study. This study is carried out within the given time constraints and normally culminates in a written and oral
feasibility report. The feasibility study is categorized into the following types:

• Technical Analysis
• Economic Analysis
• Performance Analysis
• Control and Security Analysis
• Efficiency Analysis
• Service Analysis

2.5.1 Technical Analysis

This analysis is concerned with specifying software that will successfully satisfy the user requirements. The technical needs of a system include the ability to produce the outputs within a given time and to meet the required response time under certain conditions.

2.5.2 Economic Analysis

Economic analysis is the most frequently used technique for evaluating the effectiveness of a proposed system; it is also called cost/benefit analysis. It is used to determine the benefits and savings expected from a proposed system and to compare them with the costs. If the benefits outweigh the costs, the decision is taken to proceed to the design phase and implement the system.

2.5.3 Performance Analysis
Analysing the performance of a system is also very important. This analysis examines the performance of the system both before and after the proposed system is introduced. If the results are satisfactory from the company's side, the analysis moves on to the next phase. Performance analysis means examining program execution to pinpoint where bottlenecks or other performance problems, such as memory leaks, might occur. Once a problem is spotted, it can be rectified.

2.5.4 Efficiency Analysis

This analysis mainly deals with the efficiency of the system developed in this project. The resources required by the program to perform a particular function are analyzed in this phase. It also checks how efficiently the project runs on the system, in spite of any changes to that system. The efficiency should be analyzed in such a way that the user does not feel any difference in the way of working. Besides, it should be taken into consideration that the project should continue to run on the system for a long time.
CHAPTER-3
SYSTEM DESIGN

Design is concerned with identifying software components, specifying the relationships among them, specifying the software structure, and providing a blueprint for the documentation phase. Modularity is one of the desirable properties of large systems: the system is divided into several parts in such a manner that the interaction between the parts is minimal and clearly specified. The design explains the software components in detail, which helps the implementation of the system and guides further changes to satisfy future requirements.
3.1 Project Modules:

3.1.1 Read/Write Module:
This module provides the basic operations for loading input images and saving the resultant images produced by the algorithms. The image files are read, processed, and new images are written as the output (a minimal sketch of this read step is given at the end of Section 3.1).

3.1.2 Resizing Module:
Here, the faces are converted to equal size using a linear resizing algorithm so that they can be used for calculation and comparison. In this module, larger or smaller images are converted to a standard size.

3.1.3 Image Manipulation Module:
Here, the face recognition algorithm using Locality Preserving Projections (LPP) is developed for the various faces enrolled into the database.

3.1.4 Testing Module:
Here, the input images are resized and compared with the intermediate image to find the tested image, which is then compared with the Laplacianfaces to find the most accurate match.
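As noted in 3.1.1, each image must be loaded and turned into data the algorithms can use. The following is a minimal, illustrative sketch (not the project's own code, which works with a PGM helper class shown in Section 4.2) that reads an image with the standard ImageIO API and flattens it into a grayscale vector.

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

// Illustration of the read step: load a face image and flatten it into a
// grayscale vector so the later algorithms can treat it as a point in
// n-dimensional image space. Class and method names are assumptions.
public class ImageVector {

    public static double[] toGrayVector(File imageFile) throws IOException {
        BufferedImage img = ImageIO.read(imageFile);
        int w = img.getWidth();
        int h = img.getHeight();
        double[] v = new double[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                v[y * w + x] = (r + g + b) / 3.0;   // simple average as gray level
            }
        }
        return v;
    }
}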
FIGURE: Design flow diagram.

3.2 System Development
This system is developed to implement Principal Component Analysis as described in Section 2.1.1: the original n-dimensional image data is projected onto the low-dimensional subspace spanned by the leading eigenvectors of the covariance matrix.

Image manipulation: This module is designed to view all the faces that are considered in the training set.

1) Training module: Unsupervised learning is learning from observation and discovery. The system is supplied with objects, but no classes are defined, so it has to observe the examples and recognize patterns (i.e., class descriptions) by itself. This process requires a training data set. This system provides a training set of 17 faces, each with three different poses. An iterative process stores the required details in a two-dimensional face-template array (a sketch of this step is given below).

2) Test module: After the training process is over, the input face image is processed through the eigenface procedure, and the system can then report whether or not it recognizes the face.
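A minimal sketch of the training-set loading step described above, assuming the training images live in a train folder and reusing the hypothetical ImageVector helper from the earlier sketch; the folder layout and names are assumptions, not the project's actual code.

import java.io.File;
import java.io.FilenameFilter;
import java.io.IOException;

// Sketch of the training step: flatten every image in the train folder into a
// vector and store it as one row of a two-dimensional face-template array.
// Folder name, filter, and the ImageVector helper are illustrative assumptions.
public class TrainingSet {

    public static double[][] load(File trainFolder) throws IOException {
        // assumes the folder exists and contains the training faces
        File[] files = trainFolder.listFiles(new FilenameFilter() {
            public boolean accept(File dir, String name) {
                String lower = name.toLowerCase();
                return lower.endsWith(".jpg") || lower.endsWith(".jpeg");
            }
        });
        double[][] faceTemplate = new double[files.length][];
        for (int i = 0; i < files.length; i++) {
            faceTemplate[i] = ImageVector.toGrayVector(files[i]);
        }
        return faceTemplate;
    }

    public static void main(String[] args) throws IOException {
        double[][] templates = load(new File("train"));
        System.out.println("Loaded " + templates.length + " training faces");
    }
}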
CHAPTER-4
IMPLEMENTATION

Implementation includes all the activities that take place to convert from the old system to the new one. The new system may be totally new, replacing an existing system, or it may be a major modification to the system currently in use. This "Face Recognition" system is a new system. Implementation as a whole involves all the tasks required to successfully replace existing software or introduce new software that satisfies the requirements. The work can be described as retrieving faces from the database, processing them with the eigenface training method, executing the test cases, and finally displaying the result to the user. The test cases were performed in all aspects, and the system gave correct results in all cases.

4.1 Implementation Details:

4.1.1 Form Design

A form is a tool with a message; it is the physical carrier of data or information, and it can also constitute the authority for actions. In the form design, separate files are used for each module. The following forms are used in this project:

1) Main Form
This form contains the option for viewing faces from the database. The system retrieves the images stored in the train and test folders, which are available in the bin folder of the application.

2) View Database Form: This form retrieves the faces available in the train folder. It is provided purely for viewing by the user.

3) Recognition Form: This form provides the option of loading an input image from the test folder. The user then clicks the Train button, which makes the application train itself to gain knowledge, since it uses an unsupervised learning algorithm. Unsupervised learning is learning from observation and discovery: the system is supplied with objects, but no classes are defined, so it has to observe the examples and recognize patterns (i.e., class descriptions) by itself. This results in a set of class descriptions, one for each class discovered in the environment, much like cluster analysis in statistics. The user can then click the Test button to see the matching for the faces. The matched face is displayed in the place provided for it; in case of any difference, that information is displayed in the space provided on the form.
4.1.2 Input Design

Inaccurate input data is the most common cause of errors in data processing. Errors entered by data-entry operators can be controlled through input design. Input design is the process of converting user-originated inputs into a computer-based format. Input data are collected and organized into groups of similar data.

4.1.3 Menu Design

The menu in this application is organized in an MDI form that manages the viewing of image files from the folders. It also has options for loading an image as input, running the training method, and testing whether or not the face is recognized.

4.1.4 Database Design:

A database is a collection of related data, with the following properties:
i) The database reflects changes to the information.
ii) A database is a logically coherent collection of data with some inherent meaning.

This application takes its images from the default folders set for the application, namely the train and test folders. The image files use the .jpeg extension.

4.1.5 Code Design

o Face Enrollment - a new face can be added by the user into the facespace database.
o Face Verification - verifies a person's face against the database with reference to his/her claimed identity (see the sketch after this list).
o Face Recognition - compares a person's face with all the images in the database and chooses the closest match. Here, Principal Component Analysis is performed on the training data set, and the result is evaluated on the test data set.
o Face Retrieval - displays all the faces and their templates in the database.
o Statistics - stores a list of recognition accuracies for analyzing the FRR (False Rejection Rate) and FAR (False Acceptance Rate).
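A minimal, illustrative sketch of the verification and statistics ideas above: a claimed identity is accepted when the distance between projected templates falls below a threshold, and FRR/FAR are estimated by counting rejections of genuine attempts and acceptances of impostor attempts. The threshold, distance measure, and names are assumptions, not the project's actual code.

// Illustrative sketch of face verification with a distance threshold, plus
// simple counters for estimating FRR (genuine attempts rejected) and
// FAR (impostor attempts accepted). Names and threshold are assumptions.
public class Verification {

    private final double threshold;
    private int genuineAttempts, falseRejections;
    private int impostorAttempts, falseAcceptances;

    public Verification(double threshold) {
        this.threshold = threshold;
    }

    /** Accept the claim if the projected templates are close enough. */
    public boolean verify(double[] probe, double[] claimedTemplate) {
        return distance(probe, claimedTemplate) < threshold;
    }

    /** Record one verification attempt for the statistics module. */
    public void record(boolean accepted, boolean samePerson) {
        if (samePerson) {
            genuineAttempts++;
            if (!accepted) falseRejections++;
        } else {
            impostorAttempts++;
            if (accepted) falseAcceptances++;
        }
    }

    public double falseRejectionRate() {
        return genuineAttempts == 0 ? 0.0 : (double) falseRejections / genuineAttempts;
    }

    public double falseAcceptanceRate() {
        return impostorAttempts == 0 ? 0.0 : (double) falseAcceptances / impostorAttempts;
    }

    private static double distance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}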
4.2 Coding:

import java.lang.*;
import java.io.*;

public class PGM_ImageFilter {

    // fields used by the get/set methods and by resize(); these were implied
    // but not declared in the original listing
    private String inFilePath;
    private String outFilePath;
    private boolean printStatus = true;

    //constructor
    public PGM_ImageFilter() {
        inFilePath = "";
        outFilePath = "";
    }

    //get functions
    public String get_inFilePath() {
        return inFilePath;
    }

    public String get_outFilePath() {
        return outFilePath;
    }

    //set functions
    public void set_inFilePath(String tFilePath) {
        inFilePath = tFilePath;
    }

    public void set_outFilePath(String tFilePath) {
        outFilePath = tFilePath;
    }

    //methods
    public void resize(int wout, int hout) {
        // PGM is the project's helper class for reading and writing PGM
        // images; it is not shown in this listing
        PGM imgin = new PGM();
        PGM imgout = new PGM();
        if (printStatus == true) {
            System.out.print("\nResizing...");
        }
        int r, c, inval, outval;

        //read input image
        imgin.setFilePath(inFilePath);
        imgin.readImage();

        //set output-image header
        imgout.setFilePath(outFilePath);
        imgout.setType("P5");
        imgout.setComment("#resized image");
        imgout.setDimension(wout, hout);
        imgout.setMaxGray(imgin.getMaxGray());

        //resize algorithm (linear)
        double win, hin;
        int xi, ci, yi, ri;
        win = imgin.getCols();
        hin = imgin.getRows();
        for (r = 0; r < imgout.getRows(); r++) {
            for (c = 0; c < imgout.getCols(); c++) {
                xi = c;
                yi = r;
                // map each output pixel back to its nearest source pixel
                ci = (int) (xi * ((double) win / (double) wout));
                ri = (int) (yi * ((double) hin / (double) hout));
                inval = imgin.getPixel(ri, ci);
                outval = inval;
                imgout.setPixel(yi, xi, outval);
            }
        }
        if (printStatus == true) {
            System.out.println("done.");
        }

        //write output image
        imgout.writeImage();
    }
}
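A short usage note, assuming the PGM helper class is on the classpath: the filter is configured through its setters and then asked to resize to the standard face size. The file names and target size below are illustrative assumptions.

public class ResizeDemo {
    public static void main(String[] args) {
        PGM_ImageFilter filter = new PGM_ImageFilter();
        filter.set_inFilePath("face_in.pgm");    // illustrative input path
        filter.set_outFilePath("face_out.pgm");  // illustrative output path
        filter.resize(92, 112);                  // illustrative standard size
    }
}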
CHAPTER-5
SYSTEM TESTING

5.1 Software Testing

Software testing is the process of confirming the functionality and correctness of software by running it. Software testing is usually performed for one of two reasons:
i) Defect detection
ii) Reliability estimation

Software testing comprises two types of testing:
1) White Box Testing
2) Black Box Testing

1) White Box Testing

White box testing is concerned only with testing the software product itself; it cannot guarantee that the complete specification has been implemented. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty.
2) Black Box Testing

Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus, black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. Functional testing is a testing process that is black box in nature. It is aimed at examining the overall functionality of the product, usually includes testing of all the interfaces, and should therefore involve the clients in the process.

The key to software testing is trying to find the myriad failure modes, something that requires exhaustively testing the code on all possible inputs. For most programs this is computationally infeasible. Techniques that attempt to exercise as many of the syntactic features of the code as possible (within some set of resource constraints) are called white box testing techniques. Techniques that do not consider the code's structure when test cases are selected are called black box techniques. In order to fully test a software product, both black and white box testing are required.

The problem with applying software testing to defect detection is that testing can only suggest the presence of flaws, not their absence (unless the testing is exhaustive). The problem with applying software testing to reliability estimation is that the input distribution used for selecting test cases may be flawed. In both cases, the mechanism used to determine whether program output is correct is often impossible to develop. Obviously, the benefit of the entire software
testing process is highly dependent on many different pieces; if any of these parts is faulty, the entire process is compromised. Software is unlike other physical processes where inputs are received and outputs are produced: where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways; by contrast, software can fail in many bizarre ways, and detecting all of the different failure modes of software is generally infeasible.

The final stage of the testing process is system testing. This type of test involves the examination of the whole computer system: all the software components, all the hardware components, and any interfaces. The whole computer-based system is checked not only for validity but also against the objectives.

5.2 Efficiency of the Laplacian Algorithm

Now, consider a simple example of image variability. Imagine that a set of face images is generated while the human face rotates slowly. We can then say that this set of face images is intrinsically one-dimensional. Many recent works show that face images reside on a low-dimensional submanifold of the image space. Therefore, an effective subspace learning algorithm should be able to detect this nonlinear manifold structure. PCA and LDA effectively see only the Euclidean structure; thus, they fail to detect the intrinsic low dimensionality. With its neighborhood-preserving character, the Laplacianfaces method captures the intrinsic face manifold structure.
FIGURE:2 Two-dimensional linear embedding of face images by Laplacianfaces.

Figure 2 shows an example in which the face images of a person with various poses and expressions are mapped into a two-dimensional subspace. The size of each image in this data set is 20 x 28 pixels, with 256 gray levels per pixel; thus, each face image is represented by a point in the 560-dimensional ambient space. However, these images are believed to come from a submanifold with few degrees of freedom. The face images are mapped into a two-dimensional space with continuous change in pose and expression, and representative face images are shown in different parts of the space. The images divide into two parts: the left part includes the face images with an open mouth, and the right part includes the face images with a closed mouth. This happens because LPP tries to preserve local structure; specifically, it maps neighboring points in the image space to nearby points in the face subspace. The 10 testing samples can simply be located in the reduced representation space by means of the Laplacianfaces (the column vectors of the matrix W).
FIGURE:3 Distribution of the 10 testing samples in the reduced representation subspace.

As can be seen, these testing samples optimally find coordinates that reflect their intrinsic properties, i.e., pose and expression. This observation tells us that the Laplacianfaces are capable of capturing the intrinsic face manifold structure.

FIGURE:4 The eigenvalues of LPP and Laplacian Eigenmaps.

Figure 4 shows the eigenvalues computed by the two methods. As can be seen, the eigenvalues of LPP are consistently greater than those of Laplacian Eigenmaps.
5.2.1 Experimental Results

A face image can be represented as a point in image space. However, due to the unwanted variations resulting from changes in lighting, facial expression, and pose, the image space might not be an optimal space for visual representation. We can display the eigenvectors as images; these images may be called Laplacianfaces. Using the Yale face database as the training set, we present the first 10 Laplacianfaces in Figure 5, together with Eigenfaces and Fisherfaces. A face image can be mapped into the locality preserving subspace by using the Laplacianfaces.

FIGURE:5 (a) Eigenfaces, (b) Fisherfaces, and (c) Laplacianfaces calculated from the face images in the Yale database.

5.2.2 Face Recognition Using Laplacianfaces

In this section, we investigate the performance of our proposed Laplacianfaces method for face recognition. The system performance is compared with the Eigenfaces method and the Fisherfaces method.
In this study, three face databases were tested. The first is the PIE (pose, illumination, and expression) database, the second is the Yale database, and the third is the MSRA database. In short, the recognition process has three steps. First, we calculate the Laplacianfaces from the training set of face images; then the new face image to be identified is projected into the face subspace spanned by the Laplacianfaces; finally, the new face image is identified by a nearest-neighbor classifier.

FIGURE:6 The original face image and the cropped image.

5.2.3 Yale Database

The Yale face database was constructed at the Yale Center for Computational Vision and Control. It contains 165 grayscale images of 15 individuals. The images demonstrate variations in lighting condition (left-light, center-light, right-light), facial expression (normal, happy, sad, sleepy, surprised, and wink), and with/without glasses. A random subset of six images was taken for the training set, and the rest were taken for testing. The testing samples were then projected into the low-dimensional representation, and recognition was performed using a nearest-neighbor classifier. In general, the performance of the Eigenfaces method and the Laplacianfaces method varies with the number of dimensions. We show the best results obtained by Fisherfaces, Eigenfaces, and Laplacianfaces. The recognition results are shown in Table 1. It is found that the Laplacianfaces method significantly outperforms both the Eigenfaces and Fisherfaces methods.

TABLE:1 Performance comparison on the Yale database.

FIGURE:7 Plots of error rate versus dimensionality reduction.
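As an illustration of the nearest-neighbor classification step used in these experiments, the following sketch compares a projected test face against the projected training faces and returns the label of the closest one; it is a schematic reconstruction, not the project's actual classifier, and all names are assumptions.

// Illustrative nearest-neighbor classifier in the reduced (Laplacianface)
// subspace: the test face is assigned the label of the closest projected
// training face under Euclidean distance. Names are assumptions.
public class NearestNeighborClassifier {

    /**
     * @param trainProjections projected training faces, one per row
     * @param trainLabels      identity label of each training face
     * @param testProjection   projected test face
     * @return label of the nearest training face
     */
    public static String classify(double[][] trainProjections,
                                  String[] trainLabels,
                                  double[] testProjection) {
        int best = -1;
        double bestDistance = Double.MAX_VALUE;
        for (int i = 0; i < trainProjections.length; i++) {
            double d = squaredDistance(trainProjections[i], testProjection);
            if (d < bestDistance) {
                bestDistance = d;
                best = i;
            }
        }
        return trainLabels[best];
    }

    private static double squaredDistance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double diff = a[i] - b[i];
            sum += diff * diff;
        }
        return sum;
    }
}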
5.2.4 PIE Database

Figure 8 shows some of the faces with pose, illumination, and expression variations in the PIE database. Table 2 shows the recognition results. As can be seen, Fisherfaces performs comparably to our algorithm on this database, while Eigenfaces performs poorly. Comparing the error rates of Laplacianfaces, Fisherfaces, and Eigenfaces, the error rate of our Laplacianfaces method decreases quickly as the dimensionality of the face subspace increases.

FIGURE:8 Sample cropped face images of one individual from the PIE database. The original face images are taken under varying pose, illumination, and expression.

TABLE:2 Performance comparison on the PIE database.
5.2.5 MSRA Database

This database was collected at Microsoft Research Asia. Sixty-four to eighty face images were collected for each individual in each session, and all the faces are frontal. Figure 9 shows sample cropped face images from this database. In this test, one session was used for training and the other was used for testing.

FIGURE:9 Sample cropped face images of one individual from the MSRA database. The original face images are taken under varying pose, illumination, and expression.
Table 3 shows the recognition results: the Laplacianfaces method has a lower error rate than both Eigenfaces and Fisherfaces.

TABLE:3 Performance comparison on the MSRA database, with different numbers of training samples.
CHAPTER-6
CONCLUSION

Our system uses Locality Preserving Projections for face recognition, which eliminates the flaws of the existing system. The system reduces the faces to a lower-dimensional representation, and the LPP algorithm is then applied for recognition. The application has been developed and implemented successfully as described above, and it works correctly. The system provides a proper training data set and test inputs for recognition. The outcome is reported as the matched face image when a match is found, and as a text message when there is any difference.
REFERENCES

1. X. He and P. Niyogi, "Locality Preserving Projections," Proc. Conf. Advances in Neural Information Processing Systems, 2003.
2. A.U. Batur and M.H. Hayes, "Linear Subspace for Illumination Robust Face Recognition," Dec. 2001.
3. M. Belkin and P. Niyogi, "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering."
4. P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Trans. Pattern Analysis and Machine Intelligence, July 1997.
5. M. Belkin and P. Niyogi, "Using Manifold Structure for Partially Labeled Classification," 2002.
6. M. Brand, "Charting a Manifold," Proc. Conf. Advances in Neural Information Processing Systems, 2002.
7. F.R.K. Chung, "Spectral Graph Theory," Regional Conf. Series in Math., no. 92, 1997.
8. Y. Chang, C. Hu, and M. Turk, "Manifold of Facial Expression," Proc. IEEE Int'l Workshop on Analysis and Modeling of Faces and Gestures, Oct. 2003.
9. R. Gross, J. Shi, and J. Cohn, "Where to Go with Face Recognition," Proc. Third Workshop on Empirical Evaluation Methods in Computer Vision, Dec. 2001.
10. A.M. Martinez and A.C. Kak, "PCA versus LDA," IEEE Trans. Pattern Analysis and Machine Intelligence, Feb. 2001.