RECON OUTPOST
RESEARCH & ANALYSIS SYSTEMS
Derek Budde
Software Engineering, University Lusófona do Porto, Portugal
derekbudde@gmail.com
Supervised by Drª Daniela Cruz
Abstract. The Recon Outpost system is designed to provide tools for home security users and investigators who need to study their surrounding environment through real-time video data. The system can analyse and biometrically identify faces in live video, and provide real-time surveillance in adverse weather conditions.
Keywords: Infrared, Night Vision, Face Detection, Target Detection, Face Recognition, Arduino, Biometric, Identification, Surveillance Systems.
"Give a shelterless man tools and he will build a home. Inspire him, encourage him, give him vision, and he will build an empire."
Rick Beneteau
TABLE OF CONTENTS
GLOSSARY...................................................................................................................................................................................4
INTRODUCTION .........................................................................................................................................................................4
WHAT IS FACE RECOGNITION? ..........................................................................................................................5
DIFFERENT APPROACHES OF FACE RECOGNITION.........................................................................................................5
WHY SELECT PCA BASED EIGENFACES ALGORITHM FOR THE PROJECT? ...................................................................6
BASIC CONCEPTS ..................................................................................................................................................................6
OBJECTIVE ..................................................................................................................................................................................7
TARGET MARKET ..................................................................................................................................................................7
SWOT ANALYSIS OF OUR PRODUCT .............................................................................................................................9
TARGET PROFILE...............................................................................................................................................................9
PLANNING .............................................................................................................................................................................9
WBS - GANTT CHART .................................................................................................................................................... 10
RESOURCES PLAN.......................................................................................................................................................... 11
Low fidelity prototype sketch ..................................................................................................................................... 13
SYSTEM AND SOFTWARE DESIGN ....................................................................................................................... 14
USER REQUIREMENTS ...................................................................................................................................................... 14
USE CASE ............................................................................................................................................................................ 14
FACE DETECTION........................................................................................................................................................... 15
FACE RECOGNITION...................................................................................................................................... 15
CAMERA CONTROL ....................................................................................................................................................... 16
INFRARED VIDEO ........................................................................................................................................................... 17
GREEN FILTER ................................................................................................................................................................ 17
TRACE CONTOURS ........................................................................................................................................ 18
CLASS DIAGRAMS.............................................................................................................................................................. 19
SYSTEM ARCHITECTURE........................................................................................................................................................ 20
SOFTWARE ......................................................................................................................................................................... 20
HARDWARE ........................................................................................................................................ 23
DEPLOYMENT DIAGRAM ................................................................................................................................. 24
ENTITY RELATIONSHIP DIAGRAM................................................................................................................................... 25
TECHNOLOGY OPTIONS ................................................................................................................................... 25
IMPLEMENTATION................................................................................................................................................................. 26
CLASSES............................................................................................................................................................................... 26
ALGORITHMS AND CODE................................................................................................................................................. 26
FACE DETECTION........................................................................................................................................................... 29
FACE RECOGNITION...................................................................................................................................... 30
IR IMAGE......................................................................................................................................................................... 32
NIGHT IMAGE ................................................................................................................................................................ 32
TRACE IMAGE ................................................................................................................................................................ 33
LOGIN .............................................................................................................................................................................. 34
SENSOR CONTROLS....................................................................................................................................................... 35
RECON OUTPOST TOOL......................................................................................................................................... 36
TESTS ........................................................................................................................................................................................ 36
TESTING OBJECTIVE ...................................................................................................................................................... 36
TESTING TOOLS ............................................................................................................................................................. 37
TESTING .......................................................................................................................................................................... 37
FUTURE OPTIONS IMPLEMENTATIONS .............................................................................................................................. 38
CONCLUSION .......................................................................................................................................................................... 38
BIBLIOGRAPHY........................................................................................................................................................................ 38
List of Figures
Figure 1 - Face Recognition.....................................................................................................................................................5
Figure 2 -Photometric stereo image......................................................................................................................................6
Figure 3 - Geometric facial recognition. ...............................................................................................................................6
Figure 4 - Samsung 17 Quad CCTV Observation System....................................................................................................8
Figure 5 - TR-500 NotiFace Recognition CCTV Surveillance System (500 face version).............................................8
Figure 6 - Swot analysis............................................................................................................................................................9
Figure 7 – System Main Idea................................................................................................................................................ 10
Figure 8 - WBS ........................................................................................................................................................................ 10
Figure 9 - Main menu screen ............................................................................................................................................... 13
Figure 10 - Splash screen ...................................................................................................................................................... 13
Figure 11 - Login screen ........................................................................................................................................................ 13
Figure 12 - Detection screen ................................................................................................................................................ 13
Figure 13 - Recognition screen ............................................................................................................................................ 13
Figure 14 - Save data screen ................................................................................................................................................ 13
Figure 15 - Night vision screen ............................................................................................................................................ 13
Figure 16 - Use Case .............................................................................................................................................................. 14
Figure 17 - Class Diagram ..................................................................................................................................................... 19
Figure 18 - System ................................................................................................................................................................. 20
Figure 19 - Main System AD ................................................................................................................................................. 20
Figure 20 - Login process ...................................................................................................................................................... 21
Figure 21 - Face Detecting AD ............................................................................................................................................. 21
Figure 22 - Camera Control AD............................................................................................................................................ 22
Figure 23 - Face Recognition AD2 ....................................................................................................................................... 22
Figure 24 - Night vision AD ................................................................................................................................................... 23
Figure 25 - Arduino AD.......................................................................................................................................................... 23
Figure 26 - System and Arduino AD .................................................................................................................................... 24
Figure 27 - Deployment Diagram ........................................................................................................................................ 24
Figure 28 - All components of the system ......................................................................................................................... 24
Figure 29 - ER Chen´s notation (Peter Chen's 1976 paper). ........................................................................................... 25
Figure 30 - Research diagram .............................................................................................................................................. 28
Figure 31 - Histogram Equalization..................................................................................................................................... 29
Figure 32 - Sequence diagram for Face Detection ........................................................................................................... 30
Figure 33 - Photometric Normalization Techniques – Shahrin Azuan Nazeer and Marzuki Khalid ........................ 32
Figure 34 - Gaussian Pyramid Decomposition .................................................................................................................. 33
Figure 35 - Canny edge detector steps .............................................................................................................................. 34
GLOSSARY
Visual Studio C# 2008 - Visual Studio supports Visual C# with a full-featured code editor, compiler, project templates, designers, code wizards, a powerful and easy-to-use debugger, and other tools. The .NET Framework class library provides access to many operating system services and other useful, well-designed classes that speed up the development cycle significantly.
Emgu CV - A cross-platform .NET wrapper for the Intel OpenCV image-processing library, allowing OpenCV functions to be called from .NET-compatible languages such as C#, VB, VC++, IronPython, etc.
Arduino - The Arduino programming language is based on Wiring, an open-source programming framework for microcontrollers.
Eigenfaces - Refers to an appearance-based approach to face recognition that seeks to capture the variation in a collection of face images and use this information to encode and compare images of individual faces in a holistic (as opposed to a parts-based or feature-based) manner.
PCA (Principal Component Analysis) - Transforms each original image of the training set into a corresponding eigenface. An important feature of PCA is that one can reconstruct any original image from the training set by combining the eigenfaces.
Threshold - Minimum or maximum value (established for an attribute, characteristic, or parameter)
which serves as a benchmark for comparison or guidance and any breach of which may call for a
complete review of the situation or the redesign of a system.
Image Pyramid - An image pyramid is a collection of images, all arising from a single original image, that are successively downsampled until some desired stopping point is reached. There are two common kinds of image pyramids:
Gaussian pyramid: used to downsample images.
Laplacian pyramid: used to reconstruct an upsampled image from an image lower in the pyramid (with less resolution). (http://docs.opencv.org/doc/tutorials/imgproc/pyramids/pyramids.html)
INTRODUCTION
Humans have been using physical characteristics such as face, voice, etc. to recognize each other for
thousands of years. With new advances in technology, biometrics has become an emerging technology for
recognizing individuals using their biological traits. Now, biometrics is becoming part of day-to-day life, wherein a person is recognized by his/her personal biological characteristics. Our goal is to develop an inexpensive
security surveillance system, which will be able to detect and identify facial and body characteristics in
adverse weather conditions. There are many factors which influence these methods, such as lighting conditions, background noise, fog and rain.
Particular attention is given to face recognition. Face recognition refers to an automated or semi-automated process of matching facial images. Many techniques are available for face recognition; one of them is Principal Component Analysis (PCA). PCA is a way of identifying patterns in data and expressing the data in such a way as to highlight their similarities and differences.
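The pattern-finding idea behind PCA can be sketched concretely. The example below finds the direction of greatest variance for 2-D points; for eigenfaces the same computation runs on image vectors with one dimension per pixel, and the top eigenvectors are the eigenfaces. This is an illustrative sketch (the project itself relies on EmguCV's implementation), and the closed-form 2x2 eigensolution below only works in two dimensions.

```python
import math

# A tiny PCA sketch on 2-D points. For face images the same idea applies
# to very high-dimensional vectors (one dimension per pixel); 2-D keeps
# the eigenvalue computation closed-form.

def principal_axis(points):
    """Return the unit eigenvector of the largest eigenvalue of the
    2x2 covariance matrix of `points` (the direction of most variance)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Covariance entries (mean-centred second moments).
    a = sum((x - mx) ** 2 for x, _ in points) / n
    b = sum((x - mx) * (y - my) for x, y in points) / n
    c = sum((y - my) ** 2 for _, y in points) / n
    # Largest eigenvalue of [[a, b], [b, c]] via the quadratic formula.
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    # An eigenvector for lam (handle the axis-aligned b == 0 case).
    vx, vy = (lam - c, b) if b != 0 else (1.0, 0.0) if a >= c else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

# Points spread along the diagonal: the axis comes out near (0.707, 0.707),
# i.e. PCA has identified the diagonal pattern in the data.
axis = principal_axis([(0, 0), (1, 1), (2, 2), (3, 3.1)])
```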
WHAT IS FACE RECOGNITION?
Face recognition is the task of identifying an already detected object as a known or unknown face and, in more advanced cases, telling exactly whose face it is (Figure 1).
Often the problem of face recognition is confused with the problem of face detection.
Face Detection is to identify an object as a "face" and locate it in the input image.
Face Recognition on the other hand is to decide if the "face" is someone known, or unknown, using for this
purpose a database of faces in order to validate this input face.
Figure 1 - Face Recognition
DIFFERENT APPROACHES OF FACE RECOGNITION
There are two predominant approaches to the face recognition problem: geometric (feature based) and photometric (view based). As researcher interest in face recognition continued, many different algorithms were developed, three of which have been well studied in the face recognition literature.
Recognition algorithms can be divided into two main approaches:
1. Geometric: based on the geometrical relationships between facial landmarks, or in other words the spatial configuration of facial features. The main geometrical features of the face, such as the eyes, nose and mouth, are first located, and faces are then classified on the basis of various geometrical distances and angles between features (Figure 3).
2. Photometric stereo: used to recover the shape of an object from a number of images taken under different lighting conditions. The shape of the recovered object is defined by a gradient map, which is made up of an array of surface normals (Zhao and Chellappa, 2006) (Figure 2).
Popular recognition algorithms include:
1. Principal Component Analysis using Eigenfaces (PCA),
2. Linear Discriminant Analysis,
3. Elastic Bunch Graph Matching using the Fisherface algorithm,
4. The Hidden Markov model,
5. The neuronal motivated dynamic link matching.
Figure 2 -Photometric stereo image.
Figure 3 - Geometric facial recognition.
WHY SELECT PCA BASED EIGENFACES ALGORITHM FOR THE PROJECT?
The PCA-based Eigenfaces method is one of the simplest of the efficient face recognition algorithms, and it is supported by the EmguCV library, as is the Viola-Jones method for detection. The PCA-based Eigenfaces method is not 100% accurate: on average it achieves 96% with light variation, 85% with orientation variation, and 64% with size variation (Turk & Pentland 1991, p. 590). No face recognition algorithm is 100% accurate.
BASIC CONCEPTS
Viola-Jones - The Viola-Jones [3] object detection framework, proposed in 2001 by Paul Viola and Michael Jones, was the first object detection framework to provide competitive object detection rates in real time. Although it can be trained to detect a variety of object classes, it was motivated primarily by the problem of face detection. This algorithm is implemented in EmguCV as HaarDetectObjects().
Face detector uses a method that Paul Viola and Michael Jones published in 2001. Usually called simply
the Viola-Jones method, or even just Viola-Jones, this approach to detecting objects in images combines four
key concepts:
Simple rectangular features, called Haar features
An Integral Image for rapid feature detection
The AdaBoost machine-learning method
A cascaded classifier to combine many features efficiently
The features that Viola and Jones used are based on Haar wavelets. Haar wavelets are single
wavelength square waves (one high interval and one low interval). In two dimensions, a square wave is a pair
of adjacent rectangles - one light and one dark.
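The rectangular features are cheap to evaluate because of the Integral Image: after one pass over the image, the sum of any rectangle, and hence any Haar feature (a difference of rectangle sums), costs only four lookups. A sketch, assuming a grayscale image stored as a list of rows with hypothetical pixel values:

```python
# The integral image trick Viola-Jones relies on: after one pass over
# the image, the sum of any rectangle costs only four lookups.

def integral_image(img):
    """ii[r][c] = sum of img over rows < r and cols < c (one extra row/col)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom][left:right] from four corner lookups."""
    return ii[bottom][right] - ii[top][right] - ii[bottom][left] + ii[top][left]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
total = rect_sum(ii, 0, 0, 3, 3)        # 45, the whole image
# A two-rectangle Haar-like feature: left column sum minus right column sum.
feature = rect_sum(ii, 0, 0, 3, 1) - rect_sum(ii, 0, 2, 3, 3)
```

The same four-lookup cost holds at every scale, which is what makes evaluating thousands of features per detection window feasible in real time.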
Haar Cascade - Haar-like features (so called because they are computed in a similar way to the coefficients in Haar wavelet transforms) and a cascade of boosted tree classifiers used as a statistical model.
Canny Edge Detector - The Canny operator was designed to be an optimal edge detector (according to particular criteria). It takes a greyscale image as input and produces as output an image showing the positions of tracked intensity discontinuities. The Canny filter computes a derivative of a Gaussian function; it can also simulate non-classical receptive-field inhibition (surround suppression) and use it for object contour detection. The basic algorithm deployed for edge detection is that of J. Canny [1]. An additional surround-suppression step can be used to eliminate texture edges [2]; this step is useful when the detection of object contours and region boundaries is of specific interest.
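The gradient computation at the core of the Canny operator can be sketched as follows. This is only the derivative-and-threshold portion, using Sobel kernels for the derivative; the full Canny pipeline adds Gaussian smoothing first, then non-maximum suppression and hysteresis thresholding, all omitted here. The image and threshold values are hypothetical.

```python
import math

# Sobel derivative kernels approximating the image gradient.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img):
    """Sobel gradient magnitude for interior pixels of a grayscale image."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = sum(SOBEL_X[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(SOBEL_Y[i][j] * img[r - 1 + i][c - 1 + j]
                     for i in range(3) for j in range(3))
            mag[r][c] = math.hypot(gx, gy)
    return mag

# A vertical step edge between dark (0) and bright (255) columns:
img = [[0, 0, 255, 255]] * 4
mag = gradient_magnitude(img)
# Thresholding the magnitude marks the intensity discontinuity as an edge.
edges = [[1 if m > 100 else 0 for m in row] for row in mag]
```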
MCvTermCriteria - A class which represents the OpenCV structure for terminating iterative algorithms. It is composed of two numbers: the first is the maximum number of iterations, and the second is the demanded accuracy. For some algorithms it makes sense to iterate until the accuracy falls below a certain threshold. For the eigenfaces algorithm it is the number of iterations that matters, and it impacts the number of eigenfaces being created [6].
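The two-number termination rule that MCvTermCriteria encodes can be sketched generically in Python: stop after a maximum iteration count or once the change per step drops below the demanded accuracy, whichever comes first. Newton's method for a square root stands in for the iterative algorithm, and the function name is hypothetical.

```python
# A sketch of the MCvTermCriteria pattern: stop an iterative algorithm
# after max_iter iterations OR once the per-step change drops below the
# demanded accuracy epsilon, whichever comes first.

def iterate_with_term_criteria(step, x0, max_iter, epsilon):
    """Run x = step(x) until max_iter iterations or |change| < epsilon."""
    x = x0
    for i in range(max_iter):
        nxt = step(x)
        if abs(nxt - x) < epsilon:
            return nxt, i + 1           # stopped on accuracy
        x = nxt
    return x, max_iter                  # stopped on iteration count

root, iters = iterate_with_term_criteria(
    lambda x: 0.5 * (x + 2.0 / x),      # Newton step for sqrt(2)
    x0=1.0, max_iter=100, epsilon=1e-10)
```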
OBJECTIVE
The goal of this project is to create a prototype that reduces the impact of climatic factors and provides facial recognition and facial tracking for both daylight and night-vision surveillance of wide areas. It also implements pan and tilt support, giving the ability to rotate the cameras under software control.
The prototype is built to test the performance of current face tracking and face recognition technology in real-life conditions. The results will determine whether the prototype is suitable for commercialization and will set the parameters at which the prototype performs with maximum efficiency. If the field test results are acceptable for home users, the prototype will be adapted for military and law enforcement use.
For this purpose there are some basic prerequisites:
- The prototype must be able to power up and run on a 12 V DC car battery.
- The prototype must be portable.
- The software must have a direct and objective interface for inexperienced users.
The project is divided into modules, where each module provides one different feature. With this approach each individual module can be developed and updated separately from the main module, and the client also has the ability to choose which module best suits his needs.
TARGET MARKET
Throughout the world, there is an increased need for video surveillance systems to secure our schools, businesses, hospitals, ports, homes and other critical environments. According to a new report from market research firm IMS Research, the global video surveillance market is expected to grow from $12.6 billion to $[…] billion. That is an increase of more than 80 percent in the next five years and a sure sign of even more growth to come. (http://www.imsresearch.com/).
The industry of CCTV video surveillance sometimes does not provide an adequate system.
The most common problems are:
- Video surveillance installation cost (Figure 4)
- Most systems are fixed and can't be transported to other locations (Figure 4)
- Infrared light that is too bright or too low can't be adjusted
- Few cameras have rotation, or it is too expensive for home users
- Facial recognition software price (Figure 5)
- Night rain or snow interferes with infrared illumination
Figure 4 - Samsung 17 Quad CCTV Observation System
Figure 5 - - TR-500 NotiFace Recognition CCTV Surveillance System (500 face version)
The challenges remain in addressing these issues in one product, ready to operate in most scenarios:
- Portable and integrated system (plug and play): guarantees that any person, anywhere, or in a mobile vehicle can start using the product.
- The special lens and software filters provide a clear infrared picture regardless of weather conditions.
- Features never seen in other video surveillance systems: the trace image provides real-time analysis of moving objects or people.
- Simple touch-screen interface.
- No need for separate hardware (e.g. a joystick) to control camera rotation.
- System software modules can be purchased individually.
SWOT ANALYSIS OF OUR PRODUCT
Strengths:
- Low-cost hardware
- Free updates
- Video trace feature
- Camera rotation control
- Adjustable IR intensity
- Portable system
Weaknesses:
- Home-made product
- No budget
- No video recording
- No motion detection
- No Internet image streaming
Opportunities:
- Market increase of 80 percent in the next five years
- Competitor products do not meet customers' demands or offer the features of our product
Threats:
- Low public appreciation
- No global or local presence in the market
- Competitive advantage of other companies
Figure 6 - SWOT analysis
TARGET PROFILE
Our customers can be divided into two groups: residential users and commercial owners.
Residential User:
- Gender: Male/Female
- Age: 21-60
- Education: Basic (read and write)
- Location: any
- IT knowledge: low experience
- Profession: any
- Social class: high and middle class
- Main user: head of household
- Secondary users: dependent members
Commercial Owner:
- Gender: Male/Female
- Age: 21-60
- Education: Basic (read and write)
- Location: any
- IT knowledge: low experience
- Profession: any
- Social class: low and middle business
- Main users: owner, security personnel, private investigator
- Secondary users: staff members
Motivation: prevent theft, monitor business, outdoor surveillance, check on family, ID employees.
PLANNING
The project schedule identifies and organizes project tasks into a sequence of events that form a project management plan. The process of building this schedule enabled us to identify risk points and develop the proper linkage of events; it also assisted in resource planning and allowed us to establish milestones for the development of the project.
Our concept map represents the ideas and information that the system must support: Recon Outpost can rotate the cameras, adjust the IR brightness, see images in adverse weather through night vision (which contains a green filter image and a trace filter), and perform face detection and face recognition (which contains ID faces and save faces).
Figure 7 – System Main Idea
WBS - GANTT CHART
Figure 8 - WBS
Figure 8 - Gantt chart timeline
Figure 9 - Gantt chart table information
RESOURCES PLAN
Types of labor required for the project: Programmer, Robotics/Electronics and Project manager
Roles and key responsibilities for each labor type:
- Programmer: software development
- Robotics/Electronics: hardware development
- Project manager: planning, execution and closing of the project
Items of equipment to be used and their purposes:
Main equipment:
- PC for software development
- Webcams for image acquisition
- Arduino and electronics for robotics
Software tools:
- Visual Studio C# 2008
- EmguCV
- Adobe Photoshop CS3
Complete list of equipment: Attachment 29 - Equipment list
Phase 1
Project documentation:
- Initial development idea
- Requirement analysis
Project schedule:
- Project tasks (WBS, Gantt)
- Available resources (materials, tools, etc.)
- Budget
Project viability:
- Research and small-scale experiments
- Blueprints, interface sketches, etc.
Phase 2
Programming tools (C#, EmguCV):
Hardware preparation:
- Disassembling, modifying and researching
Software preparation:
- Researching EmguCV implementation, parameters, routines
- Preparing the graphical interface and adapting it for the touch screen
Phase 3
Preparing the O.S.:
- Using Windows restrictions, tweaking the registry
Programming code:
- Main structure
- Methods to use EmguCV
- Using Windows libraries
- Adapting controls to the interface
- Code optimization
- Hardware code
- Preliminary tests
Hardware assembling:
- Soldering, wiring and adapting
Phase 4
Field testing:
- Adjusting software parameters
- Fixing hardware problems
Project documentation:
- Ending the project
LOW FIDELITY PROTOTYPE SKETCH
Figure 9 - Main menu screen
Figure 10 - Splash screen
Figure 11 - Login screen
Figure 12 - Detection screen
Figure 13 - Recognition screen
Figure 14 - Save data screen
Figure 15 - Night vision screen
SYSTEM AND SOFTWARE DESIGN
The final system will provide the user with new tools for detecting and recognizing faces in a real-time video stream, and will also provide real-time video in adverse weather conditions. The user will have the ability to rotate the camera position, providing a wider range of view. All these features have the purpose of updating older video surveillance systems. To accomplish this mission we consulted an expert, Americo Santos (surveillance security expert); the following requirements are the result of our interview and research.
USER REQUIREMENTS
1. Functional requirements
a. The system must be able to identify human faces in live video.
b. The system must be able to take an image as input, search for a matching face in a folder, and then show the results.
c. The results should be viewed by showing the name of the face matching the input to the most similar face in the folder.
d. The user should be able to choose (click) to change the camera position.
e. The system must be able to draw contours of moving objects in a live video.
f. The system must be able to display a live infrared video.
2. Non-functional requirements
a. The user should be able to adjust (click) the infrared intensity.
b. The user should be able to apply a green spectrum filter (click) in a live video.
c. Critical errors and information shall be shown in a textbox that auto-scrolls when the space of the box is not enough.
d. The face should be localized by detecting inner and outer boundaries; the background must be ignored.
e. The user should be able to save detected faces and names for future comparison.
USE CASE
Figure 16 - Use Case
Actors Description:
Users: Authorizes the system to analyse a video image and expects the system to display the results.
Use Cases and Descriptions:
Request Face Detection: The user requests the system to find a human face in video.
Request Face Recognition: The user requests the system to match a detected face.
Perform Camera Control: Allows the user to rotate the camera up, down, left and right.
Request Infrared Video: The user requests to view the stream from the infrared camera.
Request Green Filter: The user requests to apply the green filter on the infrared stream.
Request Trace Contours: The user requests to apply the trace filter on the infrared stream.
FACE DETECTION
Use Case Description
Case No.: 1
Name: Request Face Detection
Actor: User
Description: The user requests the system to find a human face in video.
Pre-condition: System must be receiving the webcam stream.
Post-condition: System displays highlighted face(s).
Flow of Events:
1. User clicks button to start detecting faces.
2. System starts webcam stream.
3. System loads algorithm (Haar) to detect faces in video.
4. System highlights detected faces with a rectangle.
5. System returns video image with highlighted face(s).
Alternative Flows:
2.a Webcam disconnected.
5.a No face(s) have been detected in video.
Entry Conditions: User has correct authentication to the system and the webcam is functional.
Exit Conditions: User clicks back button.
Sequence Diagram: Attachment 3 - Face Detection SD
Activity Diagram: Attachment 4 - Face Detection AD
FACE RECOGNITION
Use Case Description
Case No.: 2
Name: Request Face Recognition
Actor: User
Description: The user requests the system to match a detected face.
Pre-condition: System must have detected face(s).
Post-condition: System displays name(s) of face(s).
Flow of Events:
1. User clicks button to recognize a highlighted face.
2. System loads algorithm (Haar) to detect face in image.
3. System highlights detected faces with a rectangle.
4. System loads algorithm (PCA) to match a stored face.
5. System returns name(s) of the most similar face matched.
Alternative Flows:
2.a No face(s) have been detected in image.
5.a Name doesn't match face(s).
Entry Conditions: Image was previously captured by the system.
Exit Conditions: User clicks back button.
Sequence Diagram: Attachment 5 - Face Recognition SD
Activity Diagram: Attachment 6 - Face Recognition AD
CAMERA CONTROL
Use Case Description
Case No.: 3
Name: Perform Camera Control
Actor: User
Description: Allows the user to rotate the camera up, down, left and right.
Pre-condition: Arduino interface must be connected.
Post-condition: Display message of the new position.
Flow of Events:
1. User clicks button to rotate camera position.
2. System connects to the Arduino by serial port.
3. System sends new position values to the Arduino.
4. Arduino rotates the camera to the new position.
5. System returns the new position.
Alternative Flows:
2.a System can't open serial port.
Entry Conditions: Arduino must be connected and serial port available.
Exit Conditions:
Sequence Diagram: Attachment 7 - Camera Control SD
Activity Diagram: Attachment 8 - Camera Control AD
INFRARED VIDEO
Use Case Description
Case No.: 4
Name: Request Infrared Video
Actor: User
Description: The user requests to view the stream from the infrared camera.
Pre-condition: System must be receiving a video stream from the webcam.
Post-condition: System displays infrared image.
Flow of Events:
1. User clicks button to display infrared video.
2. System checks camera connection.
3. System starts displaying the video stream.
Alternative Flows:
2.a Doesn't receive image.
Entry Conditions: Camera must be connected.
Exit Conditions: User clicks back button.
Sequence Diagram: Attachment 9 - Infrared SD
Activity Diagram: Attachment 10 - Infrared AD
GREEN FILTER
Use Case Description
Case No.: 5
Name: Request Green Filter
Actor: User
Description: The user requests to apply the green filter on the infrared stream.
Pre-condition: System must be receiving a video stream from the webcam.
Post-condition: System displays the infrared image in the green spectrum.
Flow of Events:
1. User clicks button to display green spectrum.
2. System checks camera connection.
3. System copies the streaming video frame to memory.
4. System loops over the pixels and removes the red and blue colours.
5. System shows the live video stream in the green spectrum.
Alternative Flows:
2.a Doesn't receive image.
Entry Conditions:
Camera must be connected.
Exit Conditions: User clicks back button.
Sequence Diagram: Attachment 11 - Green Filter SD
Activity Diagram: Attachment 12 - Green Filter AD
TRACE CONTOURS
Use Case Description
Case No.: 6
Name: Request Trace Contours
Actor: User
Description: The user requests to apply the trace filter on the infrared stream.
Pre-condition: System must be receiving a video stream from the webcam.
Post-condition: System displays contours of moving objects in the stream.
Flow of Events:
1. User clicks button to display trace contours.
2. System checks camera connection.
3. System copies the streaming video frame to memory.
4. System applies the Image Pyramid algorithm to the image.
5. System applies the Canny edge detector algorithm to the image.
6. System shows white contours of moving objects.
Alternative Flows:
2.a Doesn't receive image.
Entry Conditions: Camera must be connected.
Exit Conditions: User clicks back button.
Sequence Diagram: Attachment 13 - Trace Contours SD
Activity Diagram: Attachment 14 - Trace Contours AD
CLASS DIAGRAMS
Figure 17 - Class Diagram
Class Diagram: Attachment 15 - Class Diagram
SYSTEM ARCHITECTURE
The system architecture is divided into two parts: software and hardware.
SOFTWARE
The figure below shows the main structure of the system overview, which is divided into three parts.
Figure 18 - System
1.
Main System
The main system validates the user login, and provides an interface for the other modules.
Figure 19 - Main System AD
The authentication stage is presented before the main interface.
Figure 20 - Login process
2.
Module Tracking
This stage consists of detecting, recognizing and adding a face to the system. We will start with the face detection main process and mount up the components to full scale.
Figure 21 - Face Detecting AD
All algorithms used are explained in the implementation chapter.
Figure 22 - Camera Control AD
Below is the main diagram with all stages.
Figure 23 - Face Recognition AD2
3.
Module Night Vision
Module responsible for applying filters to improve the quality of the infrared-illuminated video.
Figure 24 - Night vision AD
The following attachments give a better representation of the system's internal dependency structure.
Attachment 30 - Dependency Graphs Main
Attachment 31- Dependency Graphs MD1
Attachment 32 - Dependency Graphs MD2
HARDWARE
In this section we will represent the hardware interaction. Please see the Deployment Diagram. We start with the Arduino internals and mount up the components to full scale.
Figure 25 - Arduino AD
Now we represent the full cycle of interaction between the Arduino and C#.
Note: webcam image streams are not represented here; we assume that handling the USB controllers is the O.S.'s responsibility.
Figure 26 - System and Arduino AD
DEPLOYMENT DIAGRAM
Figure 27 - Deployment Diagram
Figure 28 - All components of the system
Hardware schematics and assembly can be viewed in the Hardware attachment.
ENTITY RELATIONSHIP DIAGRAM
Our system does not use a database to store data, because:
- We don't have enormous data sets.
- The application will never use the full features of database software.
- Power loss can corrupt a database, due to the nature of the prototype.
- Additional cost and time associated with development and (probably) restoring a corrupted database.
Instead we use plain text files in a file system:
- Very simple to create and edit.
- Easy for users to manipulate with simple tools (i.e. text editors, grep, etc.).
- Efficient storage of binary documents.
- K.I.S.S: Keep It Small and Simple.
Figure 29 - ER Chen´s notation (Peter Chen's 1976 paper).
All images (faces) are stored as simple files (.bmp) in a folder (TrainedFaces). Each image name has a number that corresponds to a name positioned in the file, separated by the delimiter. NumLabels represents the total number of names and images in the file and in the folder.
A future implementation of a database such as SQLite can be developed together with a backup system or the use of a UPS.
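As a hedged illustration of the label-file layout described above (the '%' delimiter and leading count are taken from the recognition code later in this chapter; the function and sample names are ours, not the project's), a small parser might look like:

```python
# Hypothetical sketch of reading TrainedLabels.txt: the first '%'-separated
# field is the count (NumLabels); the following fields hold one name per
# stored .bmp face in the TrainedFaces folder.
def parse_labels(text):
    parts = text.split('%')
    num_labels = int(parts[0])
    return parts[1:num_labels + 1]

print(parse_labels("3%ana%rui%joao"))   # ['ana', 'rui', 'joao']
```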
TECHNOLOGY OPTIONS
The project is developed on the Microsoft Windows O.S. and Arduino electronics. Windows provides the capability to develop a graphical interface and easy integration with the external hardware. The Arduino electronics provide the ability to control the hardware.
The project uses the Visual Studio C# language to develop the software, together with the Arduino language. The combination of these two languages provides a stable software interface and communication with the hardware.
EmguCV simplifies the implementation of the algorithms in C# code and provides a set of tools for development.
IMPLEMENTATION
CLASSES
Classes generated from C# code - Attachment 1 - S.C. MD1
Attachment 2 - S.C. MD2
Attachment 3 - S.C. MD3
ALGORITHMS AND CODE
Algorithms - Extended research was carried out to select the algorithms best suited to the development of the project. EmguCV was selected because it provides the tools for their implementation, documentation and use.
EmguCV internal architecture can be viewed in: Attachment 33 - EmguCV Architecture.
Software – In this section we point out the parts that were researched and studied to be implemented, adjusted and optimized in the real-world conditions of our project.
Splash Screen
Creation of input graph and adaptation to the prototype screen.
Adaptation and use of Windows libraries to hide the Windows taskbar (based on research).
Creation of the activation time window.
Login Form
Creation of the graphic adaptation and routines for buttons and functions of the virtual keypad input.
Creation of the routines for the algorithm (reading, writing and implementation of the calculation).
Mathematical calculation (based on research).
Main Form
Creation of the interaction process between screens (Splash Screen, Login, and Main Form).
Adaptation of the parameter (WM_VSCROLL) Windows libraries (based on research).
Creation of the function that checks other modules installed in the system.
Adaptation of the function to shutdown computer. Windows command (based on research).
Tracking Mode - Face Detection
Creation of the control functions for the position and rotation of the camera through the serial port, and display of the results.
Creation of the control function to check the hardware connection (PC-Arduino).
Implementation of the process of capturing images from the camera (based on EmguCV functionality and research into optimization and parameter use).
Implementation of the face detection algorithm (based on EmguCV functionality and research into optimization and parameter use).
Creation of the graphics video overlay (using the functionality provided by EmguCV).
Creation of the passing of the selected image between screens.
Tracking Mode - Face Recognition
Implementation of the face recognition algorithm (based on EmguCV functionality and research into optimization and parameter use).
Creation of the graphics video overlay (using the functionality provided by EmguCV).
Creation of the passing of the selected image between screens.
Tracking Mode – Database
Creation of the function for the virtual keyboard and the graphical adaptation for the interface form.
Implementation of the process of saving data (based on EmguCV functionality and research into optimization and parameter use).
Night Mode – Night Vision
Implementation of the process of capturing images from the camera (based on EmguCV functionality and research into optimization and parameter use).
Implementation of the process of removing pixels (based on EmguCV functionality and research into optimization).
Implementation of the process of image contours (based on EmguCV functionality and research into optimization and parameter use).
Creation of the control functions for the brightness intensity of the infrared.
Arduino Code
Creation of the code interaction between hardware and software (based on the functionality and use of the Arduino library and the library for IC 74HC595N).
Artwork
All creation
Hardware – After selecting the hardware components and researching their characteristics, it was concluded that the following changes should be carried out:
Modifying the webcam hardware (based on research and previous knowledge).
Creation of a printed circuit board to expand the input ports of the Arduino (based on research and previous knowledge).
Creation of a system for adjusting the intensity of the infrared LEDs (based on research and use of the IC 74HC595N library).
Rotation control system for the motors (based on the Arduino library and previous knowledge).
Creation and modification (metric dimensions, assembly, wiring and prototype enclosures) based on personal electronics skills.
Figure 30 - Research diagram
FACE DETECTION
The system is developed with EmguCV, which provides the libraries for the implementation of the face detection and recognition algorithms.
The face detector uses the responses to a series of simple filters to classify regions of an image as either a face
or not a face. The filters are called Haar filters and are calculated by taking the sum of pixels within a number
of rectangles, multiplying each sum by a weight and adding the results.
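To make the weighted-rectangle idea concrete, here is a toy sketch (our illustration, not the project's code; real detectors evaluate thousands of such features efficiently via integral images) of a single Haar-like feature on a tiny grayscale image:

```python
# Toy Haar-like feature: the pixel sum over one rectangle minus the sum over
# a neighbouring rectangle, i.e. rectangle sums weighted +1 and -1 and added.
def rect_sum(img, r0, c0, r1, c1):
    return sum(img[r][c] for r in range(r0, r1) for c in range(c0, c1))

def haar_feature(img):
    left = rect_sum(img, 0, 0, 2, 2)    # bright region
    right = rect_sum(img, 0, 2, 2, 4)   # dark region
    return left - right

img = [[9, 9, 1, 1],
       [9, 9, 1, 1]]
print(haar_feature(img))   # 32
```

A large response like this is what signals a strong intensity edge of the kind faces produce around eyes and cheeks.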
In simple steps, the system loads the Haar file (haarcascade_frontalface_default.xml) to detect frontal faces in the image.
This file contains the metrics for detecting faces. For these metrics please see (Cascade Classifier Training – www.docs.opencv.org/doc/user_guide/ug_traincascade.html).
After loading the metrics for the Haar filter each image (frame) acquired by the webcam is scanned by the
parameters set in the code.
Convert the frame to a grayscale image and perform Histogram Equalization.
Histogram Equalization: equalization implies mapping one distribution (the given histogram) to another distribution (a wider and more uniform distribution of intensity values) so that the intensity values are spread over the whole range (http://docs.opencv.org).
Figure 31 - Histogram Equalization
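A minimal pure-Python sketch of the equalization mapping (illustrative only; in the project EmguCV's _EqualizeHist performs this internally on each frame):

```python
# Histogram equalization on a flat list of grayscale values: build the
# histogram, accumulate it into a CDF, then remap each value so the
# intensities spread over the whole 0..levels-1 range.
def equalize(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:                      # flat image: nothing to spread
        return pixels[:]
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

print(equalize([100, 100, 101, 102]))   # [0, 0, 128, 255]
```

Note how a narrow cluster of intensities (100..102) is stretched across the full range, which is what improves contrast before detection.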
And for the detection:
Scales Increase Rate - specifies how quickly EmguCV should increase the scale for face detections with each pass it makes over an image.
Minimum Neighbors Threshold - sets the cutoff level for discarding or keeping rectangle groups as face or not, based on how many raw detections are in the group.
Canny Pruning Flag - skips image regions that are unlikely to contain a face, reducing computational overhead
and possibly eliminating some false detections.
Minimum Detection Scale - the size of the smallest face to search for in an image. The default scale is 25x25.
Code:
haar = new HaarCascade(@"haarcascade_frontalface_default.xml");
_imgProcessado = _imgOriginal.Convert<Gray, byte>();
_imgProcessado._EqualizeHist();
var faces = _imgProcessado.DetectHaarCascade(
    haar,
    _scaleFactor,
    _scaleDistanc,
    HAAR_DETECTION_TYPE.DO_CANNY_PRUNING,
    new Size(_imgProcessado.Width / 8, _imgProcessado.Height / 8))[0];
Figure 32 - Sequence diagram for Face Detection
FACE RECOGNITION
For face recognition, EmguCV uses a series of algorithms and mathematical calculations combined.
This process is implemented by the PCA algorithm; for more details please see:
(www.ehu.es/ccwintco/uploads/e/eb/PFC-IonMarques.pdf).
The first step is to load the trained faces (a folder containing a set of images of human faces) and the file containing the name for each of these faces. The face extracted by the face detector is compared (projected) against the set of Eigenfaces. This means the target face features are compared (by distance) with the features of the Eigenfaces, returning the most similar face; this returned face corresponds to a name in the file (TrainedLabels.txt).
To achieve this, the MCvTermCriteria (EmguCV parameter) must receive the total sum of trained faces and the maxIteration (MCvTermCriteria parameter); this parameter determines the number of Eigenfaces to be created, please see (www.face-rec.org/algorithms/PCA/jcn.pdf).
The EigenObjectRecognizer (EmguCV) object recognizer, using PCA (Principal Component Analysis), will be set to receive:
- The array of trained faces and the array of names.
- The threshold (EigenObjectRecognizer parameter) that compares the facial feature distance between the target image and the set of Eigenfaces: the smaller the value, the more likely an examined image will be treated as an unrecognized face; large values will return the closest match to one of the images in the Eigenface set.
- The MCvTermCriteria parameters defined above.
This will return the corresponding name of the most similar face match. Our threshold is set to always return the most similar face of the Eigenface set, regardless of the (distance between) facial features. This ensures that even if the target face is affected by some other conditions, the algorithm will return the most similar face in the folder.
Code:
string _LoadFaces;
string _NomesInfo = File.ReadAllText(Application.StartupPath +
    "/TrainedFaces/TrainedLabels.txt");
string[] _listNamesTxt = _NomesInfo.Split('%');
int _numLabels = Convert.ToInt16(_listNamesTxt[0]);
_numbTrainFaces = _numLabels;
for (int _traindFaces = 1; _traindFaces < _numLabels + 1; _traindFaces++)
{
    _trainingImages.Add(new Image<Gray, byte>(Application.StartupPath +
        "/TrainedFaces/" + _LoadFaces));
    _labels.Add(_listNamesTxt[_traindFaces]);
…
Bitmap default_ig = new Bitmap(face_PICbox.Image);
_gray = new Image<Gray, byte>(new Bitmap(default_ig));
_result = _gray.Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
MCvTermCriteria termCrit = new MCvTermCriteria(_numbTrainFaces, 0.001);
EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
    _trainingImages.ToArray(),
    _labels.ToArray(),
    500000,
    ref termCrit);
_name = recognizer.Recognize(_result).Label;
Code Sequence Diagram: Attachment 17 -Face recognition code SD
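The matching step can be pictured as a nearest-neighbour search in feature space. This hedged sketch (plain Python, not EmguCV's internals; the gallery vectors and names are made up) returns the label with the smallest Euclidean distance, mirroring the "always return the most similar face" behaviour described above:

```python
import math

# gallery maps each stored name to its (already projected) feature vector.
def most_similar(target, gallery):
    return min(gallery, key=lambda name: math.dist(target, gallery[name]))

gallery = {"alice": [1.0, 0.0], "bob": [0.0, 1.0]}
print(most_similar([0.9, 0.2], gallery))   # alice
```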
Graphical representation of the process:
Figure 33 - Photometric Normalization Techniques – Shahrin Azuan Nazeer and Marzuki Khalid
IR IMAGE
An infrared (IR) light emitting diode (LED) is an electronic method of creating infrared light. This makes
an IR LED illuminator a method of lighting up an area using infrared light. Infrared is outside the visual range of
the human eye. On the other hand, if the area is monitored by a device capable of seeing into the IR range,
then the area may be brightly lit to that device alone.
(http://www.wisegeek.com/what-is-an-ir-led-illuminator.htm) see (webcam modification).
The code simply captures images from the second webcam and displays them for the user.
Code:
capWebCam = new Capture(0);
capWebCam.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, 240);
capWebCam.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, 320);
…
imgOriginal = capWebCam.QueryFrame();
vdbox_IMGBOX.Image = imgOriginal;
Code Sequence Diagram: Attachment 22 - S.D. IR
NIGHT IMAGE
Now that our webcam is capturing the near infrared spectrum, we can apply filters to the streaming video. The filter (Night Mode) removes the red and blue pixels from the image, displaying only the green pixels. This is because the human eye can distinguish more shades of green than of any other colour.
(http://www.physicscentral.com/explore/action/infraredlight.cfm). This is a simple loop that iterates over the array of pixels (the image), removing them from the video. EmguCV provides direct access to the image data.
Code:
vdbox_IMGBOX.Image = imgOriginal;
imgCapture = imgOriginal.Copy();
Suppress(2);
vdbox_IMGBOX.Image = imgCapture;
Suppress(0);
vdbox_IMGBOX.Image = imgCapture;

private void Suppress(int spectrum){
    for (int i = 0; i < imgOriginal.Height; i++){
        for (int j = 0; j < imgOriginal.Width; j++){
            imgCapture.Data[i, j, spectrum] = 0;}
    }
}
Code Sequence Diagram: Attachment 21 - S.D. Night Mode
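The Suppress routine zeroes one colour channel in every pixel. The same idea in a hedged pure-Python sketch (BGR channel order assumed, matching how the C# code passes 2 for red and 0 for blue):

```python
# Zero one channel (0=blue, 1=green, 2=red) in every pixel of a tiny frame.
def suppress(frame, channel):
    for row in frame:
        for pixel in row:
            pixel[channel] = 0
    return frame

frame = [[[10, 20, 30], [40, 50, 60]]]   # one row of two BGR pixels
suppress(frame, 2)   # remove red
suppress(frame, 0)   # remove blue
print(frame)   # [[[0, 20, 0], [0, 50, 0]]] -- only green survives
```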
TRACE IMAGE
For the filter (Trace Mode) the image is subjected to various transformations. The first transformation is the Image Pyramid [9]. We perform Gaussian pyramid decomposition (PyrDown); each reduction in size removes high frequency components from the image. What we are left with are the low frequencies at each level. Then we reverse the process (PyrUp): we take the reduced image, scale it up to the original size, and subtract the difference. See Figure 34.
Figure 34 - Gaussian Pyramid Decomposition
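A one-dimensional sketch of the PyrDown/PyrUp idea (a toy averaging filter, not OpenCV's 5x5 Gaussian kernel): downsampling keeps the low frequencies, and subtracting the upsampled result isolates the high-frequency detail that the reduction removed.

```python
# Downsample by averaging pairs, upsample by repetition, then subtract:
# the residual is the high-frequency content removed by the reduction.
def pyr_down(sig):
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def pyr_up(sig):
    out = []
    for v in sig:
        out += [v, v]
    return out

sig = [1, 3, 1, 3]
smooth = pyr_up(pyr_down(sig))                 # [2.0, 2.0, 2.0, 2.0]
detail = [a - b for a, b in zip(sig, smooth)]
print(detail)   # [-1.0, 1.0, -1.0, 1.0]
```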
Now we apply the Canny edge detector, an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. The Canny edge detector uses two parameters:
Thresh - The threshold to find initial segments of strong edges.
ThreshLinking - The threshold used for edge linking.
To remove the background and redraw the image in black and white we use a ThresholdBinary, which converts the intensity of each pixel: if the pixel is higher than thresh, the new pixel intensity is set to the max value; otherwise it is set to zero.
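The ThresholdBinary rule reduces to a per-pixel comparison; a hedged one-line sketch over a flat list of intensities (mirroring the thresh = 1, max = 255 values used in Trace Mode):

```python
# Pixels above thresh become max_value, everything else becomes zero.
def threshold_binary(pixels, thresh, max_value):
    return [max_value if p > thresh else 0 for p in pixels]

print(threshold_binary([0, 1, 2, 200], 1, 255))   # [0, 0, 255, 255]
```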
In Figure 35 we can see the steps of the image transformation.
Figure 35 - Canny edge detector steps
Code:
private void traceIMG()
{
    Image<Gray, Byte> grayFrame = imgOriginal.Convert<Gray, Byte>()
        .PyrDown()
        .PyrUp()
        .Canny(new Gray(180), new Gray(90));
    Image<Gray, byte> binaryImage = grayFrame.ThresholdBinary(new Gray(1), new Gray(255));
    for (var contour = binaryImage.FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
             RETR_TYPE.CV_RETR_CCOMP);
         contour != null;
         contour = contour.HNext)
    {
        grayFrame.Draw(contour, new Gray(255), -1);
    }
    vdbox_IMGBOX.Image = grayFrame;
}
Code Sequence Diagram: Attachment 20 - S.D. Trace
LOGIN
The login is composed of 6 digits, e.g. (233987); these digits are divided into two groups of three digits, e.g. (233) and (987). The first group of three digits is where we apply the mathematical calculations.
The number is duplicated, e.g. (233233); we then divide it by 13, e.g. (233233/13 = 17941). This result is stored in a binary file, e.g. (data.bin).
To check if the login is valid, the system retrieves the value from the file, divides it by 11, e.g. (17941/11 = 1631), and divides the result by 7, e.g. (1631/7 = 233). This prevents the first group of digits from being stored directly in the code and provides a unique identification for the software. The second group of digits, e.g. (987), does not implement any verification in this code; in future work this group will be used to verify hardware authenticity (Arduino).
Code:
void validatePass(){
    if (File.Exists(@"data.bin")){
        // pass 233987
        if (unlockBox.Text.Length == 6){
            string codFile = passRead();
            string cod = unlockBox.Text;
            string sub = cod.Substring(0, 3);
            string sub2 = cod.Substring(3, 3);
            string codeVerificacao = "987";
            if (codFile == sub && codeVerificacao == sub2) { this.Close();
            }else { errorPass = true; }
        }else { errorPass = true; }
    }else{ MessageBox.Show("Please contact support: Error F24");
        foreach (Control c in this.Controls){ c.Enabled = false; }
        Application.Exit(); }}

string passRead(){
    string testfile = @"data.bin";
    long valueCalc;
    StreamReader fs = new StreamReader(testfile);
    string str = fs.ReadLine();
    fs.Close();
    valueCalc = Convert.ToInt64(str, 2) / 11 / 7;
    string final = Convert.ToString(valueCalc);
    return (final);
}
Code Sequence Diagram: Attachment 23 - SD login
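The arithmetic behind the scheme rests on the fact that duplicating a three-digit number multiplies it by 1001 = 7 × 11 × 13, so dividing the stored value by 11 and then by 7 recovers the original group exactly. A hedged sketch of the round trip (function names are ours, not the project's):

```python
# Encode: duplicate the 3-digit group (multiply by 1001) and divide by 13.
# Decode: divide the stored value by 11 and then by 7 to recover the group.
def encode(group):
    duplicated = group * 1000 + group    # 233 -> 233233
    return duplicated // 13              # 233233 // 13 == 17941

def decode(stored):
    return stored // 11 // 7             # 17941 -> 1631 -> 233

stored = encode(233)
print(stored, decode(stored))   # 17941 233
```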
SENSOR CONTROLS
The sensors are responsible for rotating the camera on the X-axis and Y-axis, and provide control of the infrared light intensity. In camera rotation, the X-axis range is 10 to 140 degrees and the Y-axis range is 80 to 140 degrees. This is due to the dimensions and the assembly position of the servos (motors) on the hardware. The default position on the X-axis is 80 degrees and on the Y-axis is 140 degrees. From this default position, the system increases or decreases the axis position by 10 degrees when the user clicks the buttons. To accomplish this, C# opens the serial port and sends the value to the Arduino.
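As a hedged illustration (the message format is inferred from the C# rotation handler: a zero-padded angle followed by an axis letter, 'b' for the Y axis; treat it as this project's protocol, not a general Arduino rule), the 10-degree step with its range check might be sketched as:

```python
# Step the Y-axis angle up by 10 degrees while it stays within the servo's
# mechanical range, and build the serial message sent to the Arduino.
def step_up(pos_y, step=10, lo=80, hi=140):
    if lo <= pos_y <= hi - step:
        pos_y += step
        return pos_y, "0" + str(pos_y) + "b"
    return pos_y, None   # max rotation reached

print(step_up(130))   # (140, '0140b')
print(step_up(140))   # (140, None)
```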
Code Rotation Up Control:
private void btn_UP_Click(object sender, EventArgs e){
    if (conectado == true){
        try{
            if (posicaoY >= 80 && posicaoY <= 130){
                posicaoY += 10;
                enviarTX = "0" + posicaoY.ToString() + "b";
                sp_COM.Write(enviarTX);
            }else{
                enviarTX = "Max rotation reached";
            }
            txtbox_LOG.AppendText("TX=" + enviarTX + " | " + posicaoY.ToString() + "º Graus" + "\r\n");
        }catch{
            conectado = false;
            chkTimeSerial.Start();}}}
Code Sequence Diagram: Attachment 18 - S.D. Rotate Camera
This code represents the movement of the servo (motor) in the upward direction; for left, right, center and down the code receives other values, but the structure of the implementation is the same.
Code Brightness Control:
The same sequence of events as for camera rotation is followed to control the brightness of the infrared LEDs. In this case the maximum and minimum values are 255 and 25 units, and the value
increases or decreases by 25 units with each click. These units are established by the IC (Integrated Circuit). See the attachment chapter (IR Illumination array).
Code:
private void brightDown_Click(object sender, EventArgs e){
    if (conectado == true){
        try{
            if (brightX <= 250 && brightX >= 125){
                brightX -= 25;
                string enviarTX = brightX.ToString() + "e";
                sp_COM.Write(enviarTX);
                txtbox_LOG.AppendText("Intensity= " + enviarTX + "\r\n");
            }
            else if (brightX >= 25 && brightX <= 100){
                brightX -= 25;
                string enviarTX = "0" + brightX.ToString() + "e";
                sp_COM.Write(enviarTX);
                txtbox_LOG.AppendText("Intensity= " + enviarTX + "\r\n");
            }
        }catch{
            conectado = false;
            chkTimeSerial.Start();}}}
Code Sequence Diagram: Attachment 24 - S.D. Brightness
RECON OUTPOST TOOL
Software:
Attachment 29 – Login
Attachment 30 - Main Interface
Attachment 31 - Interface Face Detection
Attachment 32 - Interface Face Recognition
Attachment 33 - Interface Database
Attachment 34 - Interface Night Mode
Attachment 35 - Interface Infrared Mode
Attachment 36 - Interface Trace Mode
Hardware:
Attachment – Chapter Hardware
TESTS
TESTING OBJECTIVE
Detection Performance - Test the performance of the frontal face recognition metrics.
Measure Limitations – Test the performance of the system under different environmental conditions.
All tests are adapted to the nature of the prototype.
TESTING TOOLS
Detection Performance: The Yale Face Database contains 165 grayscale images in GIF format of 15 individuals.
There are 11 images per subject, one per different facial expression or configuration: center -light,
w/glasses, happy, left-light, w/no glasses, normal, right-light, sad, sleepy, surprised, and wink.
(http://vision.ucsd.edu/content/yale-face-database).
This database was converted and rescaled to bmp format, and the names were inserted in the (TrainedLabels.txt) file.
Hardware used: Full assembled prototype. (Attachment 46 - Prototype Recon Outpost)
O.S used: Windows XP Professional SP3
TESTING
Test Case ID | Description
001 | Detection performance under low light conditions.
002 | Detection performance under other face angles.
003 | Trace Mode distance range.

Test Case ID: 001
Test Description: Detection performance under low light conditions
Purpose: Compare how much light conditions affect detection and recognition under 90 lux of illumination in a 10 m² area.
Input Data: Yale Face Database plus three more subjects.
Expected Result: The system will detect and recognize the three new subjects.
Actual Result: In a straight line towards the camera, two subjects were detected and recognized in the first pass; the other subject was detected and recognized on the second pass.
Pass / Fail: Pass
Remarks: N/A
Test Case ID:
Test Description:
Purpose:
Pass / Fail:
Remarks:
002
Detection performance under other face angles
Test metrics of haarcascade_frontalface _default . l for differe t fa e
angles.
Three test subject in strait line path
The system will detect and recognize the three subjects in different angles.
System fails at z-axis face rotation after +/- [-30º +30º]
System fails at x-axis face rotation after +/- [-15º +30º]
System fails at y-axis face rotation after +/- [-20º +20º]
Pass
N/A
Test Case ID:
003
Input Data:
Except Result:
Actual Result:
37
39. Recon Outpost Research & Analysis Systems
Software Engineering, University Lusófona do Porto
Test Description:
Purpose:
Input Data:
Except Result:
Actual Result:
Pass / Fail:
Remarks:
Trace Mode distance range
Test distance of the Trace Mode under infrared light illumination in exterior
environment.
One test subject in strait line path
The system will trace contours at 3mts distance.
System starts to identify subject at 3.5mts and complete the contour of the
body at 2.5mts from the camera.
Pass
N/A
Results: We tested the three main algorithms in the software to measure the limits and capabilities of the
system. The real-time performance of the software on the system's hardware proved acceptable under
the conditions tested. Other tests were performed and the results can be viewed at
(http://youtu.be/Hf7Fnb0ZaCA).
FUTURE OPTIONS IMPLEMENTATIONS
For future implementations, the following improvements to the system are recommended:
1. Allow the user to adjust the sensitivity of the Minimum Neighbors Threshold and the Scale Increase Rate.
2. Save and record video images.
3. Motion sensor trigger.
4. Internet streaming of video images and hardware control.
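For context on item 1: in OpenCV-style Haar detection (which the tool uses through Emgu CV), the Scale Increase Rate controls how fast the scanning window grows between pyramid passes, and the Minimum Neighbors Threshold sets how many overlapping raw hits are needed before a face is accepted. A small illustrative sketch of the scale side; the window sizes and rates are example values, not the tool's defaults:

```python
def scan_window_sizes(min_size, max_size, scale_rate):
    """Window sizes a Haar-cascade scan visits when the detection
    window grows by `scale_rate` on each pass (illustrative sketch)."""
    sizes = []
    size = float(min_size)
    while size <= max_size:
        sizes.append(int(size))
        size *= scale_rate
    return sizes

# A coarser rate (1.3) visits far fewer scales than a fine one (1.1),
# trading detection accuracy for speed:
coarse = scan_window_sizes(20, 200, 1.3)
fine = scan_window_sizes(20, 200, 1.1)
```

Raising the Minimum Neighbors value works on the other axis: it suppresses isolated raw detections, reducing false positives at the cost of missing weakly supported faces.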
CONCLUSION
By the end of this project we were able to implement a complete low-cost surveillance prototype that provides
full face detection and recognition, together with night vision capability options. The success of our
prototype is demonstrated in the test results. The system does not need a sophisticated installation,
and the portability of the prototype allows the user to diversify its use.
Future Work:
Project Night Watch: a system capable of detecting and recognizing faces at over 1 km of distance in 0.1 lux of
illumination. The system is composed of objective optics and infrared laser illumination. It is
remotely controlled, the video is sent over an internet streaming connection, the cameras can rotate
to various angles, and the system can be accessed from anywhere in the world.
The author wishes to thank everyone who was involved in this project, especially project supervisor Drª
Daniela Cruz (my teacher!!), Americo Santos (surveillance security expert), Alison Cynthia Budde (test crew
member), special thanks to Jeremy and Julia Ashford (my uncle and aunt, for their special support), Douglas
Charles Budde (graphic design expert), and special thanks to Our Lady of Fatima for the blessing.
BIBLIOGRAPHY
1. J. F. Canny: A computational approach to edge detection. IEEE Trans. Pattern Analysis and Machine Intelligence, 8 (6), 1986, 679-698.
2. C. Grigorescu, N. Petkov and M. A. Westenberg: Contour and boundary detection improved by surround suppression of texture edges. Image and Vision Computing, 22 (8), 2004, 609-622.
3. Paul Viola: Robust Real-time Object Detection. Mitsubishi Electric Research Labs, Cambridge, MA 02142.
4. Matthew A. Turk and Alex P. Pentland: Face Recognition Using Eigenfaces. MIT Vision and Modeling Lab, CVPR 1991.
5. Matthew A. Turk and Alex P. Pentland: Eigenfaces for Recognition. Journal of Cognitive Neuroscience, 3 (1), 1991.
6. Jan Fajfr: Face recognition in RIA applications, 20 October 2011.
7. Emgu CV: OpenCV in .NET: http://www.emgu.com/wiki/index.php/Main_Page.
8. Clint Cielto: Multiple face detection and recognition in real time, September 2011
9. OpenCV 2.4.5.0 documentation - http://docs.opencv.org/genindex.html
10. Robin Hewitt: Face Recognition With Eigenface: SERVO Magazine, February 2007
11. Arduino Forum, http://forum.arduino.cc
Attachment 5 - Face Detection AD (flow: wait for stream; on each image run the Haar cascade algorithm; if no face is detected, keep waiting; otherwise highlight the detected faces)
Attachment 6 - Face Recognition SD
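The Face Detection activity diagram (Attachment 5) can be sketched as a capture loop. The detector below is a stub standing in for the real Haar cascade call (Emgu CV/OpenCV's detectMultiScale), so the control flow, not the detection itself, is what this illustrates:

```python
def process_stream(frames, detect_faces):
    """Mirror of the Face Detection activity diagram: for each frame,
    run the detector; if no face is found, keep waiting for the next
    frame; otherwise highlight (here: collect) the detected faces."""
    highlighted = []
    for frame in frames:
        faces = detect_faces(frame)
        if not faces:            # "no face detected" branch
            continue             # wait for the next frame in the stream
        for box in faces:        # "highlight detected faces" branch
            highlighted.append((frame, box))
    return highlighted

# Stub detector standing in for the Haar cascade; returns (x, y, w, h):
stub = lambda frame: [(10, 10, 40, 30)] if frame == "face" else []
result = process_stream(["empty", "face", "empty"], stub)
```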
Attachment 9 - Camera Control AD (flow: wait for new position; calculate position; if the serial port is open, send the command to the hardware and return the position; otherwise display an error)
Attachment 10 - Infrared SD
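The "calculate position" step in the Camera Control diagram maps the detected face's offset from the frame centre to a servo angle before the command is sent over the serial port. A minimal sketch; the frame width, field of view, and 90º centre position are illustrative assumptions, not measured values from the prototype:

```python
def face_to_servo_angle(face_cx, frame_w=640, fov_deg=60.0, center_deg=90):
    """Convert the x-coordinate of a detected face's centre into a
    pan-servo angle: 90 deg points the camera straight ahead, and the
    pixel offset from the frame centre is scaled by degrees-per-pixel."""
    deg_per_px = fov_deg / frame_w
    offset_px = face_cx - frame_w / 2
    return center_deg + offset_px * deg_per_px

# Face centred in the frame -> no correction needed:
straight = face_to_servo_angle(320)
# Face at the right edge -> swing by half the field of view:
right = face_to_servo_angle(640)
```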
Attachment 11 - Infrared AD
Attachment 12 - Green Filter SD
Hardware
Model: 7 UV LED
Link: http://www.securitycamera2000.com/products/Mini-420TVL-1%7B47%7D3-Sony-CCD-Color-Camera-Audio%7B47%7DVideo-with-12-IR-LED.html
Description: UV LEDs, 365 nm

Model: 12 IR LED
Link: http://www.securitycamera2000.com/products/Mini-420TVL-1%7B47%7D3-Sony-CCD-Color-Camera-Audio%7B47%7DVideo-with-12-IR-LED.html
Description: IR LEDs

Model: Microsoft LifeCam HD-5000
Link: http://www.microsoft.com/hardware/en-us/p/lifecam-hd-5000
Description: LifeCam HD-5000 offers a 720p HD sensor and automatically sharpens your images with Auto Focus.

Model: Intel D525MW Fanless Dual Core Atom Mini-ITX Board
Link: http://www.mini-itx.com/store/?c=47
Description: 1.8GHz Dual Core, Intel GMA 3150 video, 4GB of system memory.

Model: Servo - Small
Link: http://www.ptrobotics.com/product.php?id_product=1089
Description: Low-cost, high-quality servo for all your mechatronic needs.

Model: Pan & Tilt Bracket Medium Kit
Link: http://www.ptrobotics.com/product.php?id_product=1208
Description: Pan and tilt assembly for horizontal surface mount.

Model: Arduino Uno R3
Link: http://www.ptrobotics.com/product.php?id_product=1033
Description: Arduino Uno is a microcontroller board based on the ATmega328.

Model: Lilliput UM1010-NP/T
Link: http://www.lilliputuk.com/monitors/usb/um1010/
Description: 10" touch screen USB monitor.

Model: PicoPSU-150-XT DC-DC Power Converter, 150 W
Link: https://www.logicsupply.eu/power-supplies/dc-converters/picopsu-150-xt/
Description: The picoPSU-150-XT is the world's tiniest 12-volt DC-DC ATX power supply unit (PSU).

Model: HP Webcam HD-2200
Link: http://h10010.www1.hp.com/wwpc/ca/en/ho/WF06c/A1-329290-3736366-3736367-3736367-5081841-5081844.html
Description: 720p video at up to 30 frames per second.
Attachment 37 - Equipment list
Webcam Modification:
Theory: The light sensors used in digital cameras are either CCDs (charge-coupled devices) or
CMOS detectors; the latter is more sensitive and appears in newer, fancier, and pricier webcams.
Both types are more sensitive to infrared light near the visible spectrum than they are to visible light,
so for a CCD/CMOS camera to match what a human sees, all of the infrared light (wavelengths
longer than 700 nanometers) must be cut out with a filter. By removing this IR-cutout filter
("hacking" the webcam) we can restore the response to infrared light.
Objective: Remove the manufacturer's IR-cutout filter from the webcam.
Procedure:
Disassembling webcam casing
Removing filter
Disable auto focus
Hardware:
QT. | Model | Description
1 | Microsoft LifeCam HD-5000 | 720p HD sensor and automatically sharpens your images with Auto Focus.
Schematic Diagram:
Attachment 38 - Webcam Schematic
Removing the filter:
The IR-blocking filter is a plastic coating, reflecting reddish, on the lens surface closest to the light
detector (CCD or CMOS). Take a small, sharp penknife and scrape the coating off.
Attachment 39 - Webcam Filter
References:
http://www.lpi.usra.edu/education/fieldtrips/2005/activities/ir_spectrum/ir_webcam.html
IR Illumination array:
Theory: An IR LED illuminator is a method of lighting up an area using infrared light. Since infrared is
outside the visual range of the vast majority of people, this essentially bathes an area in totally
invisible light. On the other hand, if the area is monitored by a device capable of seeing into the IR
range, then the area may be brightly lit to that device alone.
Objective: Create an array of bright IR LEDs, with brightness control via software.
Procedure:
Create PCB for electronic components
Hardware:
QT. | Model | Description
1 | Capacitor 0.1F - Panasonic | Capacitor
8 | 220-ohm Resistor - Panasonic | Resistor
1 | 74HC595N - Philips | 8-bit shift register
8 | 5mm IR LED - 940nm | Super-bright IR LEDs
1 | PCB fiberglass | Fiberglass printed circuit board
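As a sanity check on the 220-ohm series resistors in the list above, the LED current follows from Ohm's law across the resistor. The 5 V supply and the roughly 1.2 V forward drop of a 940 nm IR LED are typical datasheet values assumed here, not measurements from the prototype:

```python
def led_current_ma(v_supply, v_forward, r_ohms):
    """Current through a series resistor driving one LED, in mA:
    the resistor sees the supply voltage minus the LED forward drop."""
    return (v_supply - v_forward) / r_ohms * 1000.0

# Assumed values: 5 V logic supply, ~1.2 V forward drop for a 940 nm
# IR LED, 220-ohm series resistor:
i_ma = led_current_ma(5.0, 1.2, 220)  # roughly 17 mA, safely under a
                                      # typical 20 mA continuous rating
```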
Schematic Diagram:
Attachment 40 - Arduino IR Array
PCB Layout:
Attachment 41 - PCB Layout
References:
http://www.elcojacobs.com/shiftpwm/
http://arduino.cc/en/Tutorial/ShiftOut
Assembling Hardware:
Objective: Provide sensor information to the software.
Procedure:
Integrate all the modules
Assemble PC connections
Hardware:
QT. | Model | Description
1 | Triple External casing | Metal body housing for security cams
1 | Hoya 52mm Infrared R72 Filter | R72 passes only infrared rays above 720nm
1 | Intel D525MW Board | Fanless Dual Core Atom Mini-ITX board
2 | Servo - Small | High-quality servo for all your mechatronic needs
1 | Pan & Tilt Bracket Medium Kit | Pan and tilt assembly for horizontal surface mount
1 | Arduino Uno R3 | Microcontroller board based on the ATmega328
1 | Lilliput UM1010-NP/T | 10" touch screen USB monitor
1 | PicoPSU-150-XT DC-DC Power | 12-volt DC-DC ATX power supply unit
1 | HP Webcam HD-2200 | 720p video at up to 30 frames per second
1 | Mini PC Case | External casing for motherboard
Module Units:
1. Vision unit, composed of optics, webcams and sensors.
2. Electronic unit, composed of the Arduino and servos.
3. Brain unit, composed of the computer.
Assembling Vision Unit:
Attachment 42 - Cam Assembling (UV LEDs, laser module, Microsoft LifeCam HD-5000, IR LEDs, Hoya 52mm Infrared R72 filter, HP Webcam HD-2200 USB cam, Maxbotix LV-EZ1 ultrasonic range finder)
Assembling Electronic Unit:
Attachment 43 - Servo Arduino
Assembling Brain Unit:
Attachment 44 - Unit Assembling (touch screen monitor on a 90º flip mount over the PC casing; power supply, HD and motherboard inside; dimension annotations: 26 cm, 19 cm, 16 cm, 6 cm, 5 cm)
Schematic Wiring Diagram:
RECON OUTPOST PROJECT - Main Hardware Wire Connections Layout REV.1.1
(LCD, CAM HD, CAM HD IR and Arduino; Arduino pin assignments as listed in the diagram:)
Servo X Input = 05
Servo Y Input = 06
Sonar Input = 07
Laser Input = 09
UV Input = 10
IR Leds Input = 08
IR Leds Input = 12
IR Leds Input = 13
Attachment 45 - Wire Diagram
Read this manual before installing.
Always follow instructions for proper use.
Table of Contents
1. Introduction ........................................................... 4
1.1 Audience Assumptions .................................................. 4
1.2 Document Overview ..................................................... 4
2. Recon Outpost Hardware Setup ........................................... 4
2.1 System Components ..................................................... 4
2.2 System Connections .................................................... 5
3. Using Recon Outpost System ............................................. 6
3.1 LOGIN ................................................................. 6
3.2 MAIN WINDOW ........................................................... 6
3.3 Tracking Mode Module – FACE DETECTION ................................. 6
3.3.1 Tracking Mode Module – FACE RECOGNITION ............................. 7
3.3.2 Tracking Mode Module – DATABASE MANAGER ............................. 8
3.4 Night Vision Mode Module .............................................. 8
3.4.1 Night Vision Mode Module – NIGHT VISION ............................. 8
4. Troubleshooting ........................................................ 10
LEGAL NOTICE:
The Recon Outpost product is designed as a prototype for testing a concept.
Recon Outpost disclaims liability associated with the use of non-default hardware.
Recon Outpost makes no representations concerning the legality of certain product applications
such as the making, transmission, or recording of video of others without their knowledge and/or
consent. We encourage you to check and comply with all applicable local, state, and federal
laws and regulations before engaging in any form of surveillance.
To reduce the risk of electric shock, do not remove cover (or back). No user serviceable
parts inside. Refer servicing to qualified service personnel.
Do not use the camera in extreme environments where high temperatures or high humidity
exists. Use the camera under conditions where temperatures are between -4°F ~ 122°F (-20°C
~ 50°C), and humidity is below 85%.
Whether or not the camera is used outdoors, never point it toward the sun. Use caution
when operating the camera in the vicinity of spot lights or other bright lights and light reflecting
objects.
If installed close to a TV, radio transmitter, magnet, electric motor transformer or audio
speakers the magnetic field generated may interfere with or distort the image.
To prevent damage, do not drop any component or subject it to strong shock or vibration.
Do not install or operate in small, unventilated areas. Heat buildup can significantly reduce
the performance and operating life of the product and may cause a fire.
1. Introduction
Welcome to the Recon Outpost Configuration User's Guide. This document provides
sufficient guidance for hardware and software users to use the product securely and in accordance
with the requirements. It is specifically targeted at the final user.
1.1
Audience Assumptions
This document assumes the audience is generally familiar with computer peripheral installation.
1.2
Document Overview
This document has the following chapters:
Chapter 1, Introduction , introduces the purpose and structure of the document and the
assumptions of the audience.
Chapter 2, Recon Outpost Hardware installation , describes the evaluated installation.
Chapter 3, Using Recon Outpost System , describes the use and environment of the software.
2. Recon Outpost Hardware Setup
2.1 System Components
1 Unit pre-assembled
3 USB cables
1 Power Supply
1 Car battery cable
Figure 1 - System dimensions (cameras and touch screen monitor on a 90º flip mount, camera interface, PC casing; 5 cm and 19 cm annotations)
2.2 System Connections
All hardware is pre-assembled. Do not attempt to open or modify any components unless you
know what you are doing.
This unit is a PROTOTYPE. Do not expose it to water or humidity, and do not ignore common sense
with electrical devices.
1. Determine where the unit will be mounted.
2. Monitor - Connect the USB-Y cable to the available USB slots in the pc casing.
3. Camera Interface - Connect the other 3 (three) USB cables to the available USB slots in
the pc casing. See Fig.2
4. Power - Connect the power connectors directly to a DC12V car battery, or use the power
supply to connect to a home outlet. Max. 220V - 70W. See Fig.2
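As a rough worked example for battery operation: at the stated 70 W maximum draw, a 12 V car battery supplies about 5.8 A. The 60 Ah capacity used below is an assumed example value, and real runtime will be lower once converter losses and usable-capacity derating are included:

```python
def battery_runtime_hours(capacity_ah, load_w, battery_v=12.0):
    """Idealized runtime of the unit from a car battery: capacity
    divided by the current drawn at the given load."""
    load_a = load_w / battery_v
    return capacity_ah / load_a

# Assumed values: 60 Ah car battery, 70 W maximum system draw:
amps = 70 / 12.0                       # current drawn from the battery
hours = battery_runtime_hours(60, 70)  # idealized runtime in hours
```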
Figure 2 - PC Unit back cover (USB and power connectors; 2 (two) USB ports on the front)
3. Using Recon Outpost System
3.1 LOGIN
a. To enter the password, just tap the numeric keys.
b. The password is provided on the cover of this manual.
Figure 3 - Login (the keypad where you enter the password to log in to the system)
3.2 MAIN WINDOW
a. By default Face Tracking comes in the package, if you purchase other Modules please follow the
instruction for their installation.
b. Select the Module you want to monitor, by tapping on the button. All available modules and
functions are highlighted with a light green color.
Exit:
Shutdown the system
Help Monitor:
Available module descriptions
UP/DOWN:
Scroll text up and down
Figure 4 - Main Window
3.3 Tracking Mode Module – FACE DETECTION
This module performs face detection and face recognition, please follow the instructions describe in
this manual for efficient use of all features.
1. Video Monitor: In this area the face detection crosshair will attempt to recognize all human
faces in the video. If no face is detected, the crosshair will not appear.
2. Terminal: The terminal area will show all errors and messages from the system.
3. Identification Analysis: Allows identifying the detected face.
TIP: Wait for the crosshair to appear; this means the face has been detected.
4. Camera Rotation: Allows rotating the camera position.
5. Back: Goes to the initial window.
Figure 5 - Face detection
3.3.1 Tracking Mode Module – FACE RECOGNITION
The system will search the database for all detected faces and display their names. If the system
doesn't find a match, it will display the closest result: the face in the database with the most
similar traits to the detected face.
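This closest-match behaviour follows the eigenface approach cited in the bibliography (Turk and Pentland): faces are projected into a low-dimensional subspace and the nearest stored projection is reported. A minimal sketch of that nearest-neighbour step; the two-component "face vectors" and names are toy data, not the real training set:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def nearest_face(probe, gallery, labels):
    """Return the label whose stored face vector is closest to the
    probe vector; the system shows this even when no exact match
    exists, which is why a non-enrolled face still gets a name."""
    best = min(range(len(gallery)), key=lambda i: dist(probe, gallery[i]))
    return labels[best], dist(probe, gallery[best])

# Toy "projected" vectors standing in for eigenface coefficients:
gallery = [(1.0, 0.0), (0.0, 1.0), (0.8, 0.2)]
labels = ["alice", "bob", "carol"]
name, d = nearest_face((0.95, 0.05), gallery, labels)
```

In a real deployment the distance would also be compared against a threshold to decide whether the "match" is close enough to trust.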
1. In the video area, the detected face or faces are displayed in the upper right area.
2. Name of the detected face.
3. Button to insert the face into the database.
4. If more than one face is detected, this button allows you to display the next face in the upper
right display area.
5. Back: Goes to the FACE DETECTION window.
Figure 6 - Face recognition
3.3.2 Tracking Mode Module – DATABASE MANAGER
1. Tap each key to enter a name.
2. After entering the name, tap the SAVE button to insert it into the database; the detected face will be
saved and ready to be recognized next time it shows in the FACE RECOGNITION window.
3. Displays the face that will be saved.
4. Back: Goes to the FACE RECOGNITION window in case you need to add another face to the database.
Figure 7 - Database Manager
3.4 Night Vision Mode Module
This module allows you to view live video in adverse weather conditions. It gives users the capability
to analyse and observe details that are not visible to the naked eye.
3.4.1 Night Vision Mode Module – NIGHT VISION
1. Video Monitor
2. Brightness: Adjusts the intensity of the infrared LEDs.
3. Back: Goes to the initial window.
4. Video Mode:
4.1 NIGHT - Applies a green filter to the video. Fig.8
4.2 IR - Changes the video to grey tones. Fig.9
4.3 TRACE - An edge detection operator that uses a multi-stage algorithm to detect a wide
range of edges in images. This draws all edges in black and white. Fig.10
5. Camera Rotation: Allows rotating the camera position.
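The TRACE description above matches the Canny operator (reference 1 in the bibliography). At its heart is a gradient-magnitude threshold; the deliberately simplified sketch below shows only that step, while the real mode presumably uses the full OpenCV/Emgu CV Canny implementation, which adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding:

```python
def edge_mask(img, threshold):
    """Mark pixels whose intensity-gradient magnitude exceeds
    `threshold` (1 = edge, 0 = background). `img` is a list of rows
    of grey values; borders are left unmarked."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                out[y][x] = 1
    return out

# A dark square on a bright background: edges appear at the boundary.
img = [[200] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 4):
        img[y][x] = 20
edges = edge_mask(img, 50)
```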
Figure 8 - Night Vision
Figure 9 -IR
Figure 10 - Trace
4. Troubleshooting
System doesn't start or no image appears on screen?
Check the power cable and USB connections.
The system has a delayed boot-up; please wait at least 2 min.
No face is detected?
Try decreasing the distance from the camera to the target; the minimum face size for detection is
40x30 pixels.
When I try to identify a face, the system shows no face detected.
To identify a face, the face must first have been detected by the crosshair in Face Detection.
Terminal displays the message "HARDWARE FAILURE: Check USB cable".
Check or replace USB cable.
Motor keeps running when the system is powered on.
Sometimes this can happen; our engineers are working day and night on this issue…
For any question or help: help.reconoutpost@mail.com