Comparison Between The Genetic Algorithms Optimization and Particle Swarm Optimization For Design The Close Range Photogrammetry Network, Hossam El-Din Fawzy, Journal Impact Factor (2015): 9.1215 (Calculated by GISI) www.jifactor.com
www.iaeme.com/ijciet.asp editor@iaeme.com
Photogrammetric measurement operations attempt to satisfy, in an optimal manner, several objectives such as precision, reliability and economy. The Zero-Order Design (ZOD) and Second-Order Design (SOD) stages are greatly simplified in comparison to the geodetic networks for which the four design stages were originally developed. Indeed, the First-Order Design (FOD) stage, that is, the design of the network configuration or the sensor-placement task, needs to be comprehensively addressed for photogrammetric projects. Close range photogrammetric network design is the process of optimizing a network configuration in terms of the accuracy of the object points. This design stage must provide an optimal imaging geometry and convergence angle for each set of points placed over a complex object.
2. HEURISTIC OPTIMIZATION ALGORITHMS
Optimization has been an active area of research for several decades. As many real-world optimization problems become increasingly complex, better optimization algorithms are always needed. Recently, meta-heuristic global optimization algorithms have become a popular choice for solving complex and intricate problems that are otherwise difficult to solve by traditional methods [12]. The objective of optimization is to seek values for a set of parameters that maximize or minimize objective functions subject to certain constraints [13].
Choices of values for the set of parameters that satisfy all constraints are called a feasible solution. Feasible solutions with objective function value(s) as good as those of any other feasible solution are called optimal solutions [13]. In order to use optimization successfully, we must first determine an objective through which we can measure the performance of the system under study. The objective relies on certain characteristics of the system, called variables or unknowns. The goal is to find the set of values of the variables that results in the best possible solution to an optimization problem within a reasonable time limit. Optimization algorithms come from different areas and are inspired by different techniques, but they all share some common characteristics. They are iterative: they all begin with an initial guess of the optimal values of the variables and generate a sequence of improved estimates until they converge to a solution. The strategy used to move from one potential solution to the next is what distinguishes one algorithm from another [12].
Figure 1: Pie chart of the publication distribution of meta-heuristic algorithms
Broadly speaking, optimization algorithms can be placed in two categories: the conventional or deterministic methods and the modern heuristic or stochastic methods. Conventional methods adopt a deterministic approach: during the optimization process, any solutions found are assumed to be exact, and the computation of the next set of solutions depends entirely on the previously found solutions. That is why conventional methods are also known as deterministic optimization
methods. In addition, these methods involve certain assumptions about the formulation of the objective functions and constraint functions. Conventional methods include algorithms such as linear programming, nonlinear programming, dynamic programming, Newton's method and others. In the past few decades, several global optimization algorithms have been developed that are based on nature-inspired analogies. These are mostly population-based meta-heuristics, also called general-purpose algorithms because of their applicability to a wide range of problems. Some popular global optimization algorithms include Evolution Strategies (ES), Evolutionary Programming (EP), Genetic Algorithms (GA), Artificial Immune Systems (AIS), Tabu Search (TS), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), the Harmony Search (HS) algorithm, Bee Colony Optimization (BCO), the Gravitational Search Algorithm (GSA), etc. [12].
Figure 1 shows the distribution of publications that applied meta-heuristic methods to solve optimization problems. This survey is based on the ISI Web of Knowledge databases and includes most of the papers published during the past decade. Figure 1 shows that GA and PSO are the most popular algorithms among them.
3. CLASSIFICATION OF HEURISTIC ALGORITHMS
A vast literature exists on heuristics for solving an impressive array of problems, as mentioned in the introduction of this paper, and, more recently, a number of studies have reported on the success of such techniques for solving difficult problems in all key areas of computer science. The two most predominant and successful classes are Evolutionary Algorithms and Swarm-based Algorithms, inspired by natural evolution and by the collective behaviour of animals, respectively. Several algorithms sharing the basic concepts of Evolutionary Algorithms and Swarm-based Algorithms are presented in Figure 2.
Figure 2: Classification of meta-heuristic algorithms
In this paper, we give a concise introduction to the theory of some popular and well-known global optimization algorithms.
4. GENETIC ALGORITHMS (GA)
The genetic algorithm is one of the most popular types of evolutionary algorithms. To be more precise, it constitutes a computing model, first introduced by Holland [6],[7], for simulating the natural and genetic selection attached to the biological evolution described in Darwin's theory. In this computing model, a population of abstract representations (called chromosomes, or the genotype of the genome) of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem evolves toward better solutions. Solutions are traditionally represented in binary form as fixed-length strings of 0s and 1s, but other kinds of encoding are also possible, including real-valued and ordered chromosomes. The program then assigns the proper number of bits and the coding. As a member of the family of evolutionary computation, the first step of a GA is population initialization, which is usually done stochastically. The GA usually uses three simple operators called selection, recombination (usually called crossover) and mutation. Selection is the step of a genetic algorithm in which a certain number of individuals is chosen from the current population for later breeding (recombination or crossover); the selection rate is normally proportional to an individual's fitness value. There are several general selection techniques. Tournament selection and fitness-proportional selection (also known as roulette-wheel selection) consider all given individuals; other methods choose only those individuals with a fitness value greater than a given arbitrary constant. Crossover and mutation taken together are called reproduction; they are analogous to biological crossover and mutation, respectively [12].
The most important operator in a GA is crossover, which refers to the recombination of genetic information during sexual reproduction. A child shares many characteristics with its parents. Accordingly, in GAs the offspring has an equal chance of receiving any given gene from either parent because the parents' chromosomes are combined randomly. To date there are many crossover techniques for organisms that use different data structures to store themselves, such as one-point crossover, two-point crossover, uniform crossover and half-uniform crossover. The crossover probability varies according to the problem; generally speaking, values between 60 and 80% are typical for one-point and two-point crossover, while uniform crossover works well with slightly lower probabilities. The probability can also be altered during evolution: a higher value might initially be attributed to the crossover probability and then decreased linearly until the end of the run, ending at one half or two thirds of the initial value [10]. Additionally, for real-valued cases the crossover operation can be expressed as:
y1 = λ·x1 + (1 − λ)·x2
y2 = λ·x2 + (1 − λ)·x1 ........... (1)
Where y1 and y2 are the two descendants created by the two given parents x1 and x2, and λ is a random number between 0 and 1.
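The arithmetic crossover of Eq. (1) can be sketched in a few lines of Python; the function name and list-based chromosome layout are illustrative choices, not the paper's Matlab code.

```python
import random

def arithmetic_crossover(x1, x2):
    """Blend two real-valued parent chromosomes into two offspring (Eq. 1)."""
    lam = random.random()  # λ drawn uniformly from [0, 1]
    y1 = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y2 = [lam * b + (1 - lam) * a for a, b in zip(x1, x2)]
    return y1, y2
```

Note that for each gene the two offspring values sum to the two parent values, so the operator only interpolates between parents and never leaves their bounding box.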
Mutation is the stochastic alteration of chromosome genes that occurs each generation and is used to maintain genetic diversity from one generation of the population to the next. Mutation proceeds step by step until the entire population has been covered. Again, the coding used is decisive: in the case of binary chromosomes it simply flips a bit, while in real-valued chromosomes a noise term drawn from N(0, σ) is added to the value at that position. σ can be chosen in
such a way that it decreases over time, e.g., σ = 1/(1 + t), where t is the current iteration number. The probability of mutation is normally kept at a low constant value for the entire run of the GA, such as 0.001. In addition to these evolutionary operators, in some cases a strategy called "elitism" is used, whereby the best individual is directly copied to the next generation without undergoing any of the genetic operators. In a single iteration, new parents are selected for each child, and the process continues until a proper number of individuals for the new population is reached. This process ultimately results in the population of the new generation differing from the current one. Generally the average fitness of the population should improve through this procedure, since only the best organisms from the last generation are selected for breeding [12].
The implementation of the genetic algorithm is described as follows [14]:
Step 1: Initialization. The algorithm starts with a set of solutions (represented by chromosomes)
called population. A set of chromosomes is randomly generated. Each chromosome is composed of
genes.
Step 2: Evaluation. The fitness value of every chromosome is calculated and examined one by one. If a chromosome yields a better fitness than the existing best value, the stored best vector and variable are updated with this chromosome and its fitness value; otherwise, they are kept unchanged.
Step 3: Selection. Population selection within the algorithm follows the principle of survival of the fittest, which is based on Darwin's concept of natural selection [6],[7]. A random number generator is employed to generate random numbers between 0 and 1.
Step 4: Crossover. The crossover operation is one of the most important operations in a GA. The basic idea is to combine some genes from different chromosomes: a GA recombines bit strings by copying segments from pairs of chromosomes.
Step 5: Mutation. Some useful genes may not be generated during the initial step. This difficulty can be overcome by the mutation operator, which randomly selects a gene position and then changes the value of that gene randomly.
Step 6: Stopping criteria. Steps 2-5 are repeated until the predefined number of generations has been reached. The optimal solution is returned upon termination [12].
All these steps are shown in Figure 3.
Figure 3: A graphical representation of the genetic algorithm
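Steps 1-6 can be sketched as a minimal real-valued GA in Python. This is an illustrative sketch, not the paper's Matlab implementation: the function name, the default parameter values (population of 30, 50 generations, tournament size 2) and the reuse of the decaying mutation noise σ = 1/(1 + t) from the text are assumptions chosen to keep the example self-contained.

```python
import random

def ga_minimize(fitness, dim, pop_size=30, generations=50,
                p_cross=0.7, p_mut=0.05, bounds=(-5.0, 5.0)):
    """Minimal real-valued GA following Steps 1-6 (illustrative defaults)."""
    lo, hi = bounds
    # Step 1: stochastic initialization of the population
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)                       # Step 2: evaluation
    for t in range(generations):
        sigma = 1.0 / (1 + t)                          # decaying mutation noise
        new_pop = [best[:]]                            # elitism: copy best unchanged
        while len(new_pop) < pop_size:
            # Step 3: tournament selection of two parents
            p1 = min(random.sample(pop, 2), key=fitness)
            p2 = min(random.sample(pop, 2), key=fitness)
            # Step 4: arithmetic crossover (Eq. 1)
            if random.random() < p_cross:
                lam = random.random()
                child = [lam * a + (1 - lam) * b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            # Step 5: Gaussian mutation at randomly chosen gene positions
            child = [g + random.gauss(0.0, sigma) if random.random() < p_mut else g
                     for g in child]
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=fitness)   # elitism keeps the best solution monotone
    return best                        # Step 6: stop after the set generations
```

For example, `ga_minimize(lambda x: sum(v * v for v in x), 2)` searches for the minimum of a 2-D sphere function.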
5. PARTICLE SWARM OPTIMIZATION (PSO)
Particle swarm optimization (PSO) is a computational intelligence oriented, stochastic,
population-based global optimization technique proposed by Kennedy and Eberhartin 1995 [4]. It is
inspired by the social behaviour of bird flocking searching for food. PSO has been extensively
applied to many engineering optimization areas due to its unique searching mechanism, simple concept, computational efficiency, and easy implementation. In PSO, the term "particles" refers to population members that are mass-less and volume-less (or have an arbitrarily small mass or volume) and are subject to velocities and accelerations towards a better mode of behaviour. Each particle in the swarm represents a solution in a high-dimensional space described by four vectors: its current position, the best position it has found so far, the best position found by its neighbourhood so far, and its velocity. It adjusts its position in the search space based on the best position it has reached itself (pbest) and the best position reached by its neighbourhood (gbest) during the search process [11]. In each iteration, each particle updates its position and velocity as follows:
x_{k+1}^i = x_k^i + v_{k+1}^i ........... (2)
v_{k+1}^i = v_k^i + c1 r1 (p_k^i − x_k^i) + c2 r2 (p_k^g − x_k^i) ........... (3)
Where:
x_k^i represents the position of particle i at iteration k
v_k^i represents the velocity of particle i at iteration k
p_k^i represents the personal best position found by particle i
p_k^g represents the global best position
c1, c2 represent the cognitive and social acceleration coefficients
r1, r2 represent random numbers uniformly distributed between 0 and 1
The steps of the PSO algorithm can be summarized as follows:
1) Initialize the swarm by assigning a random position in the problem space to each particle.
2) Evaluate the fitness function for each particle.
3) For each individual particle, compare the particle's fitness value with its pbest. If the current value is better than the pbest value, then set this value as the pbest and the current particle's position, x_i, as p_i.
4) Identify the particle that has the best fitness value. The value of its fitness function is identified as gbest and its position as p_g.
5) Update the velocities and positions of all the particles using (2) and (3).
6) Repeat steps 2-5 until a stopping criterion is met (e.g., a maximum number of iterations or a sufficiently good fitness value) [4].
Figure 4: A graphical representation of PSO particle updating position
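The six steps and Eqs. (2)-(3) can be sketched in Python as below. This is an illustrative sketch, not the paper's Matlab program: the defaults (30 particles, c1 = c2 = 2) and the velocity clamp vmax are common practical assumptions added for stability rather than part of Eqs. (2)-(3).

```python
import random

def pso_minimize(fitness, dim, n_particles=30, iters=60,
                 c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
    """Minimal PSO implementing Eqs. (2)-(3); parameter values are illustrative."""
    lo, hi = bounds
    vmax = 0.2 * (hi - lo)                 # velocity clamp (practical addition)
    # Step 1: random initial positions, zero initial velocities
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                  # personal bests (pbest)
    pfit = [fitness(x) for x in X]         # Step 2: evaluation
    gfit = min(pfit)                       # Step 4: global best (gbest)
    g = P[pfit.index(gfit)][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (3): velocity update from pbest and gbest attraction
                v = (V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                             + c2 * r2 * (g[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, v))
                X[i][d] += V[i][d]         # Eq. (2): position update
            f = fitness(X[i])
            if f < pfit[i]:                # Step 3: update pbest
                P[i], pfit[i] = X[i][:], f
                if f < gfit:               # Step 4: update gbest
                    g, gfit = X[i][:], f
    return g                               # Step 6: stop after iters iterations
```

Since pbest and gbest are only replaced on improvement, the returned position is the best solution found over the whole run.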
6. MATHEMATICAL MODEL FOR CLOSE RANGE PHOTOGRAMMETRY NETWORK DESIGN
The photogrammetric three-dimensional coordinate determination is based on the co-linearity equation, which simply states that the object point, the camera projective centre and the image point lie on a straight line. The determination of the three-dimensional coordinates of a given point is achieved through the intersection of two or more straight lines. Abdel Aziz and Karara proposed a simple method for close range photogrammetric data reduction with non-metric cameras; it establishes the direct linear transformation (DLT) between the two-dimensional image coordinates and the corresponding object-space coordinates. The Direct Linear Transformation (DLT) between a point (X, Y, Z) in object space and its corresponding image-space coordinates (x, y) can be established by the linear fractional equations [3]:

x + Δx = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1)
y + Δy = (L5 X + L6 Y + L7 Z + L8) / (L9 X + L10 Y + L11 Z + 1) ........... (4)

Where:
L1, L2, L3 … L11 are the transformation parameters
X, Y and Z are the object space coordinates

Δx = x̄ (k1 r² + k2 r⁴ + k3 r⁶) + p1 (r² + 2 x̄²) + 2 p2 x̄ ȳ
Δy = ȳ (k1 r² + k2 r⁴ + k3 r⁶) + 2 p1 x̄ ȳ + p2 (r² + 2 ȳ²) ........... (5)

Where:
x̄ = x – xo, ȳ = y – yo, r² = (x – xo)² + (y – yo)² ........... (6)
x, y are the image coordinates
p1 and p2 are the two asymmetric parameters for decentring distortion
k1, k2 and k3 are the three symmetric parameters for radial distortion
r is the radial distance from the principal point [2]
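The distortion corrections of Eqs. (5)-(6) can be evaluated directly; a small Python sketch follows, in which the function name and argument layout are illustrative assumptions.

```python
def distortion(x, y, x0, y0, k, p):
    """Compute the image corrections Δx, Δy of Eqs. (5)-(6) for one point.
    k = (k1, k2, k3) are radial parameters, p = (p1, p2) decentring parameters."""
    xb, yb = x - x0, y - y0                       # x̄, ȳ of Eq. (6)
    r2 = xb * xb + yb * yb                        # r²
    radial = k[0] * r2 + k[1] * r2**2 + k[2] * r2**3   # k1 r² + k2 r⁴ + k3 r⁶
    dx = xb * radial + p[0] * (r2 + 2 * xb * xb) + 2 * p[1] * xb * yb
    dy = yb * radial + 2 * p[0] * xb * yb + p[1] * (r2 + 2 * yb * yb)
    return dx, dy
```

At the principal point (x̄ = ȳ = 0) both corrections vanish, as expected from the model.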
Equation 4 results from the equation of the central perspective in a trivial manner; Δx and Δy are systematic deformations of the image, i.e. deviations from the central perspective. Equation 4 can be solved directly for the 11 transformation parameters (L1, L2, L3 … L11) if there are at least six points in the image whose object-space coordinates are known. Equation 4 is rewritten to serve in a least squares formulation relating known control points to image coordinate measurements [1]:
vx = L1 X + L2 Y + L3 Z + L4 – xX L9 – xY L10 – xZ L11 – Δx ........... (7)
vy = L5 X + L6 Y + L7 Z + L8 – yX L9 – yY L10 – yZ L11 – Δy
After the (L1 to L11, k1, k2, k3, p1, p2) parameters of each stereo pair of photos become available, the object-space coordinates (X, Y, Z) of any point appearing in both photos of the stereo pair can be computed by applying the DLT equations with additional parameters. By rearranging these equations, they can be represented in matrix form (for m photos) as shown below.
For m photos, the rearranged observation equations for one point form a (2m × 3) system:

| L1^1 − μ^1·L9^1   L2^1 − μ^1·L10^1   L3^1 − μ^1·L11^1 |       | μ^1 − L4^1 |
| L5^1 − η^1·L9^1   L6^1 − η^1·L10^1   L7^1 − η^1·L11^1 | |X|   | η^1 − L8^1 |
|        :                  :                  :        |·|Y| = |     :      | ........... (8)
| L1^m − μ^m·L9^m   L2^m − μ^m·L10^m   L3^m − μ^m·L11^m | |Z|   | μ^m − L4^m |
| L5^m − η^m·L9^m   L6^m − η^m·L10^m   L7^m − η^m·L11^m |       | η^m − L8^m |

Where: μ = x + Δx, η = y + Δy, and the superscript denotes the photo index (1 … m).
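A least-squares solution of system (8) can be sketched in Python as below, using the normal equations and a small Gaussian elimination. The `photos` data layout (a list of (L, μ, η) tuples) is an assumed convention for the example, not the paper's Matlab code.

```python
def dlt_intersect(photos):
    """Least-squares space intersection from the matrix form (8).
    `photos` is a list of (L, mu, eta): L = (L1, ..., L11), and
    mu = x + Δx, eta = y + Δy are the refined image coordinates."""
    A, b = [], []
    for L, mu, eta in photos:
        L1, L2, L3, L4, L5, L6, L7, L8, L9, L10, L11 = L
        A.append([L1 - mu * L9, L2 - mu * L10, L3 - mu * L11]); b.append(mu - L4)
        A.append([L5 - eta * L9, L6 - eta * L10, L7 - eta * L11]); b.append(eta - L8)
    # Normal equations (AᵀA)·XYZ = Aᵀb, then Gauss-Jordan elimination on 3 x 3
    N = [[sum(r[i] * r[j] for r in A) for j in range(3)] for i in range(3)]
    t = [sum(r[i] * v for r, v in zip(A, b)) for i in range(3)]
    M = [N[i] + [t[i]] for i in range(3)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * p for a, p in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]  # (X, Y, Z)
```

With at least two photos (four equations) the three unknown object-space coordinates are over-determined, which is exactly the intersection situation described above.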
7. ASSESSMENT OF ACCURACY
Two different methods can be used to evaluate accuracy: one can evaluate accuracy by using check measurements and determining from them the values of appropriate accuracy criteria, or one can use accuracy predictors. In this study, check measurements are used to evaluate accuracy [9].
We consider n (i = 1, 2 … n) check points on the studied object, that is, points whose true coordinates are known but not used in the photogrammetric computations. If Xit, Yit and Zit are the true coordinates of check point i, and Xiph, Yiph and Ziph are its photogrammetric coordinates, an estimate of the MRXYZ spatial residual is
MRXYZ = (1/n) Σ_{i=1..n} √[(Xiph − Xit)² + (Yiph − Yit)² + (Ziph − Zit)²] ........... (9)

Analogous quantities can be estimated for the three axes:
The X-direction: MRX = (1/n) Σ_{i=1..n} √(Xiph − Xit)²
The Y-direction: MRY = (1/n) Σ_{i=1..n} √(Yiph − Yit)²
The Z-direction: MRZ = (1/n) Σ_{i=1..n} √(Ziph − Zit)²
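Equation (9) and its per-axis analogues can be computed as in the sketch below; note that √((·)²) reduces to an absolute value for the single-axis residuals. Function and variable names are illustrative.

```python
import math

def mean_residuals(true_pts, photo_pts):
    """Mean spatial residual (Eq. 9) and per-axis residuals over n check
    points; both inputs are lists of (X, Y, Z) tuples in the same units."""
    n = len(true_pts)
    mrx = mry = mrz = mrxyz = 0.0
    for (xt, yt, zt), (xp, yp, zp) in zip(true_pts, photo_pts):
        dx, dy, dz = xp - xt, yp - yt, zp - zt
        mrx += abs(dx) / n                # √((Xiph − Xit)²) = |Xiph − Xit|
        mry += abs(dy) / n
        mrz += abs(dz) / n
        mrxyz += math.sqrt(dx * dx + dy * dy + dz * dz) / n
    return mrxyz, mrx, mry, mrz
```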
8. THE TEST FIELD AND DISCUSSION
The photogrammetric test field at the surveying laboratory (Civil Engineering Department, Faculty of Engineering, Kafrelsheikh University, Egypt) was used. The three-dimensional coordinates of 40 well-distributed points with varying heights, comprising ground control points (15 points) and check points (25 points), were measured using Multi-Station Intersection, as shown in Figure 5. The Multi-Station Intersection system consists of three Sokkia reflectorless total stations (SET330RK) and one processing computer unit connected to them. After an initialization step, three operators simultaneously measure the vertical angles and horizontal directions of the targeted point. The system calculates the three-dimensional coordinates (by Multi-Station intersection) and the precision values in real time. The average precision values of the ground control points and check points are
±0.2, ±0.4 and ±0.2 mm for the X, Y and Z axes, respectively [8]. Two Matlab programs were written to design the close range photogrammetry network, one using genetic algorithm optimization and the other using particle swarm optimization. The programs were designed to compute the optimal outer orientation for three camera stations. The test field was photographed from the optimal camera stations resulting from the genetic algorithm optimization and from the particle swarm optimization. All photographs were taken using a high-resolution CCD camera (Nikon D 3100), shown in Figure 6. The camera settings such as zoom factor, focus, white balance, etc. were kept constant during the test procedure.
Figure 5: The 3D Test Field
Figure 6: The Nikon D 3100 digital camera
Another Matlab program was designed to compute the spatial coordinates (X, Y, Z) of the n checkpoints, the maximum and minimum residuals in the X, Y and Z directions, the maximum and minimum spatial differences among the n checkpoints, and the variance-covariance matrix of the parameters. It should be mentioned that the residuals were determined for two cases: 1) DLT from the optimal camera stations resulting from the genetic algorithm optimization, and 2) DLT from the optimal camera stations resulting from the particle swarm optimization. The estimated accuracy and standard deviations (SD) of the space coordinates are also presented in tabular form.
Table (4): Statistics for the obtained 3D coordinate differences associated with different camera stations at the used checkpoints (in mm)

             Case-1      Case-2
(δX)  Max    11.43163    8.613751
(δX)  Min    1.4388      1.140868
(δY)  Max    12.22304    12.02975
(δY)  Min    1.928755    1.493896
(δZ)  Max    6.489615    5.554176
(δZ)  Min    0.983443    0.799878
Pos.  Max    15.28432    12.77278
Pos.  Min    2.328068    1.865873

Table (5): Statistics for the evaluated standard deviations of the 3D coordinates, as extracted from the evaluated variance-covariance matrix, associated with different camera stations (in mm)

                 Case-1      Case-2
Std (δx)  Max    4.664077    3.785348
Std (δx)  Min    0.780686    0.598918
Std (δy)  Max    15.1931     11.78741
Std (δy)  Min    4.171262    3.60388
Std (δz)  Max    5.629424    4.575657
Std (δz)  Min    1.11616     1.07935
Case-1: DLT from the optimal camera stations resulting from the genetic algorithm optimization
Case-2: DLT from the optimal camera stations resulting from the particle swarm optimization
From Tables 4 and 5, some interesting points are noted:
- In the X, Y and Z directions, the best accuracy is obtained when case-2 is used (DLT from the optimal camera stations resulting from the particle swarm optimization).
- According to the obtained results, the minimum position error is provided by case-2, i.e. when DLT from the optimal camera stations resulting from the particle swarm optimization is used.
Table 5 further shows an improvement in the standard deviations of the individual point coordinates when case-2 (DLT from the optimal camera stations resulting from the particle swarm optimization) is used.
9. CONCLUSIONS
Genetic algorithm optimization and particle swarm optimization have been used to design the close range photogrammetry network, and the obtained accuracy is discussed and presented. Based on the experimental results, case-2 (DLT from the optimal camera stations resulting from the particle swarm optimization) provides better results for the X, Y and Z coordinates than case-1 (DLT from the optimal camera stations resulting from the genetic algorithm optimization).
From the above discussion, the following advantages of particle swarm optimization over genetic algorithm optimization can be drawn:
- Particle swarm optimization is easier to implement, and there are fewer parameters to adjust.
- Particle swarm optimization has a more effective memory capability than the genetic algorithm.
- Particle swarm optimization is more efficient in maintaining the diversity of the swarm, since all the particles use the information related to the most successful particle in order to improve themselves, whereas in the genetic algorithm the worse solutions are discarded and only the new ones are saved; i.e. in the genetic algorithm the population evolves around a subset of the best individuals.