Discrete Optimization
Developing effective meta-heuristics for a probabilistic
location model via experimental design
Hari K. Rajagopalan a, F. Elizabeth Vergara a, Cem Saydam a, Jing Xiao b
a Business Information Systems and Operations Management Department, The Belk College of Business, The University of North Carolina at Charlotte, Charlotte, NC 28223, United States
b Computer Science Department, College of Information Technology, The University of North Carolina at Charlotte, Charlotte, NC 28223, United States
Received 7 January 2005; accepted 9 November 2005
Available online 26 January 2006
Abstract
This article employs a statistical experimental design to guide and evaluate the development of four meta-heuristics
applied to a probabilistic location model. The meta-heuristics evaluated include evolutionary algorithm, tabu search,
simulated annealing, and a hybridized hill-climbing algorithm. Comparative results are analyzed using ANOVA.
Our findings show that all four implementations produce high quality solutions. In particular, it was found that on
average tabu search and simulated annealing find their best solutions in the least amount of time, with relatively small
variability. This is especially important for large-size problems when dynamic redeployment is required.
© 2005 Elsevier B.V. All rights reserved.
Keywords: Meta-heuristics; Experimental design; Location
1. Introduction
The goal of emergency medical services (EMS)
is to reduce mortality, disability, and suffering in
persons [9,21]. EMS administrators and managers
often face the difficult task of locating a limited
number of ambulances in a way that will yield
the best service to a constituent population. A
widely used EMS system performance metric is
the response time to incidents. Typically, calls
originating from a population center are assumed
to be covered if they can be reached within a time
threshold. This notion of coverage has been widely
accepted and is written into the EMS Act of 1973,
which requires that in urban areas 95% of requests
be reached in 10 minutes, and in rural areas, calls
should be reached in 30 minutes or less [3].
The literature on ambulance location problems
is rich and diverse. Most early models tended to be
deterministic in nature and somewhat less realistic
in part due to computational limitations faced by
the researchers at that time. Recent advances both
in hardware and software, and more importantly
in meta-heuristics, as well as the availability of
complex and efficient solvers have enabled
researchers to develop more realistic and sophisti-
cated models. A recent review by Brotcorne et al.
is an excellent source for readers to trace the devel-
opments in this domain [9]. Also, reviews by Owen
and Daskin [30], and Schilling et al. [36] examine
and classify earlier models.
Probabilistic location models acknowledge the
possibility that a given ambulance may not be
available, and therefore model this uncertainty
either using a queuing framework or via a mathe-
matical programming approach, albeit with some
simplifying assumptions. Probabilistic location
models built upon a mathematical programming
framework have evolved from two classical deter-
ministic location models. The first model is a cov-
erage model, known as the set covering location
problem (SCLP) developed by Toregas et al. [37].
The objective of this model is to minimize the
number of facilities required to cover a set of
demand points within a given response-time stan-
dard. In this model, all response units (ambu-
lances) are assumed to be available at all times
and all demand points are treated equally. The sec-
ond model is the maximal covering location prob-
lem (MCLP) by Church and ReVelle [11]. The
MCLP represents an important advance in that
it acknowledges the reality of limited resources
available to the EMS managers by fixing the num-
ber of response units while attempting to maximize
the population covered within the response-time
standard. Various extensions of these deterministic
models have also been developed [9,11,31,35].
In contrast to previous deterministic models,
Daskin’s maximum expected coverage location
problem (MEXCLP) model was among the first
to explicitly incorporate the probabilistic dimen-
sion of the ambulance location model [14]. Daskin
removed the implicit assumption that all units are
available at all times by incorporating a system-
wide unit busy probability into an otherwise deter-
ministic location model. Given a predetermined
number of response units, MEXCLP maximizes
the expected coverage subject to a response-time
standard or, equivalently, a distance threshold.
Daskin also designed an efficient vertex substitu-
tion heuristic in order to solve the MEXCLP. This
heuristic works well for small to medium sized
problems. However, a comparative study con-
ducted by Aytug and Saydam found that for
large-scale applications genetic algorithm imple-
mentations provided faster and better results,
although not sufficiently fast for real-time dynamic
redeployment applications [1].
In their 1989 article, ReVelle and Hogan [31]
suggest two important probabilistic formulations
known as the maximum availability location prob-
lem I and problem II (MALP I & II). Both models
distribute a fixed number of response units in
order to maximize the population covered with a
response unit available within the response-time
standard with a predetermined reliability. In
MALP I, a system-wide busy probability is com-
puted for all units, which is similar to MEXCLP,
while in MALP II, the region is divided into neigh-
borhoods and local busy fractions for response
units in each neighborhood are computed, assuming
that the immediate area of interest is isolated from
the rest of the region. Recently, Galvao et al. [16]
presented a unified view of MEXCLP and MALP
by dropping their simplifying assumptions and
using Larson’s hypercube model [25] to compute
response unit specific busy probabilities. They
demonstrated, using a simulated annealing meth-
odology, that both extended models (referred to
as EMEXCLP and EMALP) produced better
results over the previous models, with some addi-
tional computational burden.
One of the ways that EMS managers can
improve the system performance is by the dynamic
redeployment of ambulances in response to
demand that fluctuates throughout the week, day
of the week, and even hour by hour within a given
day. For example, morning rush hour commutes
shift workers from suburban communities towards
their workplaces in cities or major employment
centers, and returns workers home at the end of
the workday. Given that operating conditions fluc-
tuate, dynamic redeployment strategies require fast
and effective methods that can be applied in real-
time for typically large-scale problem domains.
To our knowledge, this problem has been scarcely
studied with the notable exception of a study by
Gendreau et al. [18]. In their novel approach,
Gendreau et al. build upon their earlier double cov-
erage model [17] and incorporate several practical
considerations. They show that their model can
be successfully solved using a parallelized version
of their tabu search algorithm first developed for
a static version of the double coverage model. In
a recent review article, Brotcorne et al. suggest
that any developments in this field are likely to take
place in the area of dynamic redeployment models
along with fast and accurate heuristics [9].
A crucial factor for dynamic redeployment
models is the size of zones used to represent poten-
tial demand areas. The accuracy and realism of
these models are increased by decreasing zone
sizes, which in turn increases the number of zones.
Given that the solution space of locating m
response units within n zones is n^m, the need for
fast and powerful heuristic algorithms for large-
scale applications becomes self-evident.
In this paper we report our work in applying
four meta-heuristic approaches to the MEXCLP
and our use of a statistical experimental design to
ensure unbiased and objective analyses [4], and we
present the results, findings, and comparative analyses,
which demonstrate the effectiveness of each meta-
heuristic on small- to large-scale problems.
This paper is organized as follows. In Section 2
MEXCLP is reviewed in detail. Section 3 describes
the four meta-heuristics, and Section 4 contains
the experimental design. Section 5 presents a
detailed report on the results and analyses. Section
6 includes concluding remarks and suggestions for
future research.
2. A probabilistic location model
MEXCLP proposed by Daskin [13,14] maxi-
mizes the expected coverage of demand subject
to a distance r (or time threshold t) by locating
m response units on a network of n zones. MEX-
CLP assumes that all units operate independently
and each has an identical system-wide busy prob-
ability, p. If a zone is covered by i units
(i = 0, 1, 2, ..., m), then the probability that it is
covered by at least one unit is $1 - p^i$. Let $h_j$ denote
the demand at node j; thus the expected coverage
of zone j is $h_j(1 - p^i)$. Let $a_{kj} = 1$ if a response unit
in zone k is within distance r of zone j (covers j), 0
otherwise; $y_{ij} = 1$ if zone j is covered at least i
times, 0 otherwise; and let $x_j$ = the number of
units located in zone j. Daskin formulates the
MEXCLP as an integer linear programming
(ILP) model as follows:

Maximize
$$\sum_{j=1}^{n} \sum_{i=1}^{m} (1-p)\,p^{i-1} h_j y_{ij} \qquad (1)$$

Subject to
$$\sum_{i=1}^{m} y_{ij} - \sum_{k=1}^{n} a_{kj} x_k \le 0 \quad \forall j, \qquad (2)$$
$$\sum_{j=1}^{n} x_j \le m, \qquad (3)$$
$$x_j = 0, 1, 2, \ldots, m \quad \forall j, \qquad (4)$$
$$y_{ij} = 0, 1 \quad \forall i, j. \qquad (5)$$
The objective function (1) maximizes the expected
number of calls that can be covered; constraint (2)
counts the number of times zone j is covered and
relates the decision variables $y_{ij}$ to the first set of
decision variables, $x_j$. Constraint (3) specifies the
number of response units available, and constraint
(4) allows multiple units to be located at any zone.
This formulation can be rewritten using fewer
variables with a non-linear objective function [34].
Let $y_j$ denote the number of times zone j is covered,
rewrite the expected coverage for zone j as
$h_j(1 - p^{y_j})$, and replace constraint (2) with
$\sum_{k=1}^{n} a_{kj} x_k = y_j$. The non-linear version is
formulated as follows:

Maximize
$$\sum_{j=1}^{n} h_j\left(1 - p^{y_j}\right) \qquad (6)$$

Subject to
$$\sum_{k=1}^{n} a_{kj} x_k = y_j \quad \forall j, \qquad (7)$$
$$\sum_{k=1}^{n} x_k \le m, \qquad (8)$$
$$x_k = 0, 1, 2, \ldots, m \quad \forall k, \qquad (9)$$
$$y_j = 0, 1, 2, \ldots, m \quad \forall j. \qquad (10)$$
The original MEXCLP and the non-linear formula-
tion above are theoretically identical since MEX-
CLP searches the feasible space using binary
variables that sum up to the number of times each
node is covered whereas the non-linear version
searches the feasible space using positive integers
which count the number of times each zone is cov-
ered. The MEXCLP model is useful for implementa-
tion via software like CPLEX [12]. We use CPLEX
to determine the optimal solution for the MEXCLP
ILP formulation. The non-linear model lends itself
to easier implementation for meta-heuristic search
methods such as tabu search and evolutionary algo-
rithms where the objective function (Eq. (6)) can be
directly coded as the fitness function.
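To make this link concrete, the sketch below evaluates Eq. (6) for a candidate set of unit locations. It is an illustrative Python rendering, not the authors' code: the coverage lists, parameter names, and zero-based zone indexing are assumptions.

```python
from typing import Sequence

def expected_coverage(assignment: Sequence[int],
                      demand: Sequence[float],
                      covers: Sequence[Sequence[int]],
                      p: float) -> float:
    """Non-linear MEXCLP objective (Eq. (6)): sum_j h_j * (1 - p**y_j).

    assignment : zone index of each response unit (length m)
    demand     : demand rate h_j for each zone j (length n)
    covers     : covers[k] lists the zones j with a_kj = 1, i.e. the zones
                 reachable from zone k within the distance/time standard
    p          : system-wide unit busy probability
    """
    n = len(demand)
    y = [0] * n                      # y_j: number of units covering zone j
    for k in assignment:
        for j in covers[k]:
            y[j] += 1
    return sum(h * (1.0 - p ** yj) for h, yj in zip(demand, y))
```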
In order to solve MEXCLP, we begin by gener-
ating a hypothetical city (county, region) spread
across 1024 sq. miles and impose a 32 mile by
32 mile grid over it. Each problem size utilizes this
grid; however each problem size imposes a differ-
ent level of granularity. For example, an 8 × 8 grid
would contain 64 zones with each zone being a
square of 4 miles by 4 miles, a 16 × 16 grid would
contain 256 zones with each zone being a square of
2 miles by 2 miles, and a 32 × 32 grid would con-
tain 1024 zones with each zone being a square of
1 mile by 1 mile.
Furthermore, we randomly generated uni-
formly and non-uniformly distributed call
(demand) rates for each of the 1024 zones. A uniform
distribution spreads call demand fairly evenly
across the region, whereas non-uniformly distributed
demand simulates a more realistic scenario in which
there are significant variations across the region,
as shown in Fig. 1. The
demand rates for 64 and 256 zone problems are
then computed by a straightforward aggregation
scheme using the demand rates for the 1024 zone
representation. Furthermore, for each grid (prob-
lem) size twenty demand (call) distributions are
generated, 10 having a uniform demand distribu-
tion and 10 having a non-uniform distribution.
These two elements, the grid representation and
demand distributions, define the basis for the
problem which can be briefly restated as follows:
for a given problem size and call demand distribu-
tion, locate m units (ambulances) to maximize the
expected coverage.
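The paper describes the aggregation from the 1024 zone rates to the coarser grids only as straightforward. One plausible reading, sketched below purely for illustration, is to sum the call rates of each 2 × 2 (or 4 × 4) block of fine zones into a single coarse zone; the block-sum scheme and function name are assumptions.

```python
def aggregate_demand(fine, factor):
    """Sum demand rates of factor x factor blocks of a square grid.

    For example, factor=2 turns a 32x32 (1024 zone) grid into a
    16x16 (256 zone) grid; factor=4 yields an 8x8 (64 zone) grid.
    """
    size = len(fine)
    coarse_size = size // factor
    coarse = [[0.0] * coarse_size for _ in range(coarse_size)]
    for r in range(size):
        for c in range(size):
            coarse[r // factor][c // factor] += fine[r][c]
    return coarse
```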
3. Meta-heuristic search methods
Location/relocation problems are typically NP-
complete problems [33]. The size of the solution
space of locating m response units in n zones is
n^m. Because of the complex combinatorial nature
of these problems, there have been various
attempts to identify near-optimal solutions
through the use of meta-heuristic search methods;
most recently, evolutionary algorithms [1,5,7,8,17,
18,23,33,34].
Meta-heuristic search methods try to find good
solutions by searching the solution space using two
kinds of search pressures: (1) exploration, and (2)
exploitation. Exploration pressures cause the
meta-heuristics to evaluate solutions which are
outside the current neighborhood of a given solu-
tion, while exploitation pressures help the meta-
heuristics use the current position of a solution
in a neighborhood to search for better solutions
within that neighborhood.
In this paper, we consider four meta-heuristic
search methods, evolutionary algorithm (EA),
tabu search (TS), simulated annealing (SA), and
hybridized hill-climbing (HC), and attempt to
adapt each meta-heuristic’s method of exploration
and exploitation to help identify good solutions in
MEXCLP.
Fig. 1. Non-uniformly distributed call (demand) data.

EA is a meta-heuristic search method that uses
the concept of evolution to generate solutions to
complex problems that typically have very large
search spaces, and hence are difficult to solve [22].
EAs ‘‘are a class of general purpose (domain inde-
pendent) search methods’’ [26]. There are numer-
ous and varied examples of how EAs can be
applied to complex problems [2,26,38], including
the problems in the location domain [1,6].
TS is a meta-heuristic search method developed
by Glover [19,20]. A unique feature of TS is its use
of a memory, or list. Once a solution is entered
into the TS memory, it is then tabu, or disal-
lowed, for some time. The idea is to restrict the
algorithm from visiting the same solution more
than once in a given time period. The exploration
pressure in TS is its ability to accept a worse solu-
tion as it progresses through its search. Tabu
search has been successfully applied to various
problem domains [29], including covering location
models [17,18].
SA is a meta-heuristic that accepts a worse solu-
tion with some probability [24]. The ability of SA
to ‘‘go downhill’’ by accepting a worse solution
acts as a mechanism for exploratory pressure and
can help SA escape local maxima. SA has been
successfully used for location modeling [9,15].
HC is an iterative improvement search method.
Therefore, strictly speaking HC is not a meta-heu-
ristic, however in our implementation HC is
hybridized via the inclusion of mutation opera-
tors. The algorithm, as originally devised, "is a
strategy which exploits the best solution for possi-
ble improvement; [however] it neglects explora-
tion of the search space" [26]. The addition of
the mutation operators in the hybridized HC
greatly improves exploration of the search space;
allowing HC to search each neighborhood where
it lands in a systematic way before moving to
the next randomly determined location in the
search space.
3.1. Data representation
The data representation used for all meta-heu-
ristics is a vector. A vector is a one-dimensional
array of size m + 1, where m is the number of
response units in the system. Index 0 in each vector
contains the vector’s fitness value (fv) as deter-
mined by the evaluation function. Each of the
remaining indices within a vector, indices 1
through m, represents a specific response unit
and the value recorded within each of these indices
indicates the zone to which that unit is assigned.
The zone numbers range between 1 and n, where
n is the number of zones in the grid. Therefore, n
equals 64, 256, and 1024 for each problem size,
respectively. Fig. 2 illustrates the data representa-
tion, i.e., vector, used for all meta-heuristics.
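A minimal sketch of this vector layout, assuming a plain Python list; the helper name is illustrative.

```python
import random

def random_vector(m: int, n: int) -> list:
    """[fitness, zone_1, ..., zone_m]: index 0 holds the fitness value,
    indices 1..m hold the zone (1..n) assigned to each response unit."""
    return [0.0] + [random.randint(1, n) for _ in range(m)]
```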
3.2. Algorithm structure
All four meta-heuristic algorithms have the fol-
lowing common structure:
BEGIN
Initialization
Evaluation of data vector(s)
LOOP:
Operation(s) to alter data vector(s)
Evaluation of data vector(s)
UNTIL termination condition
END
3.2.1. Initialization and evaluation
In the initialization step each response unit is
randomly assigned a zone number within the prob-
lem grid; this random solution is then evaluated
using Eq. (6). The fitness value is then stored in
a vector. TS, SA, and HC perform their search
using a single vector; in contrast, EA uses a popu-
lation of vectors. Given that the objective is to
maximize expected coverage, higher evaluations,
i.e., larger fitness values, represent better solutions.
3.2.2. Operators
Operators are procedures used within meta-
heuristics that help define the different levels of
search pressures: exploration and exploitation. In
this study we utilize a number of operators such
as low-order mutation, high-order mutation, and
crossover. However, due to the structure of the
different meta-heuristics, not all operators are used
in all the meta-heuristics.

Fig. 2. Data representation (index 0 holds the fitness value fv; indices 1 through m hold the assigned zone numbers, e.g., 12, 28, 3).
Crossover is a genetic operator that needs at
least two vectors to create new vectors, called off-
spring. We use a one-point crossover where two
parents create two offspring [26,27]. Crossover is
used to promote exploratory pressure which forces
the meta-heuristic to look for a solution that is in
between other solutions already identified within
the search space.
The idea behind the mutation operator is to
provide exploitation pressure in order to search
the neighborhood for a better solution. In our
implementation, the low-order mutation operator
is used to perform a neighborhood search. Neigh-
borhood searches are commonly encountered in
meta-heuristics [27]. The definition of a neighbor-
hood emphasizes the difference between the com-
peting search pressures of exploration and
exploitation. We define a neighborhood as the
adjacent zones immediately surrounding a given
response unit’s location. For example, in Fig. 3,
given a response unit in zone 28 the neighborhood
zones are 19, 20, 21, 27, 29, 35, 36 and 37.
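A sketch of this neighborhood definition on a square grid with zones numbered row by row from 1, as in Fig. 3; the function name is illustrative.

```python
def neighborhood(zone: int, grid_side: int) -> list:
    """Zones immediately adjacent (including diagonals) to `zone` on a
    grid_side x grid_side grid numbered 1..grid_side**2 row by row."""
    row, col = divmod(zone - 1, grid_side)
    neighbors = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            r, c = row + dr, col + dc
            if 0 <= r < grid_side and 0 <= c < grid_side:
                neighbors.append(r * grid_side + c + 1)
    return neighbors

# neighborhood(28, 8) -> [19, 20, 21, 27, 29, 35, 36, 37]
```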
When this procedure is called, the algorithm
reads the zone within a selected index of a vector.
The neighborhood zones are then systematically
placed in the selected index within the vector,
freezing the values in all other indices, and the fit-
ness value for the resulting vector is calculated. At
the end of the neighborhood search, the vector
that has the highest fitness value is returned.
The low-order mutation operator promotes
exploitation pressure. It is possible to design an
algorithm with stronger exploitation pressure by
searching the entire neighborhood for all m
response units. That is, completing a comprehen-
sive neighborhood search for all indices simulta-
neously. However, this would require 8^m
evaluations, which may not be computationally
feasible for large values of m. Consequently, for
each implementation of the low-order mutation
operator we compute, on average, 8 * m evalua-
tions searching within one selected index, while
holding all other indices constant.
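A sketch of this low-order mutation (a greedy neighborhood search on one selected index), reusing the neighborhood helper sketched above. Whether a worse result is accepted is left to the calling meta-heuristic, as described for TS, SA, and HC below; names are illustrative.

```python
def low_order_mutation(vector, index, grid_side, evaluate):
    """Greedy neighborhood search on one selected index of the vector.

    vector   : [fitness, zone_1, ..., zone_m]
    index    : which unit (1..m) to mutate
    evaluate : callable mapping a list of m zone numbers to a fitness value
    Returns the best neighboring candidate, all other indices held constant.
    """
    best = None
    for zone in neighborhood(vector[index], grid_side):
        candidate = list(vector)
        candidate[index] = zone
        candidate[0] = evaluate(candidate[1:])
        if best is None or candidate[0] > best[0]:
            best = candidate
    return best
```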
High-order mutation is another genetic muta-
tion operator, and is applied to one parent vector
in order to create one offspring. To begin the high-
order mutation, two cut locations are randomly
determined. A third number, between 0 and 1, is
then randomly generated. If the number is less
than 0.5, then the ‘‘genetic material’’ between the
cut points remains unaltered and the data outside
the cuts is changed. The selected indices are filled
in with randomly generated numbers between 1
and n where n is the number of zones in the prob-
lem grid. However, if the third randomly generated
number is greater than or equal to 0.5, then the
‘‘genetic material’’ outside the cuts remains unal-
tered and the data between the cuts is changed
by filling in the selected indices with randomly gen-
erated numbers between 1 and n. Fig. 4 illustrates
the high-order mutation operator.
The high-order mutation operator is desirable
because it keeps part of the earlier solution while
moving away from the neighborhood. High-order
mutation has higher exploratory pressure than
low-order mutation.
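A sketch of the two-cut high-order mutation just described; names are illustrative, and the fitness slot is left for the caller to re-evaluate.

```python
import random

def high_order_mutation(vector, n):
    """Two random cut points; with probability 0.5 randomize the indices
    between the cuts, otherwise randomize the indices outside the cuts.

    vector : [fitness, zone_1, ..., zone_m]; n : number of zones.
    """
    m = len(vector) - 1
    cut1, cut2 = sorted(random.sample(range(1, m + 1), 2))
    child = list(vector)
    mutate_inside = random.random() >= 0.5
    for i in range(1, m + 1):
        inside = cut1 <= i <= cut2
        if inside == mutate_inside:
            child[i] = random.randint(1, n)
    return child
```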
3.2.3. Termination condition
Preliminary tests indicated that the improve-
ments experienced by all four meta-heuristics were
less significant after 100 iterations. For that rea-
son, all meta-heuristics for all problem sizes were
programmed to terminate after 100 iterations,
unless a significant improvement had occurred in
the last 20 iterations. If so, the algorithm was
allowed to run longer. An algorithm requiring
more than 100 iterations continues to run until
no improvements are identified for 20 successive
iterations.
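One way to express this stopping rule; the numeric threshold for a "significant improvement" is not given in the paper, so it is left as a parameter.

```python
def should_terminate(history, min_iters=100, window=20, tol=1e-9):
    """history: best fitness value recorded at the end of each iteration.

    Stop once at least `min_iters` iterations have run and the best value
    has not improved by more than `tol` over the last `window` iterations.
    """
    if len(history) < min_iters:
        return False
    recent_gain = history[-1] - history[-window - 1]
    return recent_gain <= tol
```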
3.3. Evolutionary algorithm (EA)
EAs create, manipulate, and maintain a popula-
tion of solutions to problems, where each solution
is called a vector (commonly referred to as chro-
mosome) representing a possible solution to the
problem. Interestingly, one can think of the EAs
as performing multiple searches in "parallel
because each [chromosome] in the population
can be seen as a separate search" [32].

Fig. 3. Grid representation of zones for 64 zone (8 × 8) problem:
 1  2  3  4  5  6  7  8
 9 10 11 12 13 14 15 16
17 18 19 20 21 22 23 24
25 26 27 28 29 30 31 32
33 34 35 36 37 38 39 40
41 42 43 44 45 46 47 48
49 50 51 52 53 54 55 56
57 58 59 60 61 62 63 64
The larger the population size, typically, the
better the final solution is; however, computation
time increases proportionally with an increase in
population size. Preliminary tests were run to
determine the population size, and the optimal
genetic operator probability rates; these parame-
ters are in Table 2. The initial experiments were
run by taking a candidate problem and running
it using crossover, high-order, and low-order
mutation rates from 0.1 to 0.9 (increasing incre-
ments of 0.1). This was done for population sizes
of 20, 50, and 100. Therefore, we collected data
from 2187 runs (9 * 9 * 9 * 3 = 2187). From this
data, we examined the trade-off between solution
quality and the time needed to reach it, and decided
on the following parameters: population size = 20,
crossover rate = 0.5, high-order mutation
rate = 0.1, and low-order mutation rate = 0.1.
The EA procedure is as follows:
1. Generate a population of vectors of size 20.
2. Each vector has a probability of being selected
for crossover, high-order or low-order mutation
as described in Table 2. The vector with the
highest evaluation function has a probability
of 1.0 of being selected for all three operations.
3. A vector will not have multiple operations done
on it in the same generation, but two copies of
the same vector can be selected for two different
operations. For example, a result of mutation in
one generation will not be used for crossover in
the same generation; however, the parent which
was used for mutation may be selected for
crossover.
4. The operators of crossover, high-order, and
low-order mutation are applied to the selected
vectors.
5. The old population and the offspring are put
together in a pool and the next generation is
created by tournament selection [26,27] per-
formed on two randomly selected vectors from
the pool. Of the two vectors, the vector with a
higher evaluation function is the winner and is
copied into the next generation’s population.
An elitist strategy is also employed, where the
best vector is always carried forward.
6. Once the new population is selected, the process
goes back to step 2 until the termination condi-
tion is reached.
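A condensed sketch of one EA generation following steps 2-6, reusing the operator sketches above. The exact bookkeeping (how the best vector is forced into every operation, how offspring are pooled) is an interpretation, not the authors' implementation.

```python
import random

def one_point_crossover(p1, p2):
    """One-point crossover on the zone part of two parent vectors."""
    m = len(p1) - 1
    cut = random.randint(1, m - 1)
    c1 = [0.0] + p1[1:cut + 1] + p2[cut + 1:]
    c2 = [0.0] + p2[1:cut + 1] + p1[cut + 1:]
    return [c1, c2]

def ea_generation(population, evaluate, n, grid_side,
                  p_cross=0.5, p_high=0.1, p_low=0.1):
    """One EA generation: probabilistic operator selection, tournament
    selection over parents plus offspring, and an elitist copy of the best."""
    best = max(population, key=lambda v: v[0])
    offspring = []
    for vec in population:
        if random.random() < p_cross or vec is best:
            mate = random.choice(population)
            offspring.extend(one_point_crossover(vec, mate))
        if random.random() < p_high or vec is best:
            offspring.append(high_order_mutation(vec, n))
        if random.random() < p_low or vec is best:
            idx = random.randint(1, len(vec) - 1)
            offspring.append(low_order_mutation(vec, idx, grid_side, evaluate))
    for child in offspring:                      # evaluate new vectors
        child[0] = evaluate(child[1:])
    pool = population + offspring
    next_pop = [list(best)]                      # elitist strategy
    while len(next_pop) < len(population):       # tournament selection
        a, b = random.sample(pool, 2)
        next_pop.append(list(a if a[0] >= b[0] else b))
    return next_pop
```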
3.4. Tabu search (TS)
TS uses only one vector to traverse the search
space, consequently the quality of the final solu-
tion for TS is dependent on the quality of the ini-
tial vector. As with all the meta-heuristics, TS uses
the low-order mutation operator to provide
exploitation pressure, and the best vector found
by using the low-order mutation operator is
accepted regardless of whether it is worse than
the original solution. By continually accepting
the best solution in a vector's neighborhood, TS is
able to systematically search the problem's search
space. The best solution found throughout the
run of the TS algorithm is always recorded. How-
ever, the best solution found might not be the solu-
tion contained in the final (last) vector when the
algorithm terminates.

Fig. 4. High-order mutation (mutate ends): two randomly selected cut points divide the parent vector, and, depending on a random draw, either the indices between the cuts or the indices outside the cuts are refilled with randomly generated zone numbers (illustrated for an 8 × 8 grid, i.e., 64 possible zones).
To tune the memory parameter a candidate
problem was selected randomly and was run for
tabu list sizes from 10 chromosomes to 200
(increasing increments of 10) for 8, 9, 10, 11 servers
(total runs 20 * 4 = 80). We noticed in our initial
runs that there were minor fluctuations close to
values that were multiples of the number of servers.
We hypothesized that this occurs because our greedy
search operator, the low-order mutation operator,
changes only one index in the vector at every iteration,
systematically starting from the first index and moving
to the last index, and because the vector size is m, the
number of servers: having m chromosomes in the tabu
list stores the zones of all the servers being changed
once, and having 3 * m in the tabu list stores the zones
of all the servers being changed three times. The reason
that we did not use just the zones that are being
changed, but included the entire chromosome as tabu,
is that the problem does not depend upon one location
but upon the locations of all servers at any point in
time. Again, the tabu list size of 3 * m was decided
upon by running initial experiments with values
from 1 * m to 20 * m for a candidate problem.
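A sketch of one TS iteration with a whole-solution tabu list of length 3 * m; the deque-based list, the cyclic choice of the mutated index, and the handling of a tabu candidate are assumptions consistent with the description above, and the best-so-far solution would be tracked separately by the caller.

```python
from collections import deque

def tabu_search_step(current, iteration, tabu, evaluate, grid_side):
    """One TS iteration: greedy neighborhood move on the next index,
    accepted even if it is worse, unless the resulting solution is tabu."""
    m = len(current) - 1
    index = (iteration % m) + 1                  # cycle through the m units
    candidate = low_order_mutation(current, index, grid_side, evaluate)
    if tuple(candidate[1:]) in tabu:             # whole chromosome is tabu
        return current
    tabu.append(tuple(candidate[1:]))            # deque(maxlen=3*m) forgets
    return candidate                             # the oldest entries

# usage: tabu = deque(maxlen=3 * m)
```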
3.5. Simulated annealing (SA)
SA, like TS, uses a single vector to search for a
good solution to a given problem, and like TS the
final solution is also dependent on the quality of
the initial vector. SA starts by using a randomly
generated vector, and then uses the low-order
mutation operator to evolve the solution.
After initial trial runs with problem sizes of 64,
256 and 1024 zones, we noticed that SA tends to
get stuck at local optima, especially for larger sized
problems. The problem of getting stuck at local
optima was more pronounced for SA than for
any of the other methods. Upon closer investiga-
tion, it was determined that the temperature
parameter had not been properly tuned. Due to
the large number of problems to be solved it was
not practical to tune the temperature for every
problem. Therefore, in our implementation, we
use an approach developed by Chiyoshi et al.
[10] to automatically adjust the temperature
parameter during runtime through the use of a
feedback loop. This method employs the ratio of
the number of solutions accepted to the number
of solutions generated as a feedback loop from
the system. When this ratio drops below 40% the
exploration pressure is not strong enough and
the temperature is raised, however, when the ratio
goes above 60% the exploration pressure is too
high and the temperature is lowered. After some
experimentation with initial temperature values
of 20, 40 and 60, an initial temperature of 60 was
selected.
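A sketch of the SA move and of the temperature feedback rule reproduced in Table 2. The exponential (Metropolis-style) acceptance probability is a standard SA assumption, since the paper does not spell out its acceptance function.

```python
import math
import random

def sa_step(current, index, temperature, evaluate, grid_side):
    """One SA move: greedy neighborhood candidate, accepted if better or,
    if worse, with probability exp(delta / temperature)."""
    candidate = low_order_mutation(current, index, grid_side, evaluate)
    delta = candidate[0] - current[0]
    accepted = delta >= 0 or random.random() < math.exp(delta / temperature)
    return (candidate if accepted else current), accepted

def adjust_temperature(temperature, accepted_count, generated_count):
    """Feedback rule from Table 2: keep the acceptance ratio between
    40% and 60% by raising or lowering the temperature."""
    ratio = accepted_count / generated_count
    if ratio < 0.4:
        return temperature * (1 + (0.4 - ratio) / 0.4)
    if ratio >= 0.6:
        return temperature / (1 + (ratio - 0.6) / 0.4)
    return temperature
```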
3.6. Hill-climbing (HC)
HC also maintains a single vector throughout
the run of the algorithm, which is altered at each
generation by searching the neighborhood for bet-
ter solutions. After initial trial runs with small
problem sizes (64 zones with 8 response units),
HC often got stuck at local optima when using
the low-order mutation operator. The random-
restart operator provides enormous exploratory
pressure, and forces the meta-heuristic to begin
searching in a different region within the search
space, thus escaping local optima. This process
was not sufficient for HC to effectively compete
with the other three meta-heuristics. Therefore,
in an effort to hybridize HC, the high-order muta-
tion operator was introduced as an intermediate
step between the low-order mutation operator
and random-restart procedure to alleviate this
problem.
HC begins with a randomly generated vector,
and then employs the low-order mutation opera-
tor. Only if the new vector is better than the old
vector will the new vector be passed to the next
iteration. The low-order mutation operator is used
until m iterations have passed without any new
solution being accepted, where m is the number
of servers. We use m iterations because of the
way the low-order mutation operator is designed.
The low-order mutation operator, regardless of
whether employed in HC, SA, TS, or EA, always
takes the first index of the vector which represents
the first server and mutates it with a greedy neigh-
borhood search. This constitutes one iteration for
HC. This new vector will only be accepted if it is
better than the existing vector. In the second iteration,
the second server is mutated and so on. Therefore,
after m iterations, all m server positions have been
changed. If in HC, m iterations have passed and no
new solution has been accepted, then it means that
an attempt has been made to change all the server
positions individually and that the current position
is a local optimum.
Once no more improvements can be found
through the use of the low-order mutation opera-
tor, the algorithm then performs high-order muta-
tion on the vector. After the vector undergoes the
high-order mutation, the algorithm begins search-
ing the new neighborhood using the low-order
mutation operator again. After the high-order
mutation is applied m times without an improve-
ment it is determined that the neighborhood has
been exploited. The random-restart procedure is
then used to start the search in a new
neighborhood.
The ‘‘best’’ result from the multiple restarts is
deemed to be the ‘‘best’’ solution achieved by the
HC algorithm. This procedure is referred to as
random-restart hill-climbing [32].
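A condensed sketch of the hybridized hill-climbing loop described in this section, reusing the operator sketches above; the counters, the number of restarts, and the interaction with the 100-iteration termination rule are simplifications.

```python
import random

def hill_climb(m, n, grid_side, evaluate, restarts=10, max_iters=100):
    """Hybridized hill-climbing: low-order mutation until m non-improving
    iterations, then high-order mutation; a new random restart once the
    high-order mutation fails m times in a row."""
    best = None
    for _ in range(restarts):
        current = [0.0] + [random.randint(1, n) for _ in range(m)]
        current[0] = evaluate(current[1:])
        stale_low = stale_high = it = 0
        while it < max_iters and stale_high < m:
            if stale_low < m:                      # exploit the neighborhood
                index = (it % m) + 1
                cand = low_order_mutation(current, index, grid_side, evaluate)
                improved = cand[0] > current[0]
                stale_low = 0 if improved else stale_low + 1
            else:                                  # escape via high-order mutation
                cand = high_order_mutation(current, n)
                cand[0] = evaluate(cand[1:])
                improved = cand[0] > current[0]
                if improved:
                    stale_low = stale_high = 0     # resume low-order search
                else:
                    stale_high += 1
            if improved:                           # accept only improvements
                current = cand
            it += 1
        if best is None or current[0] > best[0]:
            best = current                         # best over all restarts
    return best
```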
4. Experimental design
To test the meta-heuristics we use a statistical
experimental design approach advocated by Barr
et al. [4] which enables objective analyses and pro-
vides statistically significant conclusions, including
interactions among independent variables. Table 1
summarizes the elements unique to each of the
four meta-heuristics. Table 2 summarizes the
unique parameters used by each meta-heuristic,
and Table 3 summarizes the approximate number
of evaluation functions required for each meta-
heuristic.
Table 4 shows the different independent vari-
ables and dependent variables. There are three
independent variables, (1) distribution (D), (2) size
(S), and (3) heuristic search method (HS), and two
dependent variables, (1) quality of solution (QOS)
and (2) time to reach best solution (TBS).

Table 1
Summary of all meta-heuristics

EA: uses low-order mutation, high-order mutation, crossover, and selection (operator and generational); common data representation; randomly generated initial vectors; common fitness evaluation function; other unique parameters: Prob(crossover) and Prob(mutation) for the low-order and high-order operators.
HC: uses low-order and high-order mutation; common data representation; randomly generated initial vector; single vector; common fitness evaluation function; no other unique parameters.
SA: uses low-order mutation; common data representation; randomly generated initial vector; single vector; common fitness evaluation function; other unique parameter: temperature.
TS: uses low-order mutation; common data representation; randomly generated initial vector; single vector; common fitness evaluation function; other unique parameter: memory (list) size.
Problems with demand that is distributed uni-
formly across a grid are easier for all heuristic
methods to solve as opposed to when demand is
distributed non-uniformly. With regard to a uni-
form demand distribution, randomly assigned
locations for response units have less deviation in
the quality of their solutions, and are faster to
identify solutions that adequately cover demand.
For non-uniform demand distributions, the ran-
dom assignment does not give any assurance
that the quality of the solution will be consistent.
In the case of non-uniform demand distributions,
the initial randomly generated chromosome can
have vastly different fitness values. Therefore, we
can hypothesize that (H1) problems with uni-
formly distributed demand will be solved with a
better QOS and with smaller TBS values (faster),
than problems with non-uniform demand
distribution.
In our implementation of the different meta-
heuristic methods, the exploitation pressure is the
same. All of the methods use the low-order muta-
tion operator for exploitation. However each
meta-heuristic has different degrees of exploration
pressures. HC uses high-order mutation and a ran-
dom-restart procedure, SA and TS can accept
worse solutions, and EA uses random generation
of an initial population, and crossover and high-
order mutation operators. Therefore, we hypothe-
size that (H2) the four heuristic search methods
will solve problems with different values of TBS
and QOS.
As can be seen from Table 4, all the indepen-
dent variables are categorical, and the dependent
variables are continuous. Since there is more than
one independent variable it is necessary to set up
the experiment to consider the effects of each inde-
pendent variable upon each of the dependent vari-
ables individually. This process also takes into
consideration interactions between independent
variables. Therefore, we use a randomized facto-
rial design and ANOVA [4,28] to test for direct
effects of the independent variables and their
interactions.
Table 2
Summary of unique parameters used for each meta-heuristic

Meta-heuristic  Prob. of low-order mutation  Prob. of high-order mutation  Prob. of crossover  Population size  Temperature      Memory (list) size
EA              0.1                          0.1                           0.5                 20               N/A              N/A
HC              N/A                          N/A                           N/A                 1                N/A              N/A
SA              N/A                          N/A                           N/A                 1                feedback loop    N/A
TS              N/A                          N/A                           N/A                 1                N/A              3 * m, where m = the # of response units in the system

SA temperature feedback loop: Probability = # of solutions accepted / # of solutions generated;
if (Probability < 0.4), then Temperature = Temperature * (1 + ((0.4 - Probability)/0.4));
else if (Probability >= 0.6), then Temperature = Temperature * (1/(1 + ((Probability - 0.6)/0.4))).

A problem configuration is defined as the prob-
lem's grid size and the type of demand distribution
for all four values of m (8, 9, 10 and 11), where m is
the number of response units in the system. There
are six problem configurations (three sizes and two
distributions). Each problem configuration has 10
demand distributions and each demand distribu-
tion is run 10 times for each of the four values of
m for a total of 400 test problems. This is done
for each of the four meta-heuristics. When testing
across the four meta-heuristics there are a total of
1600 (4 × 400) data points.
The solutions for each of the four meta-heuris-
tic methods were compared with the optimum
solution identified through integer linear program-
ming using CPLEX. All runs are done on a Dell
PC with an Intel Pentium 4 2.4 GHz processor,
and 512MB of RAM.
When evaluating the results, two metrics are
used: (1) the average quality of solution (QOS),
or how close the solution given by each meta-heu-
ristic is to the optimum solution, and (2) the aver-
age time at which the best solution (TBS) was
found by each meta-heuristic. When comparing
the computing effort of the different algorithms,
the number of iterations is a misleading metric
since a meta-heuristic search method might require
fewer iterations, and yet still perform more evalu-
ation functions, thereby requiring more time to
complete than another meta-heuristic. The number
of evaluations done for each meta-heuristic during
an iteration can also vary substantially; therefore,
the number of iterations to reach the "best" solu-
tion is not strictly comparable. For these reasons,
we use time (seconds) to reach the "best" solution
for each meta-heuristic as a proxy for computing
effort.
5. Results and discussion
We solved 400 problems and computed the
average and standard deviation of solution quality
(QOS) and time to best solution (TBS) for each of
the six problem configurations and heuristic com-
binations. The total number of problems solved
was 9600. Therefore, the sample size for each problem
configuration and heuristic combination is 400. Table 5 displays the mean and stan-
dard deviation for solution quality (QOS) and time
(TBS), respectively.
Overall, the results show that our implementa-
tions of all four meta-heuristics produce very good
results, particularly with respect to solution qual-
ity. The average of solution quality is nearly 99%
with a standard deviation of about 1% across all
problem configurations and all meta-heuristics
(9600 problems). The data was tested using
ANOVA, and post hoc Tamhane’s analysis was
completed to find specific differences. Tamhane’s
analysis was chosen over Bonferroni's or Tukey's
because we could not assume equal variances
among the different categories of the independent
variables. Tables 6–8 show the ANOVA and the
post hoc Tamhane's analysis done with respect
to QOS. An alpha of 0.05 was set for statistical
significance.

Table 3
The approximate number of evaluation functions required for each meta-heuristic

Meta-heuristic  Initial evaluations    Approximate evaluations per 100 iterations
                (random generation)    Crossover   High-order mutation   Low-order mutation   Random restart   Total
EA              20                     8 * 100     2 * 100               16 * 100             N/A              2620
HC              1                      N/A         3 * 1                 8 * 95               2 * 1            766
SA              1                      N/A         N/A                   8 * 100              N/A              801
TS              1                      N/A         N/A                   8 * 100              N/A              801

Table 4
Experimental setup

Independent variables:
(1) Distribution (D): (a) uniform (U); (b) non-uniform (NU)
(2) Size (S): (a) 64 zones; (b) 256 zones; (c) 1024 zones
(3) Heuristic search method (HS): (a) hill-climbing (HC); (b) tabu search (TS); (c) simulated annealing (SA); (d) evolutionary algorithm (EA)

Dependent variables:
(1) Quality of solution (QOS), measured as the ratio of the given solution to the optimum solution
(2) Time to reach the best solution (TBS)
The ANOVA results from Table 6 show that
the variables S (size) and HS (heuristic search) sig-
nificantly (alpha < 0.01) affect QOS. We can also
see that variable D (distribution) also significantly
affects QOS (alpha < 0.05). Therefore we can infer
from Table 6 that the quality of solution depends
on problem size, the type of meta-heuristic, and
the kind of demand distribution.
Table 6 also shows significant two way interac-
tions between the variables S and D (alpha <
0.01), S and HS (alpha < 0.01) and between D
and HS (alpha < 0.05). The three way interaction
between S, D and HS is insignificant (alpha =
0.39). Closer examination of the interaction
between S and HS shows that for 64 zone prob-
lems HC performs significantly better than TS
and SA, but for 256 zone and 1024 zone problems
there is no significant difference among these three
meta-heuristics. Investigation into the interaction
between D and HS shows that while HC, SA and
TS do significantly better with uniform demand
distributions than non-uniform distributions, EA
does better with non-uniform demand distribution
than it does with uniform demand distributions.

Table 5
Performance of different meta-heuristic search methods with respect to the quality of solution (QOS) and time (in seconds) needed to reach the best solution (TBS); entries are mean (std. dev.), N = 9600

Distribution   Size   Metric   CPLEX            HC              TS              SA              EA
Uniform        64     QOS      1.0000 (0.00)    0.9924 (0.76)   0.9882 (1.09)   0.9884 (0.93)   0.9927 (0.62)
                      TBS      0.19 (0.15)      0.36 (0.17)     0.20 (0.09)     0.19 (0.08)     2.39 (1.95)
               256    QOS      1.0000 (0.00)    0.9906 (0.94)   0.9893 (1.05)   0.9899 (0.97)   0.9936 (0.58)
                      TBS      3.92 (28.83)     2.35 (0.79)     1.88 (0.68)     1.86 (0.68)     9.82 (4.69)
               1024   QOS      1.0000 (0.00)    0.9873 (1.18)   0.9885 (1.05)   0.9873 (1.26)   0.9918 (0.76)
                      TBS      137.16 (61.42)   37.67 (13.98)   37.96 (14.84)   38.11 (13.41)   143.47 (45.73)
Non-uniform    64     QOS      1.0000 (0.00)    0.9926 (0.84)   0.9886 (1.11)   0.9885 (1.11)   0.9940 (0.62)
                      TBS      0.16 (0.11)      0.36 (0.18)     0.20 (0.09)     0.19 (0.09)     2.56 (2.30)
               256    QOS      1.0000 (0.00)    0.9903 (0.89)   0.9890 (1.02)   0.9890 (1.01)   0.9937 (0.61)
                      TBS      2.66 (2.00)      2.35 (0.79)     1.98 (0.73)     1.84 (0.68)     10.30 (4.84)
               1024   QOS      1.0000 (0.00)    0.9861 (1.20)   0.9852 (1.38)   0.9857 (1.15)   0.9918 (0.71)
                      TBS      146.33 (78.79)   37.07 (14.67)   35.75 (13.60)   37.78 (13.84)   147.47 (48.63)

Table 6
ANOVA results: dependent variable quality of solution

Source             Sum of squares   df     Mean square   F                Sig.    Observed power (a)
Corrected model    0.0617 (b)       23     0.0027        28.148           0.000   1.000
Intercept          9404.6136        1      9404.6136     98,661,395.587   0.000   1.000
Size (S)           0.0157           2      0.0079        82.425           0.000   1.000
Distribution (D)   0.0005           1      0.0005        4.814            0.028   0.593
Heuristics (HS)    0.0368           3      0.0123        128.628          0.000   1.000
S * D              0.0017           2      0.0009        8.973            0.000   0.974
S * HS             0.0056           6      0.0009        9.814            0.000   1.000
D * HS             0.0008           3      0.0003        2.922            0.033   0.698
S * D * HS         0.0006           6      0.0001        1.045            0.393   0.421
Error              0.9128           9576   0.0001
Total              9405.5881        9600
Corrected total    0.9745           9599

(a) Computed using alpha = .05.
(b) R-squared = .063 (adjusted R-squared = .061).
Finally, the examination of the interaction
between D and S shows that for problems with
uniform distribution the QOS does not deteriorate
when S increases from 64 zones to 256 zones, but
does deteriorate for 1024 zone problems; while for
non-uniform distribution, the QOS does deterio-
rate when S increases from 64 zone to 256 zone
problems.
The post hoc Tamhane’s tests in Table 7 shows
that for all problem configurations, EA tends to
provide slightly better solution quality than the
other three algorithms. The slightly better perfor-
mance of EA might be explained by the fact that
EA performs, on average, more than three times
as many evaluation functions as the other three
algorithms, as illustrated in Table 3. Also,
although statistically significant, these differences
are (a) practically negligible, and (b) are achieved
at great costs in terms of time (Table 5).
Also Table 7 shows that HC performs better
than SA and TS. However, this result can be
explained when taking into account the significant
interaction between variables HS and S. It is only
in 64-zone problems that HC solutions are better than
TS and SA solutions. However, this affects the
entire average as a whole and post hoc analysis
shows that HC performs better than SA and TS.
Removing 64 zone size problems shows that HC
does not perform significantly better than SA or
TS. This result for small sized problems might be
explained by considering the exploratory pressure
used by HC. The random-restart procedure works
very well in small problems when the algorithm
randomly jumps around the solution space trolling
for the best solution. In larger problems, however,
jumping around is not as effective because the
solution space is bigger, which negates any advan-
tage random jumping has over the more systematic
exploratory pressures of SA and TS. In general,
there is no statistically significant difference
between the quality of the solutions for SA and TS.

Table 7
Post hoc analysis (Tamhane's test) for factor heuristic search method and dependent variable quality of solution

Heuristic search   Heuristic search   Mean difference   Std. error   Sig.    95% CI lower   95% CI upper
method (I)         method (J)         (I - J)                                bound          bound
HC                 TS                 0.001743*         0.0003095    0.000   0.000929       0.002558
HC                 SA                 0.001769*         0.0003028    0.000   0.000972       0.002566
HC                 EA                 -0.003035*        0.0002463    0.000   -0.003684      -0.002387
TS                 HC                 -0.001743*        0.0003095    0.000   -0.002558      -0.000929
TS                 SA                 0.000026          0.0003197    1.000   -0.000816      0.000867
TS                 EA                 -0.004779*        0.0002668    0.000   -0.005481      -0.004076
SA                 HC                 -0.001769*        0.0003028    0.000   -0.002566      -0.000972
SA                 TS                 -0.000026         0.0003197    1.000   -0.000867      0.000816
SA                 EA                 -0.004805*        0.0002590    0.000   -0.005486      -0.004123
EA                 HC                 0.003035*         0.0002463    0.000   0.002387       0.003684
EA                 TS                 0.004779*         0.0002668    0.000   0.004076       0.005481
EA                 SA                 0.004805*         0.0002590    0.000   0.004123       0.005486

* The mean difference is significant at the .05 level.

Table 8
Post hoc analysis (Tamhane's test) for factor size and dependent variable quality of solution

Size (I)    Size (J)     Mean difference (I - J)   Std. error   Sig.    95% CI lower   95% CI upper
64 Zone     256 Zone     -.000005                  .0002313     1.000   -.000557       .000547
64 Zone     1024 Zone    .002711*                  .0002596     .000    .002092        .003331
256 Zone    64 Zone      .000005                   .0002313     1.000   -.000547       .000557
256 Zone    1024 Zone    .002717*                  .0002578     .000    .002101        .003332
1024 Zone   64 Zone      -.002711*                 .0002596     .000    -.003331       -.002092
1024 Zone   256 Zone     -.002717*                 .0002578     .000    -.003332       -.002101

* The mean difference is significant at the .05 level.
Table 8 breaks down the effect of variable S on
QOS. As can be seen from the Tamhane’s test in
Table 8 there is no significant difference between
the QOS value when the problem size increases
from 64 to 256 zones. However, with 1024 zone
problems the QOS deteriorates by a very small
(0.0027) but statistically significant amount
(alpha < 0.001). Tables 9 and 10 show the
ANOVA and Tables 11 and 12 show the post
hoc Tamhane’s analysis done with respect to TBS.
Table 9 shows the ANOVA results for the dif-
ferent meta-heuristic search methods without
CPLEX as a category. As can be seen all three
independent variables S, D and HS (alpha < 0.01)
significantly affect time to best solution. Also, all
two way interactions, S and D, S and HS, and D
and HS (alpha < 0.01), are significant. The three
way interaction between S, D and HS is not signif-
icant (alpha = 0.080 and power = 0.707). Investi-
gation into the interaction between S and D
shows that problems with 64 zones and uniform
demand distributions were solved faster than
problems with 64 zones and non-uniform demand.
However, for 256 zones and 1024 zones the TBS
values were not statistically different for uniform
and non-uniform demand distributions. Similarly,
the interaction between S and HS reveals that even
though EA was slower than the other three meta-
heuristics in all three problem sizes (64, 256 and
1024), it is at 1024 zone problems where this differ-
ence becomes dramatic. Finally, the interaction
between D and HS reveals that while the TBS val-
ues for TS and SA are different for uniform and
non-uniform demand distributions, the differences
are more noticeable for EA.
To compare CPLEX with the meta-heuristics,
CPLEX times were included as another category
in the variable HS; ANOVA results change (Table
10). Variable D is no longer significant
(alpha = 0.146) and all two way interactions
involving variable D become insignificant (S and
D alpha = 0.078, and D and HS alpha = 0.052).
This can be explained by the fact that the TBS val-
ues for CPLEX are not affected significantly by the
demand distributions while the meta-heuristic’s
TBS values are. The interaction between S and
HS can be explained from the data in Table 5.
The real difference in time between CPLEX and
EA on the one hand, and HC, TS, and SA on the
other occurs in 1024 size problems. At smaller
problem sizes the difference is much less. Also,
average and standard deviation of TBS values
for CPLEX deteriorate much quicker as problem
sizes grow larger.

Table 9
ANOVA results: dependent variable time to best solution (HS not including CPLEX solutions)

Source            Sum of squares   df     Mean square   F          Sig.    Observed power (a)
Corrected model   1.5E+13 (b)      23     6.6E+11       2749.84    0.000   1.000
Intercept         5.3E+12          1      5.3E+12       22432      0.000   1.000
S                 8E+12            2      4E+12         16747      0.000   1.000
D                 3.2E+09          1      3.2E+09       13.29      0.000   0.954
HS                2.9E+12          3      9.8E+11       4105.67    0.000   1.000
S * D             5E+09            2      2.5E+09       10.53      0.000   0.989
S * HS            4.1E+12          6      6.9E+11       2893.74    0.000   1.000
D * HS            6.5E+09          3      2.2E+09       9.09       0.000   0.996
S * D * HS        2.7E+09          6      4.5E+08       1.88       0.080   0.707
Error             2.3E+12          9576   2.4E+08
Total             2.3E+13          9600
Corrected total   1.7E+13          9599

(a) Computed using alpha = .05.
(b) R-squared = .869 (adjusted R-squared = .868).

Table 10
ANOVA results: dependent variable time to best solution (HS includes CPLEX solutions)

Source            Sum of squares   df       Mean square   F           Sig.   Observed power (a)
Corrected model   2.70E+07 (b)     29       9.30E+05      1803.87     .000   1.000
Intercept         9.50E+06         1        9.50E+06      18427.11    .000   1.000
S                 1.61E+07         2        8.03E+06      15571.53    .000   1.000
D                 1.09E+03         1        1.09E+03      2.12        .146   .307
HS (c)            4.04E+06         4        1.01E+06      1956.92     .000   1.000
S * D             2.63E+03         2        1.32E+03      2.55        .078   .512
S * HS (c)        6.86E+06         8        8.57E+05      1662.47     .000   1.000
D * HS (c)        4.83E+03         4        1.21E+03      2.34        .052   .684
S * D * HS (c)    1.30E+04         8        1.62E+03      3.15        .001   .970
Error             6.17E+06         11,970   5.16E+02
Total             4.26E+07         12,000
Corrected total   3.31E+07         11,999

(a) Computed using alpha = .05.
(b) R-squared = .814 (adjusted R-squared = .813).
(c) HS includes CPLEX as the fifth category.

Table 11
Post hoc analysis (Tamhane's test) for factor heuristic search method and dependent variable time to best solution

Heuristic type (I)   Heuristic type (J)   Mean difference (I - J)   Std. error   Sig.    95% CI lower   95% CI upper
HC                   TS                   .36433                    .544182      .999    -1.15988       1.88854
HC                   SA                   .09662                    .547622      1.000   -1.43722       1.63046
HC                   EA                   -39.31011*                1.503689     .000    -43.52328      -35.09694
HC                   CPLEX                -35.04816*                1.626450     .000    -39.60536      -30.49096
TS                   HC                   -.36433                   .544182      .999    -1.88854       1.15988
TS                   SA                   -.26771                   .545975      1.000   -1.79694       1.26152
TS                   EA                   -39.67444*                1.503090     .000    -43.88593      -35.46294
TS                   CPLEX                -35.41248*                1.625896     .000    -39.96814      -30.85683
SA                   HC                   -.09662                   .547622      1.000   -1.63046       1.43722
SA                   TS                   .26771                    .545975      1.000   -1.26152       1.79694
SA                   EA                   -39.40673*                1.504339     .000    -43.62171      -35.19175
SA                   CPLEX                -35.14478*                1.627050     .000    -39.70366      -30.58590
EA                   HC                   39.31011*                 1.503689     .000    35.09694       43.52328
EA                   TS                   39.67444*                 1.503090     .000    35.46294       43.88593
EA                   SA                   39.40673*                 1.504339     .000    35.19175       43.62171
EA                   CPLEX                4.26195                   2.146738     .383    -1.75090       10.27480
CPLEX                HC                   35.04816*                 1.626450     .000    30.49096       39.60536
CPLEX                TS                   35.41248*                 1.625896     .000    30.85683       39.96814
CPLEX                SA                   35.14478*                 1.627050     .000    30.58590       39.70366
CPLEX                EA                   -4.26195                  2.146738     .383    -10.27480      1.75090

* The mean difference is significant at the .05 level.

The post hoc Tamhane's analysis in Table 11
shows that there is a significant difference between
the population based meta-heuristic, EA, and
the other three single chromosome based meta-heuris-
tics. As expected, EA is much slower than the
other three meta-heuristics HC, TS, and SA. Even
though TS is the fastest meta-heuristic, it is not sig-
nificantly faster than the other two single chromo-
some meta-heuristics (SA and HC). The single
chromosome meta-heuristics outperform CPLEX
significantly, whereas the difference between EA
and CPLEX is not statistically significant. Finally,
Table 12 shows that as the problem size increases,
TBS increases significantly (alpha < 0.001).
The hypothesis (H2) that the four heuristic search
methods will solve problems with different values of
TBS and QOS can be partially accepted. EA is def-
initely statistically different from the other three
meta-heuristics both in TBS (worse) and QOS
(better). The other three meta-heuristics are statis-
tically different only under certain conditions (e.g.,
problem size).
The hypothesis (H1) that problems with uniformly
distributed demand will be solved with a better
QOS and with smaller TBS values than prob-
lems with non-uniform demand distribution also
can be accepted. The demand distribution does
make a difference in the performance of the
meta-heuristic methods, but does not affect
CPLEX.
The process of developing these different imple-
mentations of meta-heuristics gave the researchers
some insights into the problem domain and certain
operator settings. It appears that neighborhood
size and the quality of initial (starting) solution
(vector) have a significant effect on the perfor-
mance of the algorithms.
The effectiveness of the low-order mutation
operator designed for the four meta-heuristics
depends heavily on the size of the neighborhood.
For this research the neighborhood size was deter-
mined as those zones immediately adjacent to a
given zone, as illustrated in Fig. 3. In preliminary
tests, neighborhood size was set equal to n, the
number of zones in the problem grid, and an entire
set of tests for the meta-heuristics and for all prob-
lem sizes was run. The quality of solutions
improved noticeably for all the meta-heuristics
with the average quality of solution very close to
100% of optimum, and standard deviation reduced
to less than 0.2% in the worst case. However, the
time needed to run the algorithms became prohib-
itive, especially for 1024-zone problems. Future
research might attempt to estimate the optimum
neighborhood size for this problem domain.
While running our initial experiments we
noticed that the quality of an initial vector makes
a tremendous difference in the final quality of solu-
tions for TS and SA. It is also important for HC,
and the impact is felt every time HC does a ran-
dom restart. The average fitness value of a ran-
domly generated vector is approximately 80% of
the optimum solution. It is proposed that an effec-
tive way to promote good results for TS, SA, and
HC might be to generate, say, 50 vectors ran-
domly, and then take the best vector as the starting
vector, thereby increasing the average value of the
initial vector to approximately 90%. This method
could also be used by HC as a modified random-
restart procedure.
Table 12
Post hoc analysis (Tamhane's test) for factor size and dependent variable time to best solution

Size (I)    Size (J)     Mean difference (I - J)   Std. error   Sig.   95% CI lower   95% CI upper
64 Zone     256 Zone     -3.21646*                 .066322      .000   -3.37487       -3.05804
64 Zone     1024 Zone    -79.15633*                1.031603     .000   -81.62056      -76.69210
256 Zone    64 Zone      3.21646*                  .066322      .000   3.05804        3.37487
256 Zone    1024 Zone    -75.93987*                1.033314     .000   -78.40818      -73.47156
1024 Zone   64 Zone      79.15633*                 1.031603     .000   76.69210       81.62056
1024 Zone   256 Zone     75.93987*                 1.033314     .000   73.47156       78.40818

* The mean difference is significant at the .05 level.

Overall, TS and SA on average find their best
solutions in the least amount of time and with rel-
atively small variability as indicated by the stan-
dard deviation for the computing times (Table
5). With the exception of EA, the meta-heuristics
significantly outperform CPLEX as the problem
size gets larger both in average times and standard
deviations. Indeed, relatively large standard devia-
tions in CPLEX times are indicative of the fact
that these are NP-hard problems. For any given
problem instance it is possible for ILP solvers like
CPLEX to take an extraordinary amount of time
[1], therefore, ILP solvers are not viable alterna-
tives for time critical applications. This is quite
important for particularly large-size problems
(1024 zones) where dynamic redeployment prac-
tices will demand fast and accurate solutions.
Under the most realistic of all scenarios with
1024 zones, non-uniformly distributed demand,
TS performs the best with an average time of
35.75 seconds and a standard deviation of 13.60,
which implies that a limit of 1 minute delay to find
the best redeployment strategy can be met (solved)
accurately with a probability of 96.25%.
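The 96.25% figure is consistent with treating TBS as approximately normally distributed; under that assumption,
$$P(\mathrm{TBS} \le 60) \approx \Phi\left(\frac{60 - 35.75}{13.60}\right) = \Phi(1.78) \approx 0.9625,$$
where $\Phi$ denotes the standard normal cumulative distribution function.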
6. Conclusions
The contribution of this paper is threefold. First, to our knowledge this is the first time that multiple meta-heuristics have been systematically developed and applied to the maximum expected covering location problem (MEXCLP), which allowed us to evaluate their performance objectively on a common problem. A concerted effort was made to code each meta-heuristic in a comparable manner and to use comparable procedures so that the results would be meaningful and generalizable. Specifically, the meta-heuristics share a common data representation, a common evaluation function, and a common mutation operator, and the initial vector(s) of all meta-heuristics were randomly generated. Second, we confirm and strengthen the earlier finding reported in [1] that an EA can produce high-quality solutions; however, non-EA approaches such as TS can solve large-scale instances of MEXCLP with a high degree of confidence in solution quality (QOS), in less time. Finally, this research is unique in that it is an experimentally designed comparative study. The results of the four meta-heuristics are statistically compared: the mean and standard deviation are reported for each heuristic across six problem configurations (2 distribution types × 3 problem sizes), and the significance of the ANOVA tests for the meta-heuristic factor and its interactions with the other factors is also reported. This allows a clear, unbiased assessment of the performance of each meta-heuristic for this problem class.
The results suggest that any one of the four meta-heuristics could be used in this problem domain, although each has its pros and cons. Successful implementation of EA, SA, and TS requires that their parameters be tuned properly. For HC, the only parameter to tune is the neighborhood size, which makes HC very simple to use. Overall, TS and SA give very good results with minimal computational effort (time), particularly for large problems.
As our implementation of the four meta-heuristics shows, common operators can be developed and reused across applications. In addition, as we demonstrated with SA, parameters need not be static: they can evolve dynamically as an algorithm runs, which eliminates the need for extensive preliminary tests to identify optimal parameter values. Searching for optimal static parameter values is problematic in any case, since there is no guarantee that such values remain appropriate across different implementations of a problem or across problem sizes. We therefore support adapting parameter values dynamically, based on the state of the algorithm, through feedback loops; a minimal sketch follows.
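As one illustration only (not the feedback rule used in this study), an SA temperature could be adjusted from the recent acceptance rate; the target, gain, and floor values below are assumptions:

def adapt_temperature(temperature, acceptance_rate,
                      target=0.4, gain=0.1, floor=1e-6):
    # Feedback rule: if fewer moves than the target fraction were accepted in
    # the last batch, warm the temperature up; if more were accepted, cool down.
    # target, gain, and floor are illustrative values, not tuned constants.
    temperature *= 1.0 + gain * (target - acceptance_rate)
    return max(temperature, floor)

Calling such a rule after each batch of iterations keeps the acceptance behavior roughly stable without a separate tuning study for the cooling schedule.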
Another variation that might warrant investigation is the use of the crossover operator within HC. The idea is to apply crossover to the best chromosomes recorded in different runs to create new search areas, in lieu of the random-restart operator. It is also possible to combine the strengths of these meta-heuristics by having them cooperate with one another. One idea, sketched below, is to run EA for a short time to obtain a population of good solutions and then randomly select chromosomes from that population as the initial chromosomes for HC, SA, or TS.
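A minimal sketch of this cooperation scheme, in which run_ea_briefly and local_search stand in for the actual EA and the chosen meta-heuristic (both names are placeholders, not implementations from this study):

import random

def seeded_search(run_ea_briefly, local_search, seeds_needed=3):
    # A short EA run supplies a population of good, fitness-prefixed vectors;
    # randomly chosen members of that population become the starting
    # chromosomes for HC, SA, or TS (here abstracted as `local_search`).
    population = run_ea_briefly()
    seeds = random.sample(population, min(seeds_needed, len(population)))
    results = [local_search(seed) for seed in seeds]
    return max(results, key=lambda vec: vec[0])   # best fitness value wins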
Finally, this experiment focused on the exploratory pressures unique to each meta-heuristic while keeping the exploitative pressure constant. Future research might explore the impact of different exploitative pressures by employing local search methods such as vertex substitution [15] and comparing those results with the greedy neighborhood search (i.e., the low-order mutation operator) used here.
References
[1] H. Aytug, C. Saydam, Solving large-scale maximum
expected covering location problems by genetic algorithms:
A comparative study, European Journal of Operational
Research 141 (2002) 480–494.
[2] H. Aytug, M. Khouja, F.E. Vergara, A review of the use of
genetic algorithms to solve production and operations
management problems, International Journal of Produc-
tion Research 41 (17) (2003) 3955–4009.
[3] M. Ball, L. Lin, A reliability model applied to emergency
service vehicle location, Operations Research 41 (1993) 18–
36.
[4] R.S. Barr, B.L. Golden, J.P. Kelly, M.G.C. Resende, W.R.
Stewart, Designing and reporting on computational exper-
iments with heuristic methods, Journal of Heuristics 1 (1)
(1995) 9–32.
[5] J. Beasley, Lagrangean heuristics for location problems,
European Journal of Operational Research 65 (1993) 383–
399.
[6] J.E. Beasley, P.C. Chu, A genetic algorithm for the set
covering problem, European Journal of Operational
Research 94 (1996) 392–404.
[7] S. Benati, G. Laporte, Tabu search algorithms for the
(r|Xp)-medianoid and (r|p)-centroid problems, Location
Science 2 (1994) 193–204.
[8] L. Brotcorne, G. Laporte, F. Semet, Fast heuristics for
large scale covering location problems, Computers and
Operations Research 29 (2002) 651–665.
[9] L. Brotcorne, G. Laporte, F. Semet, Ambulance location
and relocation models, European Journal of Operational
Research 147 (2003) 451–463.
[10] F.Y. Chiyoshi, R.D. Galvao, A statistical analysis of
simulated annealing applied to the p-median problem,
Annals of Operations Research 96 (2000) 61–74.
[11] R. Church, C. ReVelle, The maximal covering location
problem, Papers of Regional Science Association 32 (1974)
101–118.
[12] CPLEX, Using the CPLEX Callable Library, CPLEX
Optimization, Inc., Incline Village, NV, 1995.
[13] M.S. Daskin, Application of an expected covering model
to emergency medical service system design, Decision
Sciences 13 (1982) 416–439.
[14] M.S. Daskin, A maximal expected covering location
model: Formulation, properties, and heuristic solution,
Transportation Science 17 (1983) 48–69.
[15] M.S. Daskin, Network and Discrete Location, John Wiley
& Sons Inc., New York, 1995.
[16] R.D. Galvao, F.Y. Chiyoshi, R. Morabito, Towards
unified formulations and extensions of two classical
probabilistic location models, Computers and Operations
Research 32 (1) (2005) 15–33.
[17] M. Gendreau, G. Laporte, F. Semet, Solving an ambulance
location model by tabu search, Location Science 5 (2)
(1997) 75–88.
[18] M. Gendreau, G. Laporte, F. Semet, A dynamic model
and parallel tabu search heuristic for real time ambu-
lance relocation, Parallel Computing 27 (2001)
1641–1653.
[19] F. Glover, Future paths for integer programming and links
to artificial intelligence, Computers and Operations
Research 13 (1986) 533–549.
[20] F. Glover, M. Laguna, Tabu Search, Kluwer, Boston, MA,
1997.
[21] J.B. Goldberg, Operations research models for the deploy-
ment of emergency services vehicles, EMS Management
Journal 1 (1) (2004) 20–39.
[22] H.J. Holland, Adaptation in Natural and Artificial
Systems, University of Michigan Press, Ann Arbor, 1975.
[23] J. Jaramillo, J. Bhadury, R. Batta, On the use of genetic
algorithms to solve location problems, Computers and
Operations Research 29 (2002) 761–779.
[24] S. Kirkpatrick, C.D. Gelatt Jr., M.P. Vecchi, Optimization
by simulated annealing, Science 220 (1983) 671–680.
[25] R.C. Larson, A hypercube queuing model for facility
location and redistricting in urban emergency services,
Computers and Operations Research 1 (1974) 67–95.
[26] Z. Michalewicz, Genetic Algorithms + Data Struc-
tures = Evolution Programs, third ed., Springer, New
York, 1999.
[27] Z. Michalewicz, D.B. Fogel, How to Solve it: Modern
Heuristics, Springer-Verlag, 2000.
[28] M.J. Norusis, SPSS 10.0 Guide to Data Analysis, Prentice-
Hall, Inc., Upper Saddle River, New Jersey, 2000.
[29] I.H. Osman, G. Laporte, Metaheuristics: A bibliography,
Annals of Operations Research 63 (1996) 513–628.
[30] S.H. Owen, M.S. Daskin, Strategic facility location: A
review, European Journal of Operational Research 111
(1998) 423–447.
[31] C. Revelle, K. Hogan, The maximum availability loca-
tion problem, Transportation Science 23 (1989) 192–
200.
[32] S. Russell, P. Norvig, Artificial Intelligence: A Modern
Approach, Prentice-Hall, New Jersey, 1995.
[33] C. Saydam, J. Repede, T. Burwell, Accurate Estimation of
Expected Coverage: A Comparative Study, Socio-Eco-
nomic Planning Sciences 28 (2) (1994) 113–120.
[34] C. Saydam, H. Aytug, Accurate estimation of expected
coverage: Revisited, Socio-Economic Planning Sciences 37
(2003) 69–80.
[35] D. Schilling, D. Elzinga, J. Cohon, R. Church, C. ReVelle,
The TEAM/FLEET models for simultaneous facility and
equipment siting, Transportation Science 13 (1979) 163–175.
[36] D.A. Schilling, V. Jayaraman, R. Barkhi, A review of
covering problems in facility location, Location Science 1
(1) (1993) 25–55.
[37] C. Toregas, R. Swain, C. ReVelle, L. Bergman, The
location of emergency service facilities, Operations
Research 19 (1971) 1363–1373.
[38] F.E. Vergara, M. Khouja, Z. Michalewicz, An evolution-
ary algorithm for optimizing material flow in supply
chains, Computers and Industrial Engineering 43 (2002)
407–421.
H.K. Rajagopalan et al. / European Journal of Operational Research 177 (2007) 83–101 101

Mais conteúdo relacionado

Mais procurados

Extended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithmExtended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithmIJMIT JOURNAL
 
ESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACH
ESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACHESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACH
ESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACHcscpconf
 
Estimating project development effort using clustered regression approach
Estimating project development effort using clustered regression approachEstimating project development effort using clustered regression approach
Estimating project development effort using clustered regression approachcsandit
 
84cc04ff77007e457df6aa2b814d2346bf1b
84cc04ff77007e457df6aa2b814d2346bf1b84cc04ff77007e457df6aa2b814d2346bf1b
84cc04ff77007e457df6aa2b814d2346bf1bPRAWEEN KUMAR
 
TOPIC EXTRACTION OF CRAWLED DOCUMENTS COLLECTION USING CORRELATED TOPIC MODEL...
TOPIC EXTRACTION OF CRAWLED DOCUMENTS COLLECTION USING CORRELATED TOPIC MODEL...TOPIC EXTRACTION OF CRAWLED DOCUMENTS COLLECTION USING CORRELATED TOPIC MODEL...
TOPIC EXTRACTION OF CRAWLED DOCUMENTS COLLECTION USING CORRELATED TOPIC MODEL...ijnlc
 
Automatic Unsupervised Data Classification Using Jaya Evolutionary Algorithm
Automatic Unsupervised Data Classification Using Jaya Evolutionary AlgorithmAutomatic Unsupervised Data Classification Using Jaya Evolutionary Algorithm
Automatic Unsupervised Data Classification Using Jaya Evolutionary Algorithmaciijournal
 
2018 algorithms for the minmax regret path problem with interval data
2018   algorithms for the minmax regret path problem with interval data2018   algorithms for the minmax regret path problem with interval data
2018 algorithms for the minmax regret path problem with interval dataFrancisco Pérez
 
Experimental study of Data clustering using k- Means and modified algorithms
Experimental study of Data clustering using k- Means and modified algorithmsExperimental study of Data clustering using k- Means and modified algorithms
Experimental study of Data clustering using k- Means and modified algorithmsIJDKP
 
Automatic Unsupervised Data Classification Using Jaya Evolutionary Algorithm
Automatic Unsupervised Data Classification Using Jaya Evolutionary AlgorithmAutomatic Unsupervised Data Classification Using Jaya Evolutionary Algorithm
Automatic Unsupervised Data Classification Using Jaya Evolutionary Algorithmaciijournal
 
A HYBRID CLUSTERING ALGORITHM FOR DATA MINING
A HYBRID CLUSTERING ALGORITHM FOR DATA MININGA HYBRID CLUSTERING ALGORITHM FOR DATA MINING
A HYBRID CLUSTERING ALGORITHM FOR DATA MININGcscpconf
 
Optimising Data Using K-Means Clustering Algorithm
Optimising Data Using K-Means Clustering AlgorithmOptimising Data Using K-Means Clustering Algorithm
Optimising Data Using K-Means Clustering AlgorithmIJERA Editor
 
Dynamic approach to k means clustering algorithm-2
Dynamic approach to k means clustering algorithm-2Dynamic approach to k means clustering algorithm-2
Dynamic approach to k means clustering algorithm-2IAEME Publication
 
Textual Data Partitioning with Relationship and Discriminative Analysis
Textual Data Partitioning with Relationship and Discriminative AnalysisTextual Data Partitioning with Relationship and Discriminative Analysis
Textual Data Partitioning with Relationship and Discriminative AnalysisEditor IJMTER
 
Grasp approach to rcpsp with min max robustness objective
Grasp approach to rcpsp with min max robustness objectiveGrasp approach to rcpsp with min max robustness objective
Grasp approach to rcpsp with min max robustness objectivecsandit
 
Urban strategies to promote resilient cities The case of enhancing Historic C...
Urban strategies to promote resilient cities The case of enhancing Historic C...Urban strategies to promote resilient cities The case of enhancing Historic C...
Urban strategies to promote resilient cities The case of enhancing Historic C...inventionjournals
 
Updated paper in latex(1)
Updated paper in latex(1)Updated paper in latex(1)
Updated paper in latex(1)Richa Shukla
 
Influence over the Dimensionality Reduction and Clustering for Air Quality Me...
Influence over the Dimensionality Reduction and Clustering for Air Quality Me...Influence over the Dimensionality Reduction and Clustering for Air Quality Me...
Influence over the Dimensionality Reduction and Clustering for Air Quality Me...IJAEMSJORNAL
 

Mais procurados (20)

Extended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithmExtended pso algorithm for improvement problems k means clustering algorithm
Extended pso algorithm for improvement problems k means clustering algorithm
 
ESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACH
ESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACHESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACH
ESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACH
 
Estimating project development effort using clustered regression approach
Estimating project development effort using clustered regression approachEstimating project development effort using clustered regression approach
Estimating project development effort using clustered regression approach
 
84cc04ff77007e457df6aa2b814d2346bf1b
84cc04ff77007e457df6aa2b814d2346bf1b84cc04ff77007e457df6aa2b814d2346bf1b
84cc04ff77007e457df6aa2b814d2346bf1b
 
TOPIC EXTRACTION OF CRAWLED DOCUMENTS COLLECTION USING CORRELATED TOPIC MODEL...
TOPIC EXTRACTION OF CRAWLED DOCUMENTS COLLECTION USING CORRELATED TOPIC MODEL...TOPIC EXTRACTION OF CRAWLED DOCUMENTS COLLECTION USING CORRELATED TOPIC MODEL...
TOPIC EXTRACTION OF CRAWLED DOCUMENTS COLLECTION USING CORRELATED TOPIC MODEL...
 
Automatic Unsupervised Data Classification Using Jaya Evolutionary Algorithm
Automatic Unsupervised Data Classification Using Jaya Evolutionary AlgorithmAutomatic Unsupervised Data Classification Using Jaya Evolutionary Algorithm
Automatic Unsupervised Data Classification Using Jaya Evolutionary Algorithm
 
2018 algorithms for the minmax regret path problem with interval data
2018   algorithms for the minmax regret path problem with interval data2018   algorithms for the minmax regret path problem with interval data
2018 algorithms for the minmax regret path problem with interval data
 
Experimental study of Data clustering using k- Means and modified algorithms
Experimental study of Data clustering using k- Means and modified algorithmsExperimental study of Data clustering using k- Means and modified algorithms
Experimental study of Data clustering using k- Means and modified algorithms
 
Automatic Unsupervised Data Classification Using Jaya Evolutionary Algorithm
Automatic Unsupervised Data Classification Using Jaya Evolutionary AlgorithmAutomatic Unsupervised Data Classification Using Jaya Evolutionary Algorithm
Automatic Unsupervised Data Classification Using Jaya Evolutionary Algorithm
 
A HYBRID CLUSTERING ALGORITHM FOR DATA MINING
A HYBRID CLUSTERING ALGORITHM FOR DATA MININGA HYBRID CLUSTERING ALGORITHM FOR DATA MINING
A HYBRID CLUSTERING ALGORITHM FOR DATA MINING
 
Optimising Data Using K-Means Clustering Algorithm
Optimising Data Using K-Means Clustering AlgorithmOptimising Data Using K-Means Clustering Algorithm
Optimising Data Using K-Means Clustering Algorithm
 
Dynamic approach to k means clustering algorithm-2
Dynamic approach to k means clustering algorithm-2Dynamic approach to k means clustering algorithm-2
Dynamic approach to k means clustering algorithm-2
 
Textual Data Partitioning with Relationship and Discriminative Analysis
Textual Data Partitioning with Relationship and Discriminative AnalysisTextual Data Partitioning with Relationship and Discriminative Analysis
Textual Data Partitioning with Relationship and Discriminative Analysis
 
Av33274282
Av33274282Av33274282
Av33274282
 
Master's Thesis Presentation
Master's Thesis PresentationMaster's Thesis Presentation
Master's Thesis Presentation
 
2009 spie hmm
2009 spie hmm2009 spie hmm
2009 spie hmm
 
Grasp approach to rcpsp with min max robustness objective
Grasp approach to rcpsp with min max robustness objectiveGrasp approach to rcpsp with min max robustness objective
Grasp approach to rcpsp with min max robustness objective
 
Urban strategies to promote resilient cities The case of enhancing Historic C...
Urban strategies to promote resilient cities The case of enhancing Historic C...Urban strategies to promote resilient cities The case of enhancing Historic C...
Urban strategies to promote resilient cities The case of enhancing Historic C...
 
Updated paper in latex(1)
Updated paper in latex(1)Updated paper in latex(1)
Updated paper in latex(1)
 
Influence over the Dimensionality Reduction and Clustering for Air Quality Me...
Influence over the Dimensionality Reduction and Clustering for Air Quality Me...Influence over the Dimensionality Reduction and Clustering for Air Quality Me...
Influence over the Dimensionality Reduction and Clustering for Air Quality Me...
 

Semelhante a Developing effective meta heuristics for a probabilistic

A multiperiod set covering location model for dynamic redeployment of ambulances
A multiperiod set covering location model for dynamic redeployment of ambulancesA multiperiod set covering location model for dynamic redeployment of ambulances
A multiperiod set covering location model for dynamic redeployment of ambulancesHari Rajagopalan
 
Comparison of Emergency Medical Services Delivery Performance using Maximal C...
Comparison of Emergency Medical Services Delivery Performance using Maximal C...Comparison of Emergency Medical Services Delivery Performance using Maximal C...
Comparison of Emergency Medical Services Delivery Performance using Maximal C...IJECEIAES
 
Ambulance deployment and shift scheduling an integrated approach
Ambulance deployment and shift scheduling an integrated approachAmbulance deployment and shift scheduling an integrated approach
Ambulance deployment and shift scheduling an integrated approachHari Rajagopalan
 
A_multi-objective_set_covering_problem_A_case_stud.pdf
A_multi-objective_set_covering_problem_A_case_stud.pdfA_multi-objective_set_covering_problem_A_case_stud.pdf
A_multi-objective_set_covering_problem_A_case_stud.pdfappaji nayak
 
facility layout paper
 facility layout paper facility layout paper
facility layout paperSaurabh Tiwary
 
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...IJDKP
 
A stochastic algorithm for solving the posterior inference problem in topic m...
A stochastic algorithm for solving the posterior inference problem in topic m...A stochastic algorithm for solving the posterior inference problem in topic m...
A stochastic algorithm for solving the posterior inference problem in topic m...TELKOMNIKA JOURNAL
 
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ...
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ...MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ...
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ...ijcseit
 
GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspec...
GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspec...GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspec...
GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspec...Konstantinos Demertzis
 
An Application of Genetic Algorithm for Non-restricted Space and Pre-determin...
An Application of Genetic Algorithm for Non-restricted Space and Pre-determin...An Application of Genetic Algorithm for Non-restricted Space and Pre-determin...
An Application of Genetic Algorithm for Non-restricted Space and Pre-determin...drboon
 
An Exact Branch And Bound Algorithm For The General Quadratic Assignment Problem
An Exact Branch And Bound Algorithm For The General Quadratic Assignment ProblemAn Exact Branch And Bound Algorithm For The General Quadratic Assignment Problem
An Exact Branch And Bound Algorithm For The General Quadratic Assignment ProblemJoe Andelija
 
Production_planning_of_a_furniture_manufacturing_c.pdf
Production_planning_of_a_furniture_manufacturing_c.pdfProduction_planning_of_a_furniture_manufacturing_c.pdf
Production_planning_of_a_furniture_manufacturing_c.pdfMehnazTabassum20
 
TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPING
TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPINGTOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPING
TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPINGijdkp
 
2use of dss in ems to reduce responce time
2use of dss in ems to reduce responce time2use of dss in ems to reduce responce time
2use of dss in ems to reduce responce timeosaleem0123
 
2 a review of different approaches to the facility layout problems
2   a review of different approaches to the facility layout problems2   a review of different approaches to the facility layout problems
2 a review of different approaches to the facility layout problemsQuốc Lê
 
An artificial immune system algorithm for solving the uncapacitated single all...
An artificial immune system algorithm for solving the uncapacitated single all...An artificial immune system algorithm for solving the uncapacitated single all...
An artificial immune system algorithm for solving the uncapacitated single all...IJECEIAES
 
International Journal of Computer Science and Security Volume (4) Issue (2)
International Journal of Computer Science and Security Volume (4) Issue (2)International Journal of Computer Science and Security Volume (4) Issue (2)
International Journal of Computer Science and Security Volume (4) Issue (2)CSCJournals
 

Semelhante a Developing effective meta heuristics for a probabilistic (20)

A multiperiod set covering location model for dynamic redeployment of ambulances
A multiperiod set covering location model for dynamic redeployment of ambulancesA multiperiod set covering location model for dynamic redeployment of ambulances
A multiperiod set covering location model for dynamic redeployment of ambulances
 
Comparison of Emergency Medical Services Delivery Performance using Maximal C...
Comparison of Emergency Medical Services Delivery Performance using Maximal C...Comparison of Emergency Medical Services Delivery Performance using Maximal C...
Comparison of Emergency Medical Services Delivery Performance using Maximal C...
 
Ambulance deployment and shift scheduling an integrated approach
Ambulance deployment and shift scheduling an integrated approachAmbulance deployment and shift scheduling an integrated approach
Ambulance deployment and shift scheduling an integrated approach
 
A_multi-objective_set_covering_problem_A_case_stud.pdf
A_multi-objective_set_covering_problem_A_case_stud.pdfA_multi-objective_set_covering_problem_A_case_stud.pdf
A_multi-objective_set_covering_problem_A_case_stud.pdf
 
facility layout paper
 facility layout paper facility layout paper
facility layout paper
 
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
 
A stochastic algorithm for solving the posterior inference problem in topic m...
A stochastic algorithm for solving the posterior inference problem in topic m...A stochastic algorithm for solving the posterior inference problem in topic m...
A stochastic algorithm for solving the posterior inference problem in topic m...
 
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ...
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ...MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ...
MULTI-OBJECTIVE ENERGY EFFICIENT OPTIMIZATION ALGORITHM FOR COVERAGE CONTROL ...
 
GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspec...
GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspec...GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspec...
GeoAI: A Model-Agnostic Meta-Ensemble Zero-Shot Learning Method for Hyperspec...
 
An Application of Genetic Algorithm for Non-restricted Space and Pre-determin...
An Application of Genetic Algorithm for Non-restricted Space and Pre-determin...An Application of Genetic Algorithm for Non-restricted Space and Pre-determin...
An Application of Genetic Algorithm for Non-restricted Space and Pre-determin...
 
IJCSI-2015-12-2-10138 (1) (2)
IJCSI-2015-12-2-10138 (1) (2)IJCSI-2015-12-2-10138 (1) (2)
IJCSI-2015-12-2-10138 (1) (2)
 
Hmtc1300663
Hmtc1300663Hmtc1300663
Hmtc1300663
 
An Exact Branch And Bound Algorithm For The General Quadratic Assignment Problem
An Exact Branch And Bound Algorithm For The General Quadratic Assignment ProblemAn Exact Branch And Bound Algorithm For The General Quadratic Assignment Problem
An Exact Branch And Bound Algorithm For The General Quadratic Assignment Problem
 
Production_planning_of_a_furniture_manufacturing_c.pdf
Production_planning_of_a_furniture_manufacturing_c.pdfProduction_planning_of_a_furniture_manufacturing_c.pdf
Production_planning_of_a_furniture_manufacturing_c.pdf
 
TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPING
TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPINGTOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPING
TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPING
 
2use of dss in ems to reduce responce time
2use of dss in ems to reduce responce time2use of dss in ems to reduce responce time
2use of dss in ems to reduce responce time
 
2 a review of different approaches to the facility layout problems
2   a review of different approaches to the facility layout problems2   a review of different approaches to the facility layout problems
2 a review of different approaches to the facility layout problems
 
H012225053
H012225053H012225053
H012225053
 
An artificial immune system algorithm for solving the uncapacitated single all...
An artificial immune system algorithm for solving the uncapacitated single all...An artificial immune system algorithm for solving the uncapacitated single all...
An artificial immune system algorithm for solving the uncapacitated single all...
 
International Journal of Computer Science and Security Volume (4) Issue (2)
International Journal of Computer Science and Security Volume (4) Issue (2)International Journal of Computer Science and Security Volume (4) Issue (2)
International Journal of Computer Science and Security Volume (4) Issue (2)
 

Mais de Hari Rajagopalan

Beyond efficiency and productivity
Beyond efficiency and productivityBeyond efficiency and productivity
Beyond efficiency and productivityHari Rajagopalan
 
The value of information systems to small and medium sized enterprisest
The value of information systems to small and medium sized enterprisestThe value of information systems to small and medium sized enterprisest
The value of information systems to small and medium sized enterprisestHari Rajagopalan
 
Can piracy lead to higher prices in the music
Can piracy lead to higher prices in the musicCan piracy lead to higher prices in the music
Can piracy lead to higher prices in the musicHari Rajagopalan
 
Application of complex adaptive systems to pricing of reproducible informatio...
Application of complex adaptive systems to pricing of reproducible informatio...Application of complex adaptive systems to pricing of reproducible informatio...
Application of complex adaptive systems to pricing of reproducible informatio...Hari Rajagopalan
 
Optimal pricing and delayed incentives in a heterogeneous consumer market
Optimal pricing and delayed incentives in a heterogeneous consumer marketOptimal pricing and delayed incentives in a heterogeneous consumer market
Optimal pricing and delayed incentives in a heterogeneous consumer marketHari Rajagopalan
 
Optimal inventory policy under continuous unit cost and risk of sudden obs
Optimal inventory policy under continuous unit cost and risk of sudden obs Optimal inventory policy under continuous unit cost and risk of sudden obs
Optimal inventory policy under continuous unit cost and risk of sudden obs Hari Rajagopalan
 
The dynamic redeployment coverage location model
The dynamic redeployment coverage location modelThe dynamic redeployment coverage location model
The dynamic redeployment coverage location modelHari Rajagopalan
 

Mais de Hari Rajagopalan (7)

Beyond efficiency and productivity
Beyond efficiency and productivityBeyond efficiency and productivity
Beyond efficiency and productivity
 
The value of information systems to small and medium sized enterprisest
The value of information systems to small and medium sized enterprisestThe value of information systems to small and medium sized enterprisest
The value of information systems to small and medium sized enterprisest
 
Can piracy lead to higher prices in the music
Can piracy lead to higher prices in the musicCan piracy lead to higher prices in the music
Can piracy lead to higher prices in the music
 
Application of complex adaptive systems to pricing of reproducible informatio...
Application of complex adaptive systems to pricing of reproducible informatio...Application of complex adaptive systems to pricing of reproducible informatio...
Application of complex adaptive systems to pricing of reproducible informatio...
 
Optimal pricing and delayed incentives in a heterogeneous consumer market
Optimal pricing and delayed incentives in a heterogeneous consumer marketOptimal pricing and delayed incentives in a heterogeneous consumer market
Optimal pricing and delayed incentives in a heterogeneous consumer market
 
Optimal inventory policy under continuous unit cost and risk of sudden obs
Optimal inventory policy under continuous unit cost and risk of sudden obs Optimal inventory policy under continuous unit cost and risk of sudden obs
Optimal inventory policy under continuous unit cost and risk of sudden obs
 
The dynamic redeployment coverage location model
The dynamic redeployment coverage location modelThe dynamic redeployment coverage location model
The dynamic redeployment coverage location model
 

Último

Top Rated Pune Call Girls (DIPAL) ⟟ 8250077686 ⟟ Call Me For Genuine Sex Serv...
Top Rated Pune Call Girls (DIPAL) ⟟ 8250077686 ⟟ Call Me For Genuine Sex Serv...Top Rated Pune Call Girls (DIPAL) ⟟ 8250077686 ⟟ Call Me For Genuine Sex Serv...
Top Rated Pune Call Girls (DIPAL) ⟟ 8250077686 ⟟ Call Me For Genuine Sex Serv...Dipal Arora
 
Call Girls Jaipur Just Call 9521753030 Top Class Call Girl Service Available
Call Girls Jaipur Just Call 9521753030 Top Class Call Girl Service AvailableCall Girls Jaipur Just Call 9521753030 Top Class Call Girl Service Available
Call Girls Jaipur Just Call 9521753030 Top Class Call Girl Service AvailableJanvi Singh
 
Andheri East ^ (Genuine) Escort Service Mumbai ₹7.5k Pick Up & Drop With Cash...
Andheri East ^ (Genuine) Escort Service Mumbai ₹7.5k Pick Up & Drop With Cash...Andheri East ^ (Genuine) Escort Service Mumbai ₹7.5k Pick Up & Drop With Cash...
Andheri East ^ (Genuine) Escort Service Mumbai ₹7.5k Pick Up & Drop With Cash...Anamika Rawat
 
💚Call Girls In Amritsar 💯Anvi 📲🔝8725944379🔝Amritsar Call Girl No💰Advance Cash...
💚Call Girls In Amritsar 💯Anvi 📲🔝8725944379🔝Amritsar Call Girl No💰Advance Cash...💚Call Girls In Amritsar 💯Anvi 📲🔝8725944379🔝Amritsar Call Girl No💰Advance Cash...
💚Call Girls In Amritsar 💯Anvi 📲🔝8725944379🔝Amritsar Call Girl No💰Advance Cash...Sheetaleventcompany
 
Call Girls Ahmedabad Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Ahmedabad Just Call 9630942363 Top Class Call Girl Service AvailableCall Girls Ahmedabad Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Ahmedabad Just Call 9630942363 Top Class Call Girl Service AvailableGENUINE ESCORT AGENCY
 
8980367676 Call Girls In Ahmedabad Escort Service Available 24×7 In Ahmedabad
8980367676 Call Girls In Ahmedabad Escort Service Available 24×7 In Ahmedabad8980367676 Call Girls In Ahmedabad Escort Service Available 24×7 In Ahmedabad
8980367676 Call Girls In Ahmedabad Escort Service Available 24×7 In AhmedabadGENUINE ESCORT AGENCY
 
Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...
Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...
Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...Sheetaleventcompany
 
Call Girls in Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service Avai...
Call Girls in Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service Avai...Call Girls in Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service Avai...
Call Girls in Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service Avai...adilkhan87451
 
Call Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service AvailableCall Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service Availableperfect solution
 
9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls Service
9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls Service9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls Service
9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls ServiceGENUINE ESCORT AGENCY
 
Models Call Girls In Hyderabad 9630942363 Hyderabad Call Girl & Hyderabad Esc...
Models Call Girls In Hyderabad 9630942363 Hyderabad Call Girl & Hyderabad Esc...Models Call Girls In Hyderabad 9630942363 Hyderabad Call Girl & Hyderabad Esc...
Models Call Girls In Hyderabad 9630942363 Hyderabad Call Girl & Hyderabad Esc...GENUINE ESCORT AGENCY
 
Call Girls Madurai Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Madurai Just Call 9630942363 Top Class Call Girl Service AvailableCall Girls Madurai Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Madurai Just Call 9630942363 Top Class Call Girl Service AvailableGENUINE ESCORT AGENCY
 
Call Girls Amritsar Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Amritsar Just Call 8250077686 Top Class Call Girl Service AvailableCall Girls Amritsar Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Amritsar Just Call 8250077686 Top Class Call Girl Service AvailableDipal Arora
 
Independent Call Girls Service Mohali Sector 116 | 6367187148 | Call Girl Ser...
Independent Call Girls Service Mohali Sector 116 | 6367187148 | Call Girl Ser...Independent Call Girls Service Mohali Sector 116 | 6367187148 | Call Girl Ser...
Independent Call Girls Service Mohali Sector 116 | 6367187148 | Call Girl Ser...karishmasinghjnh
 
Coimbatore Call Girls in Thudiyalur : 7427069034 High Profile Model Escorts |...
Coimbatore Call Girls in Thudiyalur : 7427069034 High Profile Model Escorts |...Coimbatore Call Girls in Thudiyalur : 7427069034 High Profile Model Escorts |...
Coimbatore Call Girls in Thudiyalur : 7427069034 High Profile Model Escorts |...chennailover
 
Dehradun Call Girls Service {8854095900} ❤️VVIP ROCKY Call Girl in Dehradun U...
Dehradun Call Girls Service {8854095900} ❤️VVIP ROCKY Call Girl in Dehradun U...Dehradun Call Girls Service {8854095900} ❤️VVIP ROCKY Call Girl in Dehradun U...
Dehradun Call Girls Service {8854095900} ❤️VVIP ROCKY Call Girl in Dehradun U...Sheetaleventcompany
 
Call Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service AvailableCall Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service AvailableDipal Arora
 
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...chetankumar9855
 
Premium Bangalore Call Girls Jigani Dail 6378878445 Escort Service For Hot Ma...
Premium Bangalore Call Girls Jigani Dail 6378878445 Escort Service For Hot Ma...Premium Bangalore Call Girls Jigani Dail 6378878445 Escort Service For Hot Ma...
Premium Bangalore Call Girls Jigani Dail 6378878445 Escort Service For Hot Ma...tanya dube
 

Último (20)

Top Rated Pune Call Girls (DIPAL) ⟟ 8250077686 ⟟ Call Me For Genuine Sex Serv...
Top Rated Pune Call Girls (DIPAL) ⟟ 8250077686 ⟟ Call Me For Genuine Sex Serv...Top Rated Pune Call Girls (DIPAL) ⟟ 8250077686 ⟟ Call Me For Genuine Sex Serv...
Top Rated Pune Call Girls (DIPAL) ⟟ 8250077686 ⟟ Call Me For Genuine Sex Serv...
 
Call Girls in Gagan Vihar (delhi) call me [🔝 9953056974 🔝] escort service 24X7
Call Girls in Gagan Vihar (delhi) call me [🔝  9953056974 🔝] escort service 24X7Call Girls in Gagan Vihar (delhi) call me [🔝  9953056974 🔝] escort service 24X7
Call Girls in Gagan Vihar (delhi) call me [🔝 9953056974 🔝] escort service 24X7
 
Call Girls Jaipur Just Call 9521753030 Top Class Call Girl Service Available
Call Girls Jaipur Just Call 9521753030 Top Class Call Girl Service AvailableCall Girls Jaipur Just Call 9521753030 Top Class Call Girl Service Available
Call Girls Jaipur Just Call 9521753030 Top Class Call Girl Service Available
 
Andheri East ^ (Genuine) Escort Service Mumbai ₹7.5k Pick Up & Drop With Cash...
Andheri East ^ (Genuine) Escort Service Mumbai ₹7.5k Pick Up & Drop With Cash...Andheri East ^ (Genuine) Escort Service Mumbai ₹7.5k Pick Up & Drop With Cash...
Andheri East ^ (Genuine) Escort Service Mumbai ₹7.5k Pick Up & Drop With Cash...
 
💚Call Girls In Amritsar 💯Anvi 📲🔝8725944379🔝Amritsar Call Girl No💰Advance Cash...
💚Call Girls In Amritsar 💯Anvi 📲🔝8725944379🔝Amritsar Call Girl No💰Advance Cash...💚Call Girls In Amritsar 💯Anvi 📲🔝8725944379🔝Amritsar Call Girl No💰Advance Cash...
💚Call Girls In Amritsar 💯Anvi 📲🔝8725944379🔝Amritsar Call Girl No💰Advance Cash...
 
Call Girls Ahmedabad Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Ahmedabad Just Call 9630942363 Top Class Call Girl Service AvailableCall Girls Ahmedabad Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Ahmedabad Just Call 9630942363 Top Class Call Girl Service Available
 
8980367676 Call Girls In Ahmedabad Escort Service Available 24×7 In Ahmedabad
8980367676 Call Girls In Ahmedabad Escort Service Available 24×7 In Ahmedabad8980367676 Call Girls In Ahmedabad Escort Service Available 24×7 In Ahmedabad
8980367676 Call Girls In Ahmedabad Escort Service Available 24×7 In Ahmedabad
 
Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...
Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...
Call Girls Service Jaipur {9521753030} ❤️VVIP RIDDHI Call Girl in Jaipur Raja...
 
Call Girls in Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service Avai...
Call Girls in Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service Avai...Call Girls in Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service Avai...
Call Girls in Lucknow Just Call 👉👉7877925207 Top Class Call Girl Service Avai...
 
Call Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service AvailableCall Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 9667172968 Top Class Call Girl Service Available
 
9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls Service
9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls Service9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls Service
9630942363 Genuine Call Girls In Ahmedabad Gujarat Call Girls Service
 
Models Call Girls In Hyderabad 9630942363 Hyderabad Call Girl & Hyderabad Esc...
Models Call Girls In Hyderabad 9630942363 Hyderabad Call Girl & Hyderabad Esc...Models Call Girls In Hyderabad 9630942363 Hyderabad Call Girl & Hyderabad Esc...
Models Call Girls In Hyderabad 9630942363 Hyderabad Call Girl & Hyderabad Esc...
 
Call Girls Madurai Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Madurai Just Call 9630942363 Top Class Call Girl Service AvailableCall Girls Madurai Just Call 9630942363 Top Class Call Girl Service Available
Call Girls Madurai Just Call 9630942363 Top Class Call Girl Service Available
 
Call Girls Amritsar Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Amritsar Just Call 8250077686 Top Class Call Girl Service AvailableCall Girls Amritsar Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Amritsar Just Call 8250077686 Top Class Call Girl Service Available
 
Independent Call Girls Service Mohali Sector 116 | 6367187148 | Call Girl Ser...
Independent Call Girls Service Mohali Sector 116 | 6367187148 | Call Girl Ser...Independent Call Girls Service Mohali Sector 116 | 6367187148 | Call Girl Ser...
Independent Call Girls Service Mohali Sector 116 | 6367187148 | Call Girl Ser...
 
Coimbatore Call Girls in Thudiyalur : 7427069034 High Profile Model Escorts |...
Coimbatore Call Girls in Thudiyalur : 7427069034 High Profile Model Escorts |...Coimbatore Call Girls in Thudiyalur : 7427069034 High Profile Model Escorts |...
Coimbatore Call Girls in Thudiyalur : 7427069034 High Profile Model Escorts |...
 
Dehradun Call Girls Service {8854095900} ❤️VVIP ROCKY Call Girl in Dehradun U...
Dehradun Call Girls Service {8854095900} ❤️VVIP ROCKY Call Girl in Dehradun U...Dehradun Call Girls Service {8854095900} ❤️VVIP ROCKY Call Girl in Dehradun U...
Dehradun Call Girls Service {8854095900} ❤️VVIP ROCKY Call Girl in Dehradun U...
 
Call Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service AvailableCall Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service Available
Call Girls Rishikesh Just Call 8250077686 Top Class Call Girl Service Available
 
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...
Call Girl In Pune 👉 Just CALL ME: 9352988975 💋 Call Out Call Both With High p...
 
Premium Bangalore Call Girls Jigani Dail 6378878445 Escort Service For Hot Ma...
Premium Bangalore Call Girls Jigani Dail 6378878445 Escort Service For Hot Ma...Premium Bangalore Call Girls Jigani Dail 6378878445 Escort Service For Hot Ma...
Premium Bangalore Call Girls Jigani Dail 6378878445 Escort Service For Hot Ma...
 

Developing effective meta heuristics for a probabilistic

  • 1. Discrete Optimization Developing effective meta-heuristics for a probabilistic location model via experimental design Hari K. Rajagopalan a , F. Elizabeth Vergara a , Cem Saydam a,*, Jing Xiao b a Business Information Systems and Operations Management Department, The Belk College of Business, The University of North Carolina at Charlotte, Charlotte, NC 28223, United States b Computer Science Department, College of Information Technology, The University of North Carolina at Charlotte Charlotte, NC 28223, United States Received 7 January 2005; accepted 9 November 2005 Available online 26 January 2006 Abstract This article employs a statistical experimental design to guide and evaluate the development of four meta-heuristics applied to a probabilistic location model. The meta-heuristics evaluated include evolutionary algorithm, tabu search, simulated annealing, and a hybridized hill-climbing algorithm. Comparative results are analyzed using ANOVA. Our findings show that all four implementations produce high quality solutions. In particular, it was found that on average tabu search and simulated annealing find their best solutions in the least amount of time, with relatively small variability. This is especially important for large-size problems when dynamic redeployment is required. Ó 2005 Elsevier B.V. All rights reserved. Keywords: Meta-heuristics; Experimental design; Location 1. Introduction The goal of emergency medical services (EMS) is to reduce mortality, disability, and suffering in persons [9,21]. EMS administrators and managers often face the difficult task of locating a limited number of ambulances in a way that will yield the best service to a constituent population. A widely used EMS system performance metric is the response time to incidents. Typically, calls originating from a population center are assumed to be covered if they can be reached within a time threshold. This notion of coverage has been widely accepted and is written into the EMS Act of 1973, which requires that in urban areas 95% of requests be reached in 10 minutes, and in rural areas, calls should be reached in 30 minutes or less [3]. 0377-2217/$ - see front matter Ó 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2005.11.007 * Corresponding author. Tel.: +1 704 687 2047; fax: +1 704 687 6330. E-mail addresses: hrajagop@uncc.edu (H.K. Rajagopalan), fvergara@uncc.edu (F.E. Vergara), saydam@uncc.edu (C. Saydam), xiao@uncc.edu (J. Xiao). European Journal of Operational Research 177 (2007) 83–101 www.elsevier.com/locate/ejor
  • 2. The literature on ambulance location problems is rich and diverse. Most early models tended to be deterministic in nature and somewhat less realistic in part due to computational limitations faced by the researchers at that time. Recent advances both in hardware and software, and more importantly in meta-heuristics, as well as the availability of complex and efficient solvers have enabled researchers to develop more realistic and sophisti- cated models. A recent review by Brotcorne et al. is an excellent source for readers to trace the devel- opments in this domain [9]. Also, reviews by Owen and Daskin [30], and Schilling et al. [36] examine and classify earlier models. Probabilistic location models acknowledge the possibility that a given ambulance may not be available, and therefore model this uncertainty either using a queuing framework or via a mathe- matical programming approach, albeit with some simplifying assumptions. Probabilistic location models built upon a mathematical programming framework have evolved from two classical deter- ministic location models. The first model is a cov- erage model, known as the set covering location problem (SCLP) developed by Toregas et al. [37]. The objective of this model is to minimize the number of facilities required to cover a set of demand points within a given response-time stan- dard. In this model, all response units (ambu- lances) are assumed to be available at all times and all demand points are treated equally. The sec- ond model is the maximal covering location prob- lem (MCLP) by Church and ReVelle [11]. The MCLP represents an important advance in that it acknowledges the reality of limited resources available to the EMS managers by fixing the num- ber of response units while attempting to maximize the population covered within the response-time standard. Various extensions of these deterministic models have also been developed [9,11,31,35]. In contrast to previous deterministic models, Daskin’s maximum expected coverage location problem (MEXCLP) model was among the first to explicitly incorporate the probabilistic dimen- sion of the ambulance location model [14]. Daskin removed the implicit assumption that all units are available at all times by incorporating a system- wide unit busy probability into an otherwise deter- ministic location model. Given a predetermined number of response units, MEXCLP maximizes the expected coverage subject to a response-time standard or, equivalently, a distance threshold. Daskin also designed an efficient vertex substitu- tion heuristic in order to solve the MEXCLP. This heuristic works well for small to medium sized problems. However, a comparative study con- ducted by Aytug and Saydam found that for large-scale applications genetic algorithm imple- mentations provided faster and better results, although not sufficiently fast for real-time dynamic redeployment applications [1]. In their 1989 article, ReVelle and Hogan [31] suggest two important probabilistic formulations known as the maximum availability location prob- lem I and problem II (MALP I & II). Both models distribute a fixed number of response units in order to maximize the population covered with a response unit available within the response-time standard with a predetermined reliability. 
In MALP I, a system-wide busy probability is com- puted for all units, which is similar to MEXCLP, while in MALP II, the region is divided into neigh- borhoods and local busy fractions for response units in each neighborhood is computed, assuming that the immediate area of interest is isolated from the rest of the region. Recently, Galvao et al. [16] presented a unified view of MEXCLP and MALP by dropping their simplifying assumptions and using Larson’s hypercube model [25] to compute response unit specific busy probabilities. They demonstrated, using a simulated annealing meth- odology, that both extended models (referred to as EMEXCLP and EMALP) produced better results over the previous models, with some addi- tional computational burden. One of the ways that EMS managers can improve the system performance is by the dynamic redeployment of ambulances in response to demand that fluctuates throughout the week, day of the week, and even hour by hour within a given day. For example, morning rush hour commutes shift workers from suburban communities towards their workplaces in cities or major employment centers, and returns workers home at the end of the workday. Given that operating conditions fluc- tuate, dynamic redeployment strategies require fast 84 H.K. Rajagopalan et al. / European Journal of Operational Research 177 (2007) 83–101
  • 3. and effective methods that can be applied in real- time for typically large-scale problem domains. To our knowledge, this problem has been scarcely studied with the notable exception of a study by Gendreau et al. [18]. In their novel approach, Gendreau et al. build upon their earlier double cov- erage model [17] and incorporate several practical considerations. They show that their model can be successfully solved using a parallelized version of their tabu search algorithm first developed for a static version of the double coverage model. In a recent review article, Brotcorne et al. suggests that any developments in this field are likely to take place in the area of dynamic redeployment models along with fast and accurate heuristics [9]. A crucial factor for dynamic redeployment models is the size of zones used to represent poten- tial demand areas. The accuracy and realism of these models are increased by decreasing zone sizes, which in turn increases the number of zones. Given that the solution space of locating m response units within n zones is nm , the need for fast and powerful heuristic algorithms for large- scale applications becomes self-evident. In this paper we report our work in applying four meta-heuristic approaches to the MEXCLP, the use of statistical experimental design to ensure unbiased and objective analyses [4], and present the results, findings, and comparative analyses, which demonstrate the effectiveness of each meta- heuristic on small- to large-scale problems. This paper is organized as follows. In Section 2 MEXCLP is reviewed in detail. Section 3 describes the four meta-heuristics, and Section 4 contains the experimental design. Section 5 presents a detailed report on the results and analyses. Section 6 includes concluding remarks and suggestions for future research. 2. A probabilistic location model MEXCLP proposed by Daskin [13,14] maxi- mizes the expected coverage of demand subject to a distance r (or time threshold t) by locating m response units on a network of n zones. MEX- CLP assumes that all units operate independently and each has an identical system-wide busy prob- ability, p. If a zone is covered by i units (i = 0,1,2,. . .,m), then the probability that it is covered by at least one unit is 1 À pi . Let hj denote the demand at node j, thus the expected coverage of zone j is hj (1 À pi ). Let akj = 1 if response unit in zone k is within distance r of zone j (covers j), 0 otherwise; yij = 1 if zone j is covered at least i times, 0 otherwise; and let xj = the number of units located in zone j. Daskin formulates the MEXCLP as an integer linear programming (ILP) model as follows: Maximize Xn j¼1 Xm i¼1 ð1 À pÞpiÀ1 hjyij ð1Þ Subject to Xm i¼1 yij À Xn k¼1 akjxk 6 0 8j; ð2Þ Xn j¼1 xj 6 m; ð3Þ xj ¼ 0; 1; 2 . . . ; m 8j; ð4Þ yij ¼ 0; 1 8i; j. ð5Þ The objective function (1) maximizes the expected number of calls that can be covered, constraint (2) counts the number of times zone j is covered and relates the decision variables yij to the first set of decision variables, xj. Constraint (3) specifies the number of response units available, and constraint (4) allows multiple units to be located at any zone. This formulation can be rewritten using fewer variables with a non-linear objective function [34]. Let yj denote the number of times zone j is covered and rewrite the expected coverage for zone j as hjð1 À pyj Þ and constraint (2) as Pn k¼1akjxk ¼ yj. 
The non-linear version is formulated as fol- lows: Maximize Xn j¼1 hjð1 À pyj Þ ð6Þ Subject to Xn k¼1 akjxk ¼ yj 8j; ð7Þ Xn k¼1 xk 6 m; ð8Þ xk ¼ 0; 1; 2; . . . ; m 8k; ð9Þ yj ¼ 0; 1; 2; . . . ; m 8j. ð10Þ H.K. Rajagopalan et al. / European Journal of Operational Research 177 (2007) 83–101 85
  • 4. The original MEXCLP and the non-linear formula- tion above are theoretically identical since MEX- CLP searches the feasible space using binary variables that sum up to the number of times each node is covered whereas the non-linear version searches the feasible space using positive integers which count the number of times each zone is cov- ered. The MEXCLP model is useful for implementa- tion via software like CPLEX [12]. We use CPLEX to determine the optimal solution for the MEXCLP ILP formulation. The non-linear model lends itself to easier implementation for meta-heuristic search methods such as tabu search and evolutionary algo- rithms where the objective function (Eq. (6)) can be directly coded as the fitness function. In order to solve MEXCLP, we begin by gener- ating a hypothetical city (county, region) spread across 1024 sq. miles and impose a 32 mile by 32 mile grid over it. Each problem size utilizes this grid; however each problem size imposes a differ- ent level of granularity. For example, an 8 · 8 grid would contain 64 zones with each zone being a square of 4 miles by 4 miles, a 16 · 16 grid would contain 256 zones with each zone being a square of 2 miles by 2 miles, and a 32 · 32 grid would con- tain 1024 zones with each zone being a square of 1 mile by 1 mile. Furthermore, we randomly generated uni- formly and non-uniformly distributed call (demand) rates for each of 1024 zones. Uniform distribution spreads demand for calls somewhat evenly across the region. However, non-uniformly distributed demand simulates a more realistic sce- nario for calls where there are significant varia- tions across the region as shown in Fig. 1. The demand rates for 64 and 256 zone problems are then computed by a straightforward aggregation scheme using the demand rates for the 1024 zone representation. Furthermore, for each grid (prob- lem) size twenty demand (call) distributions are generated, 10 having a uniform demand distribu- tion and 10 having a non-uniform distribution. These two elements, the grid representation and demand distributions, define the basis for the problem which can be briefly restated as follows: for a given problem size and call demand distribu- tion, locate m units (ambulances) to maximize the expected coverage. 3. Meta-heuristic search methods Location/relocation problems are typically NP- complete problems [33]. The size of the solution space of locating m response units in n zones is nm . Because of the complex combinatorial nature of these problems, there have been various attempts to identify near-optimal solutions through the use of meta-heuristic search methods; most recently, evolutionary algorithms [1,5,7,8,17, 18,23,33,34]. Meta-heuristic search methods try to find good solutions by searching the solution space using two kinds of search pressures: (1) exploration, and (2) exploitation. Exploration pressures cause the meta-heuristics to evaluate solutions which are outside the current neighborhood of a given solu- tion, while exploitation pressures help the meta- heuristics use the current position of a solution in a neighborhood to search for better solutions within that neighborhood. In this paper, we consider four meta-heuristic search methods, evolutionary algorithm (EA), tabu search (TS), simulated annealing (SA), and hybridized hill-climbing (HC), and attempt to adapt each meta-heuristic’s method of exploration and exploitation to help identify good solutions in MEXCLP. 
Fig. 1. Non-uniformly distributed call (demand) data.

EA is a meta-heuristic search method that uses the concept of evolution to generate solutions to complex problems that typically have very large
search spaces and hence are difficult to solve [22]. EAs "are a class of general purpose (domain independent) search methods" [26]. There are numerous and varied examples of how EAs can be applied to complex problems [2,26,38], including problems in the location domain [1,6].

TS is a meta-heuristic search method developed by Glover [19,20]. A unique feature of TS is its use of a memory, or list. Once a solution is entered into the TS memory, it is then tabu, or disallowed, for some time. The idea is to restrict the algorithm from visiting the same solution more than once in a given time period. The exploration pressure in TS is its ability to accept a worse solution as it progresses through its search. Tabu search has been successfully applied to various problem domains [29], including covering location models [17,18].

SA is a meta-heuristic that accepts a worse solution with some probability [24]. The ability of SA to "go downhill" by accepting a worse solution acts as a mechanism for exploratory pressure and can help SA escape local maxima. SA has been successfully used for location modeling [9,15].

HC is an iterative improvement search method; strictly speaking, therefore, HC is not a meta-heuristic. However, in our implementation HC is hybridized via the inclusion of mutation operators. The algorithm, as originally devised, "is a strategy which exploits the best solution for possible improvement; [however] it neglects exploration of the search space" [26]. The addition of the mutation operators in the hybridized HC greatly improves exploration of the search space, allowing HC to search each neighborhood where it lands in a systematic way before moving to the next randomly determined location in the search space.

3.1. Data representation

The data representation used for all meta-heuristics is a vector: a one-dimensional array of size m + 1, where m is the number of response units in the system. Index 0 in each vector contains the vector's fitness value (fv) as determined by the evaluation function. Each of the remaining indices, 1 through m, represents a specific response unit, and the value recorded at each of these indices indicates the zone to which that unit is assigned. The zone numbers range between 1 and n, where n is the number of zones in the grid; therefore, n equals 64, 256, and 1024 for the three problem sizes, respectively. Fig. 2 illustrates the data representation, i.e., the vector, used for all meta-heuristics.

3.2. Algorithm structure

All four meta-heuristic algorithms have the following common structure:

BEGIN
  Initialization
  Evaluation of data vector(s)
  LOOP:
    Operation(s) to alter data vector(s)
    Evaluation of data vector(s)
  UNTIL termination condition
END

3.2.1. Initialization and evaluation

In the initialization step each response unit is randomly assigned a zone number within the problem grid; this random solution is then evaluated using Eq. (6), and the fitness value is stored in the vector. TS, SA, and HC perform their search using a single vector; in contrast, EA uses a population of vectors. Given that the objective is to maximize expected coverage, higher evaluations, i.e., larger fitness values, represent better solutions.

3.2.2. Operators

Operators are procedures used within meta-heuristics that help define the different levels of search pressure: exploration and exploitation. In this study we utilize a number of operators, namely low-order mutation, high-order mutation, and crossover.
Fig. 2. Data representation: a vector of size m + 1 with the fitness value fv at index 0 and a zone number at each of indices 1 through m.
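As a concrete illustration of this shared representation and of the common structure above, the sketch below encodes a solution as a Python list with the fitness value at index 0 and zone numbers at indices 1 through m, and runs a generic initialize-alter-evaluate loop with a stopping rule of the kind described in Section 3.2.3 (at least 100 iterations, then stop once 20 iterations pass without improvement). Python and the exact bookkeeping are our own assumptions, not the authors' code.

```python
import random

def random_vector(m, n, fitness):
    """Index 0 stores the fitness value; indices 1..m store the zone
    (1..n) assigned to each response unit, as in Fig. 2."""
    v = [0.0] + [random.randint(1, n) for _ in range(m)]
    v[0] = fitness(v[1:])
    return v

def run(alter, fitness, m, n, max_iter=100, patience=20):
    """Common skeleton shared by all four meta-heuristics: initialize,
    then repeatedly alter and re-evaluate the vector, terminating after
    max_iter iterations unless improvements keep occurring, in which
    case the loop continues until `patience` iterations pass with no
    improvement."""
    current = random_vector(m, n, fitness)
    best = list(current)
    stale = 0                                # iterations since last improvement
    it = 0
    while it < max_iter or stale < patience:
        current = alter(current, fitness)    # meta-heuristic-specific step
        if current[0] > best[0]:
            best, stale = list(current), 0
        else:
            stale += 1
        it += 1
    return best
```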
However, due to the structure of the different meta-heuristics, not all operators are used in every meta-heuristic.

Crossover is a genetic operator that needs at least two vectors to create new vectors, called offspring. We use a one-point crossover in which two parents create two offspring [26,27]. Crossover promotes exploratory pressure, forcing the meta-heuristic to look for a solution that lies between other solutions already identified within the search space.

The idea behind the mutation operator is to provide exploitation pressure in order to search the neighborhood for a better solution. In our implementation, the low-order mutation operator is used to perform a neighborhood search; neighborhood searches are commonly encountered in meta-heuristics [27]. The definition of a neighborhood emphasizes the difference between the competing search pressures of exploration and exploitation. We define a neighborhood as the zones immediately adjacent to a given response unit's location. For example, in Fig. 3, given a response unit in zone 28, the neighborhood zones are 19, 20, 21, 27, 29, 35, 36 and 37. When this procedure is called, the algorithm reads the zone stored at a selected index of the vector. The neighborhood zones are then systematically placed in the selected index, freezing the values at all other indices, and the fitness value of each resulting vector is calculated. At the end of the neighborhood search, the vector with the highest fitness value is returned.

The low-order mutation operator promotes exploitation pressure. An algorithm with even stronger exploitation pressure could be designed by searching the entire neighborhood of all m response units simultaneously, i.e., completing a comprehensive neighborhood search over all indices at once. However, this would require on the order of 8^m evaluations, which is not computationally feasible for large values of m. Consequently, the low-order mutation operator searches within one selected index at a time, holding all other indices constant, so that cycling through all m indices requires only about 8 × m evaluations.

High-order mutation is another genetic mutation operator, and is applied to one parent vector in order to create one offspring. To begin the high-order mutation, two cut locations are randomly determined. A third number, between 0 and 1, is then randomly generated. If the number is less than 0.5, the "genetic material" between the cut points remains unaltered and the data outside the cuts is changed: the selected indices are filled with randomly generated numbers between 1 and n, where n is the number of zones in the problem grid. If the third randomly generated number is greater than or equal to 0.5, the "genetic material" outside the cuts remains unaltered and the data between the cuts is changed by filling the selected indices with randomly generated numbers between 1 and n. Fig. 4 illustrates the high-order mutation operator. The high-order mutation operator is desirable because it keeps part of the earlier solution while moving away from the current neighborhood; it therefore has higher exploratory pressure than low-order mutation.

3.2.3. Termination condition

Preliminary tests indicated that the improvements experienced by all four meta-heuristics were less significant after 100 iterations. For that reason, all meta-heuristics for all problem sizes were programmed to terminate after 100 iterations, unless a significant improvement had occurred in the last 20 iterations. If so, the algorithm was allowed to run longer: an algorithm requiring more than 100 iterations continues to run until no improvements are identified for 20 successive iterations.
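Sketches of the two mutation operators follow (our own Python illustration, not the authors' code; zones are assumed to be numbered row-wise from 1 on a g × g grid, as in Fig. 3, and the offspring's fitness at index 0 is left for the caller to re-evaluate after high-order mutation).

```python
import random

def neighbors(zone, g):
    """Zones adjacent to `zone` on a g x g grid with zones numbered
    row-wise from 1 (e.g. zone 28 on the 8 x 8 grid has neighbors
    19, 20, 21, 27, 29, 35, 36, 37). Border zones get fewer neighbors."""
    r, c = divmod(zone - 1, g)
    return [gr * g + gc + 1
            for gr in range(max(0, r - 1), min(g, r + 2))
            for gc in range(max(0, c - 1), min(g, c + 2))
            if (gr, gc) != (r, c)]

def low_order_mutation(vector, fitness, index, g):
    """Greedy neighborhood search on one selected index: evaluate every
    adjacent zone for that unit, holding all other indices constant,
    and return the best candidate found."""
    best = None
    for z in neighbors(vector[index], g):
        candidate = list(vector)
        candidate[index] = z
        candidate[0] = fitness(candidate[1:])
        if best is None or candidate[0] > best[0]:
            best = candidate
    return best

def high_order_mutation(vector, n):
    """Pick two random cut points; with probability 0.5 refill the
    indices outside the cuts with random zones, otherwise refill the
    indices between the cuts (Fig. 4)."""
    m = len(vector) - 1
    a, b = sorted(random.sample(range(1, m + 1), 2))
    child = list(vector)
    idx = (list(range(1, a)) + list(range(b + 1, m + 1))
           if random.random() < 0.5 else list(range(a, b + 1)))
    for i in idx:
        child[i] = random.randint(1, n)
    return child      # caller re-evaluates child[0]
```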
Fig. 3. Grid representation of zones for the 64 zone (8 × 8) problem: zones are numbered 1 through 64, row by row.

3.3. Evolutionary algorithm (EA)

EAs create, manipulate, and maintain a population of solutions to a problem, where each solution
is called a vector (commonly referred to as a chromosome) representing a possible solution to the problem. Interestingly, one can think of an EA as performing multiple searches in "parallel because each [chromosome] in the population can be seen as a separate search" [32].

The larger the population size, the better the final solution typically is; however, computation time increases proportionally with population size. Preliminary tests were run to determine the population size and the genetic operator probability rates; these parameters are summarized in Table 2. The initial experiments took a candidate problem and ran it with crossover, high-order, and low-order mutation rates from 0.1 to 0.9 (in increments of 0.1), for population sizes of 20, 50, and 100, giving data from 2187 runs (9 * 9 * 9 * 3 = 2187). From this data we examined the trade-off between solution quality and the time needed to reach it, and settled on the following parameters: population size = 20, crossover rate = 0.5, high-order mutation rate = 0.1, and low-order mutation rate = 0.1.

Fig. 4. High-order mutation (mutate ends): two cut points are chosen at random, and either the indices between the cuts or the indices outside the cuts are refilled with random zone numbers between 1 and n.

The EA procedure is as follows:

1. Generate a population of 20 vectors.
2. Each vector has a probability of being selected for crossover, high-order, or low-order mutation as described in Table 2. The vector with the highest evaluation function value has a probability of 1.0 of being selected for all three operations.
3. A vector will not have multiple operations performed on it in the same generation, but two copies of the same vector can be selected for two different operations. For example, the result of a mutation in one generation will not be used for crossover in the same generation; however, the parent that was used for the mutation may be selected for crossover.
4. The crossover, high-order, and low-order mutation operators are applied to the selected vectors.
5. The old population and the offspring are put together in a pool, and the next generation is created by tournament selection [26,27] performed on two randomly selected vectors from the pool. Of the two vectors, the one with the higher evaluation function value wins and is copied into the next generation's population. An elitist strategy is also employed, whereby the best vector is always carried forward.
6. Once the new population is selected, the process returns to step 2 until the termination condition is reached.
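A compact sketch of this EA follows (our own Python illustration, reusing the operator sketches from Section 3.2.2 and the parameter values from Table 2; the choice of mutated index and of crossover mates is simplified to uniform random selection, and a fixed number of generations replaces the termination rule, since the paper does not spell these details out).

```python
import random

def one_point_crossover(a, b, fitness):
    """One-point crossover producing two offspring."""
    m = len(a) - 1
    cut = random.randint(1, m - 1)
    c1 = [0.0] + a[1:cut + 1] + b[cut + 1:]
    c2 = [0.0] + b[1:cut + 1] + a[cut + 1:]
    c1[0], c2[0] = fitness(c1[1:]), fitness(c2[1:])
    return c1, c2

def evolutionary_algorithm(fitness, m, n, g, pop_size=20, generations=100,
                           p_cross=0.5, p_high=0.1, p_low=0.1):
    """Sketch of the EA of Section 3.3; g is the grid side length (g*g = n)."""
    def new_vector():
        v = [0.0] + [random.randint(1, n) for _ in range(m)]
        v[0] = fitness(v[1:])
        return v

    pop = [new_vector() for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=lambda v: v[0])
        offspring = []
        # steps 2-4: select (copies of) vectors for each operator;
        # the best vector is selected for all three operations
        for v in pop:
            if v is best or random.random() < p_cross:
                mate = random.choice(pop)
                offspring.extend(one_point_crossover(list(v), list(mate), fitness))
            if v is best or random.random() < p_high:
                child = high_order_mutation(list(v), n)
                child[0] = fitness(child[1:])
                offspring.append(child)
            if v is best or random.random() < p_low:
                idx = random.randint(1, m)
                offspring.append(low_order_mutation(list(v), fitness, idx, g))
        # step 5: tournament selection from the combined pool, with elitism
        pool = pop + offspring
        pop = [max(random.sample(pool, 2), key=lambda v: v[0])
               for _ in range(pop_size - 1)] + [list(best)]
    return max(pop, key=lambda v: v[0])
```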
3.4. Tabu search (TS)

TS uses only one vector to traverse the search space; consequently, the quality of the final TS solution depends on the quality of the initial vector. As with all the meta-heuristics, TS uses the low-order mutation operator to provide exploitation pressure, and the best vector found by the low-order mutation operator is accepted regardless of whether it is worse than the original solution. By continually accepting the best solution in a vector's neighborhood, TS is able to systematically search the problem's search space. The best solution found throughout the run of the TS algorithm is always recorded; however, it might not be the solution contained in the final (last) vector when the algorithm terminates.

To tune the memory parameter, a candidate problem was selected randomly and run with tabu list sizes from 10 chromosomes to 200 (in increments of 10) for 8, 9, 10 and 11 servers (20 * 4 = 80 runs in total). We noticed in these initial runs that there were minor fluctuations close to list sizes that were multiples of the number of servers. We hypothesized that this occurs because our greedy search operator, the low-order mutation operator, changes only one index of the vector at each iteration, systematically moving from the first index to the last, and because the vector size is m, the number of servers: having m chromosomes in the tabu list stores the zones of all the servers being changed once, and having 3 * m chromosomes stores the zones of all the servers being changed three times. We included the entire chromosome in the tabu list, rather than just the zones being changed, because the problem depends not on one location but on the locations of all servers at any point in time. The tabu list size of 3 * m was decided upon by running initial experiments with values from 1 * m to 20 * m for a candidate problem.

3.5. Simulated annealing (SA)

SA, like TS, uses a single vector to search for a good solution to a given problem, and like TS its final solution is also dependent on the quality of the initial vector. SA starts with a randomly generated vector and then uses the low-order mutation operator to evolve the solution. After initial trial runs with problem sizes of 64, 256 and 1024 zones, we noticed that SA tends to get stuck at local optima, especially for larger problems; this was more pronounced for SA than for any of the other methods. Upon closer investigation, it was determined that the temperature parameter had not been properly tuned. Because of the large number of problems to be solved, it was not practical to tune the temperature for every problem. Therefore, in our implementation, we use an approach developed by Chiyoshi and Galvao [10] to automatically adjust the temperature parameter during runtime through a feedback loop. This method uses the ratio of the number of solutions accepted to the number of solutions generated as feedback from the system. When this ratio drops below 40%, the exploration pressure is not strong enough and the temperature is raised; when the ratio goes above 60%, the exploration pressure is too high and the temperature is lowered. After some experimentation with initial temperature values of 20, 40 and 60, an initial temperature of 60 was selected.
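Sketches of TS and SA consistent with the descriptions above follow (our own Python illustration, reusing low_order_mutation from Section 3.2.2; the Metropolis-style acceptance rule in SA and the cycling of the mutated index are assumptions, as the paper does not spell them out).

```python
import math
import random

def tabu_search(fitness, m, n, g, iterations=100, tabu_factor=3):
    """Single vector, low-order mutation as the move generator, and a
    tabu list of complete chromosomes of size 3 * m (Section 3.4). The
    best neighbor is accepted even if it is worse than the current one."""
    current = [0.0] + [random.randint(1, n) for _ in range(m)]
    current[0] = fitness(current[1:])
    best = list(current)
    tabu = []                                  # recently visited chromosomes
    for it in range(iterations):
        idx = 1 + it % m                       # cycle through the m indices
        candidate = low_order_mutation(current, fitness, idx, g)
        if candidate[1:] not in tabu:          # skip solutions marked tabu
            current = candidate
            tabu.append(list(current[1:]))
            if len(tabu) > tabu_factor * m:
                tabu.pop(0)
            if current[0] > best[0]:
                best = list(current)
    return best

def simulated_annealing(fitness, m, n, g, iterations=100, temperature=60.0):
    """SA with the feedback-based temperature adjustment of Chiyoshi and
    Galvao [10] as summarized in Table 2."""
    current = [0.0] + [random.randint(1, n) for _ in range(m)]
    current[0] = fitness(current[1:])
    best = list(current)
    accepted = generated = 0
    for it in range(iterations):
        candidate = low_order_mutation(current, fitness, 1 + it % m, g)
        generated += 1
        delta = candidate[0] - current[0]
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            current = candidate
            accepted += 1
            if current[0] > best[0]:
                best = list(current)
        ratio = accepted / generated
        if ratio < 0.4:        # too little exploration: raise the temperature
            temperature *= 1 + (0.4 - ratio) / 0.4
        elif ratio >= 0.6:     # too much exploration: lower the temperature
            temperature /= 1 + (ratio - 0.6) / 0.4
    return best
```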
3.6. Hill-climbing (HC)

HC also maintains a single vector throughout the run of the algorithm, which is altered at each generation by searching the neighborhood for better solutions. After initial trial runs with small problem sizes (64 zones with 8 response units), HC often got stuck at local optima when using only the low-order mutation operator. The random-restart operator provides enormous exploratory pressure and forces the meta-heuristic to begin searching in a different region of the search space, thus escaping local optima. Even so, this was not sufficient for HC to compete effectively with the other three meta-heuristics. Therefore, in an effort to hybridize HC, the high-order mutation operator was introduced as an intermediate step between the low-order mutation operator and the random-restart procedure to alleviate this problem.

HC begins with a randomly generated vector and then employs the low-order mutation operator. A new vector is passed to the next iteration only if it is better than the old vector. The low-order mutation operator is used until m iterations have passed without any new solution being accepted, where m is the number of servers. We use m iterations because of the way the low-order mutation operator is designed: the low-order mutation operator, regardless of
whether employed in HC, SA, TS, or EA, always takes the first index of the vector, which represents the first server, and mutates it with a greedy neighborhood search. This constitutes one iteration for HC, and the new vector is accepted only if it is better than the existing vector. In the second iteration the second server is mutated, and so on; after m iterations, all m server positions have been changed. If m iterations have passed in HC and no new solution has been accepted, then an attempt has been made to change every server position individually and the current position is a local optimum.

Once no more improvements can be found through the use of the low-order mutation operator, the algorithm performs high-order mutation on the vector. After the vector undergoes the high-order mutation, the algorithm begins searching the new neighborhood using the low-order mutation operator again. After the high-order mutation has been applied m times without an improvement, the neighborhood is considered exploited, and the random-restart procedure is then used to start the search in a new neighborhood. The "best" result from the multiple restarts is deemed to be the "best" solution achieved by the HC algorithm. This procedure is referred to as random-restart hill-climbing [32].

4. Experimental design

To test the meta-heuristics we use the statistical experimental design approach advocated by Barr et al. [4], which enables objective analyses and provides statistically significant conclusions, including interactions among independent variables. Table 1 summarizes the elements unique to each of the four meta-heuristics, Table 2 summarizes the unique parameters used by each meta-heuristic, and Table 3 summarizes the approximate number of evaluation functions required for each meta-heuristic.

Table 1. Summary of all meta-heuristics
  EA: low-order mutation, high-order mutation, crossover, and (generational) selection operators; common data representation; randomly generated initial vectors; population of vectors (not a single vector); common fitness evaluation function; other unique parameters: Prob(crossover), Prob(low-order mutation), Prob(high-order mutation).
  HC: low-order and high-order mutation; common data representation; randomly generated initial vector; single vector; common fitness evaluation function; no other unique parameters.
  SA: low-order mutation; common data representation; randomly generated initial vector; single vector; common fitness evaluation function; other unique parameter: temperature.
  TS: low-order mutation; common data representation; randomly generated initial vector; single vector; common fitness evaluation function; other unique parameter: memory (list) size.

Table 4 shows the independent and dependent variables. There are three independent variables, (1) distribution (D), (2) size (S), and (3) heuristic search method (HS), and two
dependent variables, (1) quality of solution (QOS) and (2) time to reach the best solution (TBS).

Problems with demand distributed uniformly across the grid are easier for all heuristic methods to solve than problems with non-uniformly distributed demand. With a uniform demand distribution, randomly assigned response unit locations show less deviation in solution quality, and solutions that adequately cover demand are identified faster. For non-uniform demand distributions, random assignment gives no assurance that solution quality will be consistent, and the initial randomly generated chromosomes can have vastly different fitness values. Therefore, we hypothesize that (H1) problems with uniformly distributed demand will be solved with better QOS and smaller TBS values (faster) than problems with non-uniformly distributed demand.

In our implementation of the different meta-heuristic methods, the exploitation pressure is the same: all of the methods use the low-order mutation operator for exploitation. However, each meta-heuristic has a different degree of exploration pressure. HC uses high-order mutation and a random-restart procedure, SA and TS can accept worse solutions, and EA uses random generation of an initial population, a crossover operator, and a high-order mutation operator. Therefore, we hypothesize that (H2) the four heuristic search methods will solve problems with different values of TBS and QOS.

As can be seen from Table 4, all the independent variables are categorical and the dependent variables are continuous. Since there is more than one independent variable, the experiment must be set up to consider the effect of each independent variable on each dependent variable individually, while also taking into account interactions between the independent variables. Therefore, we use a randomized factorial design and ANOVA [4,28] to test for direct effects of the independent variables and their interactions.

Table 2. Summary of unique parameters used for each meta-heuristic
  EA: probability of low-order mutation = 0.1; probability of high-order mutation = 0.1; probability of crossover = 0.5; population size = 20.
  HC: population size = 1; no other parameters.
  SA: population size = 1; temperature adjusted by a feedback loop, with Probability = (# of solutions accepted)/(# of solutions generated):
    if (Probability < 0.4), then Temperature = Temperature * (1 + ((0.4 − Probability)/0.4));
    else if (Probability ≥ 0.6), then Temperature = Temperature * (1/(1 + ((Probability − 0.6)/0.4))).
  TS: population size = 1; memory (list) size = 3 * m, where m is the number of response units in the system.
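The paper reports this analysis from SPSS [28]; purely as an illustration of the factorial ANOVA just described, the sketch below shows how the same model could be fit with the Python statsmodels package on a results table with one row per solved problem (the file and column names are our own assumptions).

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# results.csv is assumed to hold one row per solved problem with columns:
#   S   - problem size (64, 256, 1024)
#   D   - demand distribution ('U' or 'NU')
#   HS  - heuristic ('EA', 'TS', 'SA', 'HC')
#   QOS - solution quality (ratio to the CPLEX optimum)
#   TBS - time to best solution in seconds
runs = pd.read_csv("results.csv")

# Full-factorial model: main effects plus all two- and three-way interactions
model = smf.ols("QOS ~ C(S) * C(D) * C(HS)", data=runs).fit()
print(anova_lm(model, typ=2))    # ANOVA table analogous to Table 6

# The same formula with TBS as the response reproduces the Table 9 analysis;
# post hoc pairwise comparisons (Tamhane's test in the paper) would follow.
```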
A problem configuration is defined by the problem's grid size and the type of demand distribution, for all four values of m (8, 9, 10 and 11), where m is the number of response units in the system. There are six problem configurations (three sizes and two distributions). Each problem configuration has 10 demand distributions, and each demand distribution is run 10 times for each of the four values of m, for a total of 400 test problems. This is done for each of the four meta-heuristics; when testing across the four meta-heuristics there are a total of 1600 (4 × 400) data points.

The solutions of the four meta-heuristic methods were compared with the optimal solution identified through integer linear programming using CPLEX. All runs were done on a Dell PC with an Intel Pentium 4 2.4 GHz processor and 512 MB of RAM.

When evaluating the results, two metrics are used: (1) the average quality of solution (QOS), i.e., how close the solution given by each meta-heuristic is to the optimal solution, and (2) the average time at which the best solution (TBS) was found by each meta-heuristic. When comparing the computing effort of the different algorithms, the number of iterations is a misleading metric, since a meta-heuristic search method might require fewer iterations and yet still perform more evaluation functions, thereby requiring more time to complete than another meta-heuristic. The number of evaluations done per iteration can also vary substantially across meta-heuristics; therefore, the number of iterations to reach the "best" solution is not strictly comparable. For these reasons, we use the time (in seconds) to reach the "best" solution as a proxy for computing effort.

Table 3. The approximate number of evaluation functions required for each meta-heuristic (initial random generation plus approximate evaluations per 100 iterations)
  EA: random generation 20; crossover 8 * 100; high-order mutation 2 * 100; low-order mutation 16 * 100; random restart N/A; total 2620.
  HC: random generation 1; crossover N/A; high-order mutation 3 * 1; low-order mutation 8 * 95; random restart 2 * 1; total 766.
  SA: random generation 1; crossover N/A; high-order mutation N/A; low-order mutation 8 * 100; random restart N/A; total 801.
  TS: random generation 1; crossover N/A; high-order mutation N/A; low-order mutation 8 * 100; random restart N/A; total 801.

Table 4. Experimental setup
  Independent variables:
    (1) Distribution (D): uniform (U); non-uniform (NU).
    (2) Size (S): 64 zones; 256 zones; 1024 zones.
    (3) Heuristic search method (HS): hill-climbing (HC); tabu search (TS); simulated annealing (SA); evolutionary algorithm (EA).
  Dependent variables:
    (1) Quality of solution (QOS), measured as the ratio of the given solution to the optimum solution.
    (2) Time to reach the best solution (TBS).

5. Results and discussion

We solved 400 problems for each of the six problem configuration and heuristic combinations and computed the average and standard deviation of solution quality (QOS) and time to best solution (TBS) for each combination. The total number of problems solved was 9600; the sample size for each combination is therefore 400. Table 5 displays the mean and standard deviation of solution quality (QOS) and time (TBS), respectively.

Overall, the results show that our implementations of all four meta-heuristics produce very good results, particularly with respect to solution quality: the average solution quality is nearly 99%, with a standard deviation of about 1%, across all problem configurations and all meta-heuristics (9600 problems). The data were tested using ANOVA, and post hoc Tamhane's analysis was completed to find specific differences. Tamhane's analysis was chosen over Bonferroni's or Tukey's
because we could not assume equal variances among the different categories of the independent variables. Tables 6-8 show the ANOVA and the post hoc Tamhane's analyses with respect to QOS. An alpha of 0.05 was set for statistical significance.

Table 5. Performance of the different meta-heuristic search methods with respect to quality of solution (QOS) and time in seconds to reach the best solution (TBS); entries are mean (std. dev.), N = 9600.
  Uniform, 64 zones:
    QOS: CPLEX 1.0000 (0.00); HC 0.9924 (0.76); TS 0.9882 (1.09); SA 0.9884 (0.93); EA 0.9927 (0.62)
    TBS: CPLEX 0.19 (0.15); HC 0.36 (0.17); TS 0.20 (0.09); SA 0.19 (0.08); EA 2.39 (1.95)
  Uniform, 256 zones:
    QOS: CPLEX 1.0000 (0.00); HC 0.9906 (0.94); TS 0.9893 (1.05); SA 0.9899 (0.97); EA 0.9936 (0.58)
    TBS: CPLEX 3.92 (28.83); HC 2.35 (0.79); TS 1.88 (0.68); SA 1.86 (0.68); EA 9.82 (4.69)
  Uniform, 1024 zones:
    QOS: CPLEX 1.0000 (0.00); HC 0.9873 (1.18); TS 0.9885 (1.05); SA 0.9873 (1.26); EA 0.9918 (0.76)
    TBS: CPLEX 137.16 (61.42); HC 37.67 (13.98); TS 37.96 (14.84); SA 38.11 (13.41); EA 143.47 (45.73)
  Non-uniform, 64 zones:
    QOS: CPLEX 1.0000 (0.00); HC 0.9926 (0.84); TS 0.9886 (1.11); SA 0.9885 (1.11); EA 0.9940 (0.62)
    TBS: CPLEX 0.16 (0.11); HC 0.36 (0.18); TS 0.20 (0.09); SA 0.19 (0.09); EA 2.56 (2.30)
  Non-uniform, 256 zones:
    QOS: CPLEX 1.0000 (0.00); HC 0.9903 (0.89); TS 0.9890 (1.02); SA 0.9890 (1.01); EA 0.9937 (0.61)
    TBS: CPLEX 2.66 (2.00); HC 2.35 (0.79); TS 1.98 (0.73); SA 1.84 (0.68); EA 10.30 (4.84)
  Non-uniform, 1024 zones:
    QOS: CPLEX 1.0000 (0.00); HC 0.9861 (1.20); TS 0.9852 (1.38); SA 0.9857 (1.15); EA 0.9918 (0.71)
    TBS: CPLEX 146.33 (78.79); HC 37.07 (14.67); TS 35.75 (13.60); SA 37.78 (13.84); EA 147.47 (48.63)

Table 6. ANOVA results, dependent variable: quality of solution. Columns: sum of squares, degrees of freedom, mean square, F, significance, observed power (computed using alpha = .05).
  Corrected model: 0.0617, 23, 0.0027, 28.148, .000, 1.000
  Intercept: 9404.6136, 1, 9404.6136, 98,661,395.587, .000, 1.000
  Size (S): 0.0157, 2, 0.0079, 82.425, .000, 1.000
  Distribution (D): 0.0005, 1, 0.0005, 4.814, .028, 0.593
  Heuristics (HS): 0.0368, 3, 0.0123, 128.628, .000, 1.000
  S * D: 0.0017, 2, 0.0009, 8.973, .000, 0.974
  S * HS: 0.0056, 6, 0.0009, 9.814, .000, 1.000
  D * HS: 0.0008, 3, 0.0003, 2.922, .033, 0.698
  S * D * HS: 0.0006, 6, 0.0001, 1.045, .393, 0.421
  Error: 0.9128, 9576, 0.0001
  Total: 9405.5881, 9600
  Corrected total: 0.9745, 9599
  R-squared = .063 (adjusted R-squared = .061).

The ANOVA results in Table 6 show that the variables S (size) and HS (heuristic search) significantly (alpha < 0.01) affect QOS, and that variable D (distribution) also significantly affects QOS (alpha < 0.05). We can therefore infer from Table 6 that the quality of solution depends on the problem size, the type of meta-heuristic, and the kind of demand distribution.

Table 6 also shows significant two-way interactions between S and D (alpha < 0.01), S and HS (alpha < 0.01), and D and HS (alpha < 0.05). The three-way interaction between S, D and HS is insignificant (alpha = 0.39). Closer examination of the interaction between S and HS shows that for 64 zone problems HC performs significantly better than TS and SA, but for 256 zone and 1024 zone problems there is no significant difference among these three meta-heuristics. Investigation of the interaction between D and HS shows that while HC, SA and TS do significantly better with uniform demand distributions than with non-uniform distributions, EA
does better with non-uniform demand distributions than with uniform demand distributions. Finally, examination of the interaction between D and S shows that for problems with uniform distributions the QOS does not deteriorate when S increases from 64 to 256 zones but does deteriorate for 1024 zone problems, while for non-uniform distributions the QOS already deteriorates when S increases from 64 to 256 zones.

The post hoc Tamhane's tests in Table 7 show that for all problem configurations EA tends to provide slightly better solution quality than the other three algorithms. The slightly better performance of EA might be explained by the fact that EA performs, on average, more than three times as many evaluation functions as the other three algorithms, as illustrated in Table 3. Also, although statistically significant, these differences are (a) practically negligible and (b) achieved at great cost in terms of time (Table 5).

Table 7. Post hoc analysis (Tamhane's test) for the factor heuristic search method, dependent variable quality of solution. Each entry gives the mean difference (I − J), standard error, significance, and 95% confidence interval; the reverse comparisons are the sign-reversed mirror images, and * marks differences significant at the .05 level.
  HC vs TS: 0.001743*, 0.0003095, .000, [0.000929, 0.002558]
  HC vs SA: 0.001769*, 0.0003028, .000, [0.000972, 0.002566]
  HC vs EA: −0.003035*, 0.0002463, .000, [−0.003684, −0.002387]
  TS vs SA: 0.000026, 0.0003197, 1.000, [−0.000816, 0.000867]
  TS vs EA: −0.004779*, 0.0002668, .000, [−0.005481, −0.004076]
  SA vs EA: −0.004805*, 0.0002590, .000, [−0.005486, −0.004123]

Table 8. Post hoc analysis (Tamhane's test) for the factor size, dependent variable quality of solution. Each entry gives the mean difference (I − J), standard error, significance, and 95% confidence interval; * marks differences significant at the .05 level.
  64 zone vs 256 zone: −.000005, .0002313, 1.000, [−.000557, .000547]
  64 zone vs 1024 zone: .002711*, .0002596, .000, [.002092, .003331]
  256 zone vs 1024 zone: .002717*, .0002578, .000, [.002101, .003332]

Table 7 also shows that, on average, HC performs better than SA and TS. This result can be explained by the significant interaction between HS and S: it is only in 64 zone problems that HC solutions are better than TS and SA solutions, but this pulls up HC's overall average, so the post hoc analysis across all sizes shows HC performing better than SA and TS. Removing the 64 zone problems shows that HC
does not perform significantly better than SA or TS. This result for small problems might be explained by the exploratory pressure used by HC: the random-restart procedure works very well on small problems, where the algorithm can randomly jump around the solution space in search of the best solution. In larger problems, however, jumping around is not as effective because the solution space is bigger, which negates any advantage random jumping has over the more systematic exploratory pressures of SA and TS. In general, there is no statistically significant difference between the quality of the solutions of SA and TS.

Table 8 breaks down the effect of variable S on QOS. As can be seen from the Tamhane's test in Table 8, there is no significant difference in QOS when the problem size increases from 64 to 256 zones. However, for 1024 zone problems the QOS deteriorates by a very small (0.0027) but statistically significant amount (alpha < 0.001).

Tables 9 and 10 show the ANOVA, and Tables 11 and 12 the post hoc Tamhane's analyses, with respect to TBS.

Table 9 shows the ANOVA results for the different meta-heuristic search methods without CPLEX as a category. All three independent variables S, D and HS significantly affect time to best solution (alpha < 0.01). All two-way interactions, S and D, S and HS, and D and HS, are also significant (alpha < 0.01). The three-way interaction between S, D and HS is not significant (alpha = 0.080, power = 0.707). Investigation of the interaction between S and D shows that problems with 64 zones and uniform demand distributions were solved faster than problems with 64 zones and non-uniform demand; for 256 and 1024 zones, however, the TBS values were not statistically different between uniform and non-uniform demand distributions. Similarly, the interaction between S and HS reveals that even though EA was slower than the other three meta-heuristics at all three problem sizes (64, 256 and 1024), it is at 1024 zones that this difference becomes dramatic. Finally, the interaction between D and HS reveals that while the TBS values of TS and SA differ between uniform and non-uniform demand distributions, the differences are more noticeable for EA.

To compare CPLEX with the other meta-heuristics, CPLEX times were included as another category of the variable HS, and the ANOVA results change (Table 10). Variable D is no longer significant (alpha = 0.146) and all two-way interactions involving D become insignificant (S and D, alpha = 0.078; D and HS, alpha = 0.052). This can be explained by the fact that the TBS values for CPLEX are not affected significantly by the demand distribution while the meta-heuristics' TBS values are. The interaction between S and HS can be explained from the data in Table 5.
Table 9. ANOVA results, dependent variable: time to best solution (HS not including CPLEX solutions). Columns: sum of squares, degrees of freedom, mean square, F, significance, observed power (computed using alpha = .05).
  Corrected model: 1.5E+13, 23, 6.6E+11, 2749.84, 0.000, 1.000
  Intercept: 5.3E+12, 1, 5.3E+12, 22432, 0.000, 1.000
  S: 8E+12, 2, 4E+12, 16747, 0.000, 1.000
  D: 3.2E+09, 1, 3.2E+09, 13.29, 0.000, 0.954
  HS: 2.9E+12, 3, 9.8E+11, 4105.67, 0.000, 1.000
  S * D: 5E+09, 2, 2.5E+09, 10.53, 0.000, 0.989
  S * HS: 4.1E+12, 6, 6.9E+11, 2893.74, 0.000, 1.000
  D * HS: 6.5E+09, 3, 2.2E+09, 9.09, 0.000, 0.996
  S * D * HS: 2.7E+09, 6, 4.5E+08, 1.88, 0.080, 0.707
  Error: 2.3E+12, 9576, 2.4E+08
  Total: 2.3E+13, 9600
  Corrected total: 1.7E+13, 9599
  R-squared = .869 (adjusted R-squared = .868).
The real difference in time between CPLEX and EA on the one hand, and HC, TS, and SA on the other, occurs in 1024 zone problems; at smaller problem sizes the difference is much smaller. Also, the average and standard deviation of the TBS values for CPLEX deteriorate much more quickly as problem sizes grow larger.

Table 10. ANOVA results, dependent variable: time to best solution (HS includes CPLEX as a fifth category). Columns: sum of squares, degrees of freedom, mean square, F, significance, observed power (computed using alpha = .05).
  Corrected model: 2.70E+07, 29, 9.30E+05, 1803.87, .000, 1.000
  Intercept: 9.50E+06, 1, 9.50E+06, 18427.11, .000, 1.000
  S: 1.61E+07, 2, 8.03E+06, 15571.53, .000, 1.000
  D: 1.09E+03, 1, 1.09E+03, 2.12, .146, .307
  HS: 4.04E+06, 4, 1.01E+06, 1956.92, .000, 1.000
  S * D: 2.63E+03, 2, 1.32E+03, 2.55, .078, .512
  S * HS: 6.86E+06, 8, 8.57E+05, 1662.47, .000, 1.000
  D * HS: 4.83E+03, 4, 1.21E+03, 2.34, .052, .684
  S * D * HS: 1.30E+04, 8, 1.62E+03, 3.15, .001, .970
  Error: 6.17E+06, 11,970, 5.16E+02
  Total: 4.26E+07, 12,000
  Corrected total: 3.31E+07, 11,999
  R-squared = .814 (adjusted R-squared = .813).

Table 11. Post hoc analysis (Tamhane's test) for the factor heuristic search method, dependent variable time to best solution. Each entry gives the mean difference (I − J), standard error, significance, and 95% confidence interval; the reverse comparisons are the sign-reversed mirror images, and * marks differences significant at the .05 level.
  HC vs TS: .36433, .544182, .999, [−1.15988, 1.88854]
  HC vs SA: .09662, .547622, 1.000, [−1.43722, 1.63046]
  HC vs EA: −39.31011*, 1.503689, .000, [−43.52328, −35.09694]
  HC vs CPLEX: −35.04816*, 1.626450, .000, [−39.60536, −30.49096]
  TS vs SA: −.26771, .545975, 1.000, [−1.79694, 1.26152]
  TS vs EA: −39.67444*, 1.503090, .000, [−43.88593, −35.46294]
  TS vs CPLEX: −35.41248*, 1.625896, .000, [−39.96814, −30.85683]
  SA vs EA: −39.40673*, 1.504339, .000, [−43.62171, −35.19175]
  SA vs CPLEX: −35.14478*, 1.627050, .000, [−39.70366, −30.58590]
  EA vs CPLEX: 4.26195, 2.146738, .383, [−1.75090, 10.27480]

The post hoc Tamhane's analysis in Table 11 shows that there is a significant difference between
the population-based meta-heuristic, EA, and the three single-chromosome meta-heuristics. As expected, EA is much slower than HC, TS, and SA. Even though TS is the fastest meta-heuristic, it is not significantly faster than the other two single-chromosome meta-heuristics (SA and HC). The single-chromosome meta-heuristics outperform CPLEX significantly, whereas the difference between EA and CPLEX is not statistically significant. Finally, Table 12 shows that as the problem size increases, TBS increases significantly (alpha < 0.001).

The hypothesis (H2) that the four heuristic search methods solve problems with different values of TBS and QOS can be partially accepted. EA is statistically different from the other three meta-heuristics both in TBS (worse) and in QOS (better), while the other three meta-heuristics are statistically different only under certain conditions (e.g., problem size). The hypothesis (H1) that problems with uniformly distributed demand are solved with better QOS and smaller TBS values than problems with non-uniformly distributed demand can also be accepted: the demand distribution does make a difference in the performance of the meta-heuristic methods, but it does not affect CPLEX.

The process of developing these different implementations of the meta-heuristics gave the researchers some insights into the problem domain and certain operator settings. It appears that the neighborhood size and the quality of the initial (starting) solution (vector) have a significant effect on the performance of the algorithms.

The effectiveness of the low-order mutation operator designed for the four meta-heuristics depends heavily on the size of the neighborhood. In this research the neighborhood was defined as the zones immediately adjacent to a given zone, as illustrated in Fig. 3. In preliminary tests, the neighborhood size was instead set equal to n, the number of zones in the problem grid, and an entire set of tests for all meta-heuristics and all problem sizes was run. The quality of solutions improved noticeably for all the meta-heuristics, with the average quality of solution very close to 100% of optimum and the standard deviation reduced to less than 0.2% in the worst case. However, the time needed to run the algorithms became prohibitive, especially for 1024 zone problems. Future research might attempt to estimate the optimal neighborhood size for this problem domain.

While running our initial experiments, we also noticed that the quality of the initial vector makes a tremendous difference in the final quality of the solutions for TS and SA. It is also important for HC, and its impact is felt every time HC performs a random restart. The average fitness value of a randomly generated vector is approximately 80% of the optimal solution. An effective way to promote good results for TS, SA, and HC might therefore be to generate, say, 50 vectors randomly and take the best of them as the starting vector, thereby increasing the average value of the initial vector to approximately 90%. This method could also be used by HC as a modified random-restart procedure.

Table 12. Post hoc analysis (Tamhane's test) for the factor size, dependent variable time to best solution. Each entry gives the mean difference (I − J), standard error, significance, and 95% confidence interval; * marks differences significant at the .05 level.
  64 zone vs 256 zone: −3.21646*, .066322, .000, [−3.37487, −3.05804]
  64 zone vs 1024 zone: −79.15633*, 1.031603, .000, [−81.62056, −76.69210]
  256 zone vs 1024 zone: −75.93987*, 1.033314, .000, [−78.40818, −73.47156]
Overall, TS and SA on average find their best solutions in the least amount of time and with relatively
small variability, as indicated by the standard deviations of the computing times (Table 5). With the exception of EA, the meta-heuristics significantly outperform CPLEX as the problem size grows, both in average times and in standard deviations. Indeed, the relatively large standard deviations of the CPLEX times reflect the fact that these are NP-hard problems: for any given problem instance it is possible for ILP solvers such as CPLEX to take an extraordinary amount of time [1], and therefore ILP solvers are not viable alternatives for time-critical applications. This is especially important for particularly large problems (1024 zones), where dynamic redeployment practices demand fast and accurate solutions. Under the most realistic of all scenarios, 1024 zones with non-uniformly distributed demand, TS performs best with an average time of 35.75 seconds and a standard deviation of 13.60 seconds, which implies that a 1-minute limit on the delay to find the best redeployment strategy can be met with a probability of about 96.25% (assuming normally distributed solution times: z = (60 − 35.75)/13.60 ≈ 1.78 and Φ(1.78) ≈ 0.9625).

6. Conclusions

The contribution of this paper is threefold. First, to our knowledge this is the first time that multiple meta-heuristics have been systematically developed and applied to the maximum expected covering location problem (MEXCLP). This allowed us to objectively evaluate the performance of various meta-heuristics on a given problem. There was a concerted effort to code each meta-heuristic in a like manner and to utilize like procedures so that the results would be meaningful and generalizable. Specifically, the meta-heuristics used a common data representation, a common evaluation function, and a common mutation operator, and the initial vector(s) of all meta-heuristics were randomly generated. Second, we confirm and strengthen the earlier findings reported in [1] that an EA can produce high quality solutions; however, non-EA approaches such as TS can be used to solve large-scale implementations of MEXCLP with a high degree of confidence in QOS, in less time. Finally, this research is unique in that it is an experimentally designed comparative study. The results of the four meta-heuristics are statistically compared, and the mean and standard deviation are reported for each heuristic for six different problem configurations (i.e., 2 distribution types × 3 problem sizes). The significance of the statistical tests between meta-heuristics and of their interactions with the other variables, using ANOVA, is also reported. This allows for a clear, unbiased assessment of the performance of each meta-heuristic for this problem.

The results suggest that any one of the four meta-heuristics could be used for this problem domain, although there are pros and cons to each meta-heuristic examined. However, it is important for the successful implementation of EA, SA, and TS that the parameters be tuned properly. For HC, the only parameter to be tuned is the size of the neighborhood; this makes HC very simple to use. Overall, TS and SA give very good results with minimal computational effort (time), particularly for large problems.

As can be seen from our implementation of the four meta-heuristic methods, common operators were developed that can be used in various applications. In addition, as we demonstrated with SA, parameters need not be static; they can evolve dynamically as an algorithm runs, which eliminates the necessity of running a plethora of preliminary tests to identify optimal parameter values.
Trying to identify optimal static parameter values can be problematic, since there is no guarantee that these values will remain optimal across different implementations of a problem or different problem sizes. Therefore, we support dynamically adapting parameter values to the state of the algorithm through the use of feedback loops.

Another variation that might warrant investigation is the use of the crossover operator within HC: the idea is to apply the crossover operator to the chromosomes recorded as the best solutions found in different runs to create new search areas, in lieu of the random-restart operator. It is also possible to merge the different strengths of these meta-heuristics by having them cooperate with each other. One idea might be to run EA for a short time in order to obtain a population of good solutions, and then randomly select chromosomes from that population as the initial chromosomes for HC, SA, or TS.
Finally, this experiment focused on the different exploratory pressures unique to each meta-heuristic while keeping the exploitative pressure constant. Future research might explore the impact of different exploitative pressures, for example local search methods such as vertex substitution [15], and compare the results with those of the greedy neighborhood search, i.e., the low-order mutation operator, currently used.

References

[1] H. Aytug, C. Saydam, Solving large-scale maximum expected covering location problems by genetic algorithms: A comparative study, European Journal of Operational Research 141 (2002) 480–494.
[2] H. Aytug, M. Khouja, F.E. Vergara, A review of the use of genetic algorithms to solve production and operations management problems, International Journal of Production Research 41 (17) (2003) 3955–4009.
[3] M. Ball, L. Lin, A reliability model applied to emergency service vehicle location, Operations Research 41 (1993) 18–36.
[4] R.S. Barr, B.L. Golden, J.P. Kelly, M.G.C. Resende, W.R. Stewart, Designing and reporting on computational experiments with heuristic methods, Journal of Heuristics 1 (1) (1995) 9–32.
[5] J. Beasley, Lagrangean heuristics for location problems, European Journal of Operational Research 65 (1993) 383–399.
[6] J.E. Beasley, P.C. Chu, A genetic algorithm for the set covering problem, European Journal of Operational Research 94 (1996) 392–404.
[7] S. Benati, G. Laporte, Tabu search algorithms for the (r|Xp)-medianoid and (r|p)-centroid problems, Location Science 2 (1994) 193–204.
[8] L. Brotcorne, G. Laporte, F. Semet, Fast heuristics for large scale covering location problems, Computers and Operations Research 29 (2002) 651–665.
[9] L. Brotcorne, G. Laporte, F. Semet, Ambulance location and relocation models, European Journal of Operational Research 147 (2003) 451–463.
[10] F.Y. Chiyoshi, R.D. Galvao, A statistical analysis of simulated annealing applied to the p-median problem, Annals of Operations Research 96 (2000) 61–74.
[11] R. Church, C. ReVelle, The maximal covering location problem, Papers of the Regional Science Association 32 (1974) 101–118.
[12] CPLEX, Using the CPLEX Callable Library, CPLEX Optimization, Inc., Incline Village, NV, 1995.
[13] M.S. Daskin, Application of an expected covering model to emergency medical service system design, Decision Sciences 13 (1982) 416–439.
[14] M.S. Daskin, A maximal expected covering location model: Formulation, properties, and heuristic solution, Transportation Science 17 (1983) 48–69.
[15] M.S. Daskin, Network and Discrete Location, John Wiley & Sons Inc., New York, 1995.
[16] R.D. Galvao, F.Y. Chiyoshi, R. Morabito, Towards unified formulations and extensions of two classical probabilistic location models, Computers and Operations Research 32 (1) (2005) 15–33.
[17] M. Gendreau, G. Laporte, F. Semet, Solving an ambulance location model by tabu search, Location Science 5 (2) (1997) 75–88.
[18] M. Gendreau, G. Laporte, F. Semet, A dynamic model and parallel tabu search heuristic for real time ambulance relocation, Parallel Computing 27 (2001) 1641–1653.
[19] F. Glover, Future paths for integer programming and links to artificial intelligence, Computers and Operations Research 13 (1986) 533–549.
[20] F. Glover, M. Laguna, Tabu Search, Kluwer, Boston, MA, 1997.
[21] J.B. Goldberg, Operations research models for the deployment of emergency services vehicles, EMS Management Journal 1 (1) (2004) 20–39.
[22] J.H.
Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.
[23] J. Jaramillo, J. Bhadury, R. Batta, On the use of genetic algorithms to solve location problems, Computers and Operations Research 29 (2002) 761–779.
[24] S. Kirkpatrick, C.D. Gelatt Jr., M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[25] R.C. Larson, A hypercube queuing model for facility location and redistricting in urban emergency services, Computers and Operations Research 1 (1974) 67–95.
[26] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, third ed., Springer, New York, 1999.
[27] Z. Michalewicz, D.B. Fogel, How to Solve It: Modern Heuristics, Springer-Verlag, 2000.
[28] M.J. Norusis, SPSS 10.0 Guide to Data Analysis, Prentice-Hall, Inc., Upper Saddle River, New Jersey, 2000.
[29] I.H. Osman, G. Laporte, Metaheuristics: A bibliography, Annals of Operations Research 63 (1996) 513–628.
[30] S.H. Owen, M.S. Daskin, Strategic facility location: A review, European Journal of Operational Research 111 (1998) 423–447.
[31] C. ReVelle, K. Hogan, The maximum availability location problem, Transportation Science 23 (1989) 192–200.
[32] S. Russell, P. Norvig, Artificial Intelligence: A Modern Approach, Prentice-Hall, New Jersey, 1995.
[33] C. Saydam, J. Repede, T. Burwell, Accurate estimation of expected coverage: A comparative study, Socio-Economic Planning Sciences 28 (2) (1994) 113–120.
[34] C. Saydam, H. Aytug, Accurate estimation of expected coverage: Revisited, Socio-Economic Planning Sciences 37 (2003) 69–80.
[35] D. Schilling, D. Elzinga, J. Cohon, R. Church, C. ReVelle, The TEAM/FLEET models for simultaneous facility and equipment siting, Transportation Science 13 (1979) 163–175.
[36] D.A. Schilling, V. Jayaraman, R. Barkhi, A review of covering problems in facility location, Location Science 1 (1) (1993) 25–55.
[37] C. Toregas, R. Swain, C. ReVelle, L. Bergman, The location of emergency service facilities, Operations Research 19 (1971) 1363–1373.
[38] F.E. Vergara, M. Khouja, Z. Michalewicz, An evolutionary algorithm for optimizing material flow in supply chains, Computers and Industrial Engineering 43 (2002) 407–421.