A robust multi-criteria optimization approach
A. Kunjur and S. Krishnamurty
utility function. A salient feature of utility analysis is that it can be used to evaluate alternatives
under uncertainty. Generating methods [8], on the other hand, have been developed to enumerate
the exact non-inferior set or an approximation of it. Here a feasible solution to a multi-criteria
optimization problem is considered non-inferior (Pareto-optimal) if there exists no other feasible
solution that will yield an improvement in one criterion without causing a degradation in at least
one other criterion. This information is then presented to the decision maker, who is required to
select a design that is most suitable. A major drawback of this method is that most real world
problems are too large to allow the exact non-inferior set to be found and, even if it were generated,
the set would include too many alternatives for the decision maker's consideration. The weighting
and constraint methods [8] are the most widely used generating techniques. They operate by
converting the multiple objectives into a single-criterion using certain parameters, which are then
varied to obtain different Pareto-optimal solutions.
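The weighting method described above can be illustrated with a small sketch: two conflicting objectives are scalarized as w·f1 + (1−w)·f2 and the weight w is swept to expose different Pareto-optimal solutions. The objective functions and the discretized candidate set below are hypothetical, chosen only to show the mechanics.

```python
# Sketch of the weighting method: combine two objectives into one criterion
# w*f1 + (1-w)*f2 and vary w to obtain different Pareto-optimal solutions.
# The objectives and the discrete candidate set are illustrative only.

def f1(x):                      # first (hypothetical) objective
    return x ** 2

def f2(x):                      # second objective, conflicting with f1
    return (x - 2) ** 2

candidates = [i * 0.1 for i in range(31)]   # discretized design space [0, 3]

pareto_points = set()
for w in [0.0, 0.25, 0.5, 0.75, 1.0]:       # sweep the weighting parameter
    best = min(candidates, key=lambda x: w * f1(x) + (1 - w) * f2(x))
    pareto_points.add(round(best, 3))

print(sorted(pareto_points))    # each minimizer is a Pareto-optimal design
```

Each weight setting yields one non-inferior point, which is why many runs with varied parameters are needed to approximate the full set.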
In this paper, Taguchi's method has been further developed to handle multiple objective product
design problems using generative techniques and a rational approach to treat constraints in such
problems is presented. The application of the Taguchi method to constrained problems has primarily been
dealt with by introducing a penalty function into the performance parameter [7]. This is not possible
for most problems as the inclusion of a penalty function may make the performance parameter
non-additive or even discontinuous. Furthermore, the optimization of multiple performance
parameters by using weighting functions to arrive at a single measure of performance can lead to
erroneous results because of the difficulty involved in making the trade-off decisions to arrive at
this single expression.
This paper addresses these problems and proposes a methodology that further develops
Taguchi's method to incorporate multiple objectives and constraints in product design. In this
methodology, statistical analysis (ANOVA) concepts are used to obtain a well diversified, though
not necessarily exhaustive, set of Pareto-optimal solutions. It is expected that this integrated
procedure will provide a rational platform for the systematic identification of robust designs in a
multiple objective domain.
ROBUST DESIGN FOR A SINGLE RESPONSE
The Taguchi concept of robust design is based on maximizing performance measures called
signal-to-noise ratios by running a partial factorial set of experiments using orthogonal arrays. The
signal-to-noise ratio is typically given by
S/N = -10 log[MSD]   (1)

where MSD refers to the mean square deviation of the objective function.
The S/N ratio aims at achieving the separability of design factors into control factors and signal
factors. A robust optimum design is identified by finding the optimum setting of the control factors
to reduce variation and then adjusting the signal factors to shift the mean.
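For a smaller-the-better characteristic, for instance, the MSD in equation (1) is simply the mean of the squared responses, so the S/N ratio can be computed directly from the replicated observations of one experiment. The sample data below are illustrative.

```python
import math

def sn_smaller_the_better(responses):
    """S/N = -10 log10(MSD), with MSD the mean square deviation of the
    response from its target (zero for a smaller-the-better characteristic)."""
    msd = sum(y ** 2 for y in responses) / len(responses)
    return -10.0 * math.log10(msd)

# Replicated observations of one experiment under noise (illustrative values)
print(sn_smaller_the_better([0.9, 1.1, 1.0]))  # higher S/N means more robust
```

A design whose responses cluster tightly near the target yields a higher S/N ratio, which is what the robust optimum maximizes.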
Additivity of factor effects is an important consideration in statistical design of experiments. It
ensures that the performance measure is not adversely affected by the non-linearities of the
objective function. Note that this form of the S/N ratio does not guarantee separability and
additivity for all types of objective functions, and thus the use of transformations such as
Ω-transformations may become necessary [15, 16].
The application of the Taguchi concept to product design, which is usually characterized by
continuous design variables, requires that the design space be discretized by splitting each variable
range into a desired number of intervals. The non-linear effects of the response (i.e. objective
function) can be captured by using at least three levels (two intervals) for each factor. Standard
orthogonal arrays, developed by Taguchi, are used to design the experiments (inner array) and to
simulate the noise effects (outer array), if any. The objective function is evaluated, and the S/N
ratio or a suitably transformed performance measure (henceforth generalized to S/N) is determined
for all the experiments. ANOVA is performed on this S/N ratio to determine the effects of different
factors on performance. Specifically, the sums of squares, mean squares, F-values, and percentage
contributions are computed to determine which factors contribute significantly to the variance and
mean response. This information is then used to predict the optimum settings for the various design
factors.
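The ANOVA step above can be sketched for a single factor: the sum of squares of its level means about the grand mean measures its contribution to the variation of the S/N ratio. The tiny two-factor layout and S/N values below are illustrative, not taken from the paper's arrays.

```python
# Sketch of the ANOVA step: for each factor, the sum of squares of its level
# means about the grand mean indicates that factor's contribution to the
# variation of the S/N ratio. Layout and S/N values are illustrative.

def factor_sum_of_squares(levels, sn):
    """levels[i] is the level of this factor in experiment i."""
    grand = sum(sn) / len(sn)
    ss = 0.0
    for lv in set(levels):
        ys = [y for l, y in zip(levels, sn) if l == lv]
        mean = sum(ys) / len(ys)
        ss += len(ys) * (mean - grand) ** 2
    return ss

# A tiny 4-run, 2-level layout for two factors (columns of an L4 array)
col_a = [1, 1, 2, 2]
col_b = [1, 2, 1, 2]
sn = [-28.0, -41.5, -48.7, -53.9]

ss_a = factor_sum_of_squares(col_a, sn)
ss_b = factor_sum_of_squares(col_b, sn)
total = ss_a + ss_b
print(f"A: {100 * ss_a / total:.1f}%  B: {100 * ss_b / total:.1f}%")
```

The percentage contributions obtained this way are the quantities tabulated in the "% Cont" columns of the ANOVA tables that follow.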
ROBUST DESIGN FOR MULTIPLE RESPONSES WITH CONSTRAINTS
Taguchi's method deals with a single objective function and the only constraints it can handle
are the variable bounds. To handle multiple criteria problems, an alternative is to formulate the
multiple objective optimization problem as a weighted combination of single objective functions.
However, such an approach will fail to adequately represent the actual design problem and will
have several shortcomings, as described earlier. It is then apparent that Taguchi's method, in its
as-is form, does not handle multiple objective formulations or constraints. This work presents a
methodical approach based on Taguchi's concept to address multi-objective problems where each
objective is treated independently and a rational procedure is employed to identify a design space
that encompasses a diverse set of Pareto-optimal designs. In this procedure, ANOVAs for each of
the objectives and constraints (responses) are performed simultaneously, and then the design space
is systematically pruned and refined based on the effect of the various design variables on the
multiple responses. The purpose of the ANOVA tables is to help distinguish the robust designs
from the non-robust ones. The underlying principle in this scheme is the identification of the
optimum of each of the objectives using its corresponding ANOVA results and subsequent isolation
of the range indicated by the individual optima for later evaluation of Pareto-optimal designs.
RMCO PROCEDURE
A flow-chart describing this technique of robust multiple criterion optimization (RMCO) is
shown in Fig. 1. The basic steps involved in the procedure are the identification of the feasible
domain (based on constraint satisfaction), the development of optimality condition for each
objective, and finally the generation of the designs in the Pareto-optimal solution set. Towards this
end, a two-step iterative procedure is used to identify the non-inferior set. Each iteration involves:
(1) Elimination of all factor levels that cause any of the constraints to be violated.
(2) Identification of those factor levels that significantly affect one or more objectives.
The results of these steps are then used towards determining the Pareto-optimal set. The
development and implementation of the methodical procedure involving these two steps and the
rationale behind it are described below.
In the first step, the ANOVA of the constraint function is used to identify the factor levels that
cause a constraint violation. It should be noted here that the ANOVA of any experimental data
is instrumental in identifying the extent to which each factor affects the response. The percentage
contribution (or the F-statistic) of each of the factors is a good indicator of how much any
particular factor contributes to the violation of the constraint function. All factors that have a high
percentage contribution, as compared to that of the other factors, are identified. For this purpose,
the designer may decide to use the factors with the highest percentage contributions or, as an
alternative, a cut-off value may be set. The levels of these factors that cause the constraint violation
are identified. If the mean effect of this level on the constraint function is beyond the constraint
limit, then this level is eliminated from further consideration. In this manner, the non-feasible
designs are filtered out from the initial design space.
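The screening rule just described reduces to a simple comparison: a factor level survives only if its mean effect on the constraint response does not exceed the constraint limit. The sketch below uses the constraint mean effects of the beam example's Table 2(c) and its transformed limit of 24.08 as sample data.

```python
# Sketch of the constraint-screening step: a factor level is eliminated when
# its mean effect on the constraint response exceeds the constraint limit.
# Mean-effect values here are taken from Table 2(c) of the beam example.

def eliminate_levels(mean_effects, limit):
    """mean_effects: {factor: {level: mean constraint response}}.
    Returns the surviving levels for each factor."""
    surviving = {}
    for factor, levels in mean_effects.items():
        surviving[factor] = {lv for lv, m in levels.items() if m <= limit}
    return surviving

mean_effects = {
    "x1": {1: 38.0, 2: 17.5, 3: 10.0, 4: 8.7},
    "x4": {1: 25.2, 2: 20.5, 3: 13.7, 4: 14.8},
}
print(eliminate_levels(mean_effects, limit=24.08))
# level 1 of x1 and level 1 of x4 are screened out
```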
The ANOVA of the objective functions is then considered, noting that the design space is reduced
due to the elimination of levels in the previous step. In this step the best level of a factor is identified
for each objective function. The range indicated by the various optimum levels of any factor for
the different objective functions will form the new bounds in the next iteration. In case the same
level is the best for all objective functions, then a small neighborhood around that level is bracketed
to obtain the levels for the next iteration. In some cases the objectives may be mutually conflicting,
i.e. the best design for one objective is the worst design for another one and vice versa. In such
situations, there is no refinement of the design space as all the considered levels will be bracketed
and thus a more elaborate procedure is required to identify the better levels. This topic will be
covered in the next section.
The procedure outlined above does not guarantee an exhaustive non-inferior set, but it should
be noted that enumeration of such a set may not be preferred in most practical design situations
as it will only compound the complexity of the final selection process. On the other hand, providing
a representative set in a systematic manner as shown here will greatly facilitate the identification
of appropriate designs as either the final design or for further refinement.
TREATMENT OF CONFLICTING OBJECTIVES
To handle design problems with conflicting objective functions, a novel procedure has been
developed for iteratively pruning the design space based on the identification of the relative
significance of design factors on different objectives. The purpose here is to retain all levels
of a factor that significantly affect an objective and eliminate all levels that have a negligible
influence. For this purpose, factors are classified into three categories: dominant, significant and
insignificant factors. A factor is assumed to have a dominant effect on an objective function if its
percentage contribution (or F-value) is considerably higher than that of all other factors for that
objective.

Fig. 1. Flowchart showing the RMCO procedure.

Fig. 2. Beam design problem.

A simple procedure to determine dominant factors would be to find the factor that has
a percentage contribution greater than the sum of all other factors for the concerned objective
function. A factor has a significant effect if the percentage contribution is greater than a designer
specified cut-off value (or if the F-value is greater than the F-value identified from a designer
specified confidence from the F-tables).
The factor levels violating the constraint are first eliminated, as before, and the design space is
then further reduced according to the following rules:
(1) If a factor has a significant effect on all objective functions, then all the levels that optimize
at least one objective are selected.
(2) If a factor has a dominant effect on a single objective, the factor level that optimizes this
objective is selected regardless of its significance on other objectives.
(3) If a factor has an insignificant effect on all the objectives, then the designer's discretion is used
to determine the objective most affected by this factor, and the best level for the factor is identified.
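The classification on which these rules rest can be sketched directly: a factor is dominant for an objective if its percentage contribution exceeds the sum of those of all other factors, and significant if it exceeds a designer-specified cut-off. The contributions below echo the deflection objective of the beam example (cf. Table 2(b)); the 1% cut-off is an assumed designer choice.

```python
# Sketch of the factor classification used by the pruning rules: "dominant"
# if a factor's percentage contribution exceeds the sum of all others for
# the objective, "significant" if it exceeds a designer-chosen cut-off.

def classify(contribs, cutoff):
    """contribs: {factor: percentage contribution for one objective}."""
    labels = {}
    for f, c in contribs.items():
        others = sum(v for g, v in contribs.items() if g != f)
        if c > others:
            labels[f] = "dominant"
        elif c > cutoff:
            labels[f] = "significant"
        else:
            labels[f] = "insignificant"
    return labels

# Percentage contributions for the deflection objective (cf. Table 2(b))
contribs = {"x1": 92.8, "x2": 3.8, "x3": 0.3, "x4": 2.8}
print(classify(contribs, cutoff=1.0))
```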
This procedure thus attempts to identify non-dominated designs, but note that it does not ensure
the elimination of all dominated designs from the final set. The proportion of dominated designs
in the final set depends to a large extent on the designer determined cut-off value for identifying
significant factors. A high value for this cut-off indicates that the designer does not want any
dominated designs in the final set. In this case, the designer is willing to eliminate some
non-dominated designs to reduce the cost of processing the final set. Alternatively, a low cut-off
value will result in a large final design set, and consequently a sizable number of dominated designs,
that have to be filtered out by further processing. Depending on the choice of the designer, further
processing would mean either another iteration on the reduced design space or filtering out the
dominated designs using a standard procedure such as the technique of dominated
approximations [17]. Thus the cut-off value is a trade-off between the number of non-inferior
designs that the designer requires and the amount of effort he/she is willing to put in to identify
these.
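The filtering of dominated designs mentioned above can be sketched with the standard dominance test: a design dominates another if it is no worse in every objective and strictly better in at least one (all objectives minimized here). The (area, deflection) pairs below are illustrative.

```python
# Sketch of a post-processing filter that removes dominated designs from the
# final set. A design dominates another if it is no worse in every objective
# and strictly better in at least one (all objectives minimized here).

def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (area, deflection) pairs; the third design is dominated by the first
designs = [(113.9, 0.049), (163.0, 0.030), (170.0, 0.050)]
print(non_dominated(designs))  # keeps the first two designs
```

Note that this exhaustive pairwise test is quadratic in the set size, which is acceptable for the small representative sets the RMCO procedure produces.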
The application of robust design concepts to multi-criteria optimization is illustrated with two
mechanical design examples. The first one is a beam design example with conflicting objectives,
i.e. weight and beam deflection. The second example is a mechanism dimensional synthesis problem
with path deviation and mechanical advantage as the primary objectives.
BEAM DESIGN EXAMPLE
The application of the robust design methodology to multi-criteria optimization problems with
conflicting objectives is demonstrated by a beam design problem [10] involving two objectives and
a single constraint. The objective of this problem is to determine the dimensions of a beam (Fig.
2), that minimizes the cross section area for a given beam length and the deflection of the beam
under given loads. The various parameter values for the problem are:
Permissible bending stress of the beam material is 16 kN/cm^2.
Young's Modulus of Elasticity (E) is 2 × 10^4 kN/cm^2.
Maximal bending forces P = 600 kN and Q = 50 kN.
Length of the beam (L) is 200 cm.
The problem can be mathematically expressed as follows:

Minimize Cross Section Area = f1(x) = 2 x2 x4 + x3 (x1 - 2 x4)

Minimize Vertical Deflection = f2(x) = P L^3 / (48 E I)

Subject to the constraint

Maximum Stress = f3(x) = 180,000 x1 / [x3 (x1 - 2 x4)^3 + 2 x2 x4 (4 x4^2 + 3 x1 (x1 - 2 x4))]
                       + 15,000 x2 / [(x1 - 2 x4) x3^3 + 2 x4 x2^3] ≤ 16

where I is the Moment of Inertia of the beam cross section.

The geometric constraints are:

10 ≤ x1 ≤ 80,  10 ≤ x2 ≤ 50,  0.9 ≤ x3 ≤ 5,  0.9 ≤ x4 ≤ 5.
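The responses above can be evaluated directly for a candidate design. In the sketch below, the moment of inertia of the I-section is assumed to be one twelfth of the bracketed section term in the first stress denominator; with that assumption the deflection of the first design in Table 3 is reproduced. The stress expression follows the constraint as written above.

```python
# Evaluation of the three beam responses for one candidate design, following
# the expressions above (P = 600 kN, L = 200 cm, E = 2e4 kN/cm^2). The
# moment of inertia I is assumed to be the bracketed section term / 12.

def beam_responses(x1, x2, x3, x4, P=600.0, L=200.0, E=2.0e4):
    area = 2 * x2 * x4 + x3 * (x1 - 2 * x4)
    section = (x3 * (x1 - 2 * x4) ** 3
               + 2 * x2 * x4 * (4 * x4 ** 2 + 3 * x1 * (x1 - 2 * x4)))
    I = section / 12.0                       # assumed I of the cross section
    deflection = P * L ** 3 / (48 * E * I)
    stress = (180_000 * x1 / section
              + 15_000 * x2 / ((x1 - 2 * x4) * x3 ** 3 + 2 * x4 * x2 ** 3))
    return area, deflection, stress

# First design of the non-inferior set in Table 3: x = (80, 10, 0.9, 2.3)
a, d, s = beam_responses(80, 10, 0.9, 2.3)
print(a, d)   # area 113.86 cm^2, deflection 0.049213 cm, as in Table 3
```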
To capture non-linear effects of the various responses, four levels were initially chosen for each
of the four design variables, within the specified range. Thus, neglecting interaction effects, the total
number of degrees of freedom of the system is 12. The L16 orthogonal array was used to design
the experiments for this problem. It was found from the mean effects table that both the area and
the deflection equations satisfy the additivity condition and thus do not require any transformation
or study of interactions. The constraint equation was found to be non-additive, but this does not
affect the result as the analysis of constraint equations is done only to eliminate invalid designs
and not to predict constraint values for the optimum design. Dimensional variations in the design
variables were treated as the appropriate noise factors, and an L9 array was used to simulate these
variations. The experiments are set up as shown in Table 1.
The objective functions are computed for each combination of design variables and for each
combination of noise values. The S/N ratios for the various experiments are shown in Table 1.
The mean response of each factor level is then computed for each objective function and constraint
equation and ANOVA is performed. These are listed in Table 2.
It is seen from the ANOVA of the constraint equation [Table 2(c)] that the factors x1, x2, and
x4 have a significant effect on the constraint, assuming a 1% cut-off for significance. The levels
of these factors that maximize the constraint value are found from Table 2(c) to be the first level
for all three factors. It is further observed that the mean effects corresponding to these levels of
x1 and x4 are above the transformed constraint limit given by 10 log(16^2) = 24.08. Consequently,
these two levels are eliminated from further consideration. The ANOVA of the objective functions
[Tables 2(a) and 2(b)] shows that all four factors (x1-x4) have a significant effect on the cross
section area (objective 1), whereas only the factors x1, x2 and x4 significantly influence the deflection
of the beam (objective 2). Additionally, x1 is a dominant factor for beam deflection as its percentage
contribution for deflection (92.7%) is much greater than that of any other factor. The level of x1
Table 1. Experiment set-up and S/N ratios
Ex. no.  x1    x2    x3   x4   SN1     SN2     SN3
1        10    10    0.9  0.9  -28.09  -21.62  48.89
2        10    23.3  2.3  2.3  -41.56  -9.55   38.49
3        10    36.7  3.6  3.6  -48.77  -4.45   33.72
4        10    50    5    5    -53.98  -1.59   30.98
5        33.3  10    2.3  3.6  -42.41  11.75   15.36
6        33.3  23.3  0.9  5    -48.10  19.66   17.65
7        33.3  36.7  5    0.9  -46.99  15.66   20.45
8        33.3  50    3.6  2.3  -50.46  21.93   16.52
9        56.7  10    3.6  5    -48.57  25.80   11.15
10       56.7  23.3  5    3.6  -52.37  30.57   3.18
11       56.7  36.7  0.9  2.3  -46.68  28.66   11.72
12       56.7  50    2.3  0.9  -46.70  26.17   14.04
13       80    10    5    2.3  -52.33  33.91   15.35
14       80    23.3  3.6  0.9  -50.20  32.42   17.50
15       80    36.7  2.3  5    -54.45  41.32   -0.46
16       80    50    0.9  3.6  -52.58  40.90   2.48
Table 2. Analysis of variance for all objectives and constraints

(a) Objective 1
Mean effects
     Level 1    Level 2    Level 3    Level 4    SS        DOF  MS        F0        % Cont
x1   -43.0977   -46.98891  -48.57791  -52.439    179.5706   3   59.85686  11.14517  26.25771
x2   -42.89953  -48.05369  -49.22116  -50.92913  143.5511   3   47.85038  8.9096    20.47161
x3   -43.86072  -46.28051  -49.49674  -51.46554  136.5584   3   45.51947  8.47559   19.34831
x4   -42.99387  -47.80442  -49.03123  -51.27399  146.7245   3   48.90818  9.106559  20.98138
Residual                                          16.11196
Error                                             16.11196   3   5.370654            12.94099
Total                                             622.5166  15                       100

(b) Objective 2
     Level 1    Level 2   Level 3   Level 4   SS        DOF  MS        F0        % Cont
x1   -9.30278   17.24992  27.80081  37.1364   4832.267   3   1610.756  2172.463  92.76362
x2   12.46102   18.27419  20.29889  21.85025  202.677    3   67.55899  91.11835  3.849803
x3   16.89814   17.42388  18.92493  19.6374   19.54822   3   6.516072  8.788375  0.332715
x4   13.15836   18.73512  19.69314  21.29773  150.1122   3   50.03741  67.4866   2.840268
Residual                                       2.224327
Error                                          2.224327   3   0.741442            0.213597
Total                                          5206.829  15                       100

(c) Constraint
     Level 1   Level 2   Level 3   Level 4   SS        DOF  MS        F0        % Cont
x1   38.02051  17.49263  10.02144  8.718296  2196.467   3   732.8224  104.2603  80.33045
x2   22.68757  19.20507  16.35471  16.00554  115.3661   3   38.45536  5.471134  3.478278
x3   20.18324  16.85627  19.72158  17.49179  32.11156   3   10.70385  1.522862  0.406756
x4   25.2187   20.52064  13.68373  14.82981  343.4986   3   114.4995  16.29012  11.89481
Residual                                      21.08632
Error                                         21.08632   3   7.028774            3.889705
Total                                         2710.53   15                       100
that optimizes deflection (level 4) is thus singled out for further consideration, regardless of its effect
on the weight of the beam. The factors x2 and x4 have a significant effect on both objectives,
and therefore a range of levels is obtained for these factors by identifying the levels that optimize
each of the objective functions. The factor x3 has a significant effect only on the weight of the beam,
which is optimized by setting x3 to the first level, as seen from Table 2(a). The factor levels that
optimize the respective objectives, after eliminating the levels that violate the constraints, are shown
highlighted in Tables 2(a) and (b). Subsequently, the design space is narrowed down and the new levels
(bounds) for the design factors are identified as follows:

x1 = 80,  10 ≤ x2 ≤ 50,  x3 = 0.9,  2.3 ≤ x4 ≤ 5.
At this point, the designer has the option of either (1) performing another iteration on the
reduced space to further narrow down the search or (2) identifying the non-inferior set by
combining the various levels of each factor. In the case of another iteration, the reduced bounds
(as for x2 and x4), or the neighborhood (as for x1 and x3) of the predicted levels, can be used to
formulate new design levels. Alternatively, the combination of the various levels in the reduced
design space results in a design set as shown in Table 3. The graphical representation of the resulting
objective functions for all the non-dominated designs is shown in Fig. 3. All the designs in this
set, for this particular example, are Pareto-optimal designs. This will not be true in general and
the final solution set may require further processing to identify the non-dominated designs. Note
that a lower cut-off value would have resulted in fewer factor levels being eliminated and
consequently the final design set would have had a larger number of designs, possibly including
some dominated designs.
MECHANISM DIMENSIONAL SYNTHESIS
Typically, mechanism dimensional synthesis involves the minimization of structural error subject
to a set of size and geometric constraints such as Grashof and crank rocker conditions. For
example, in path generation problems, the coupler point is required to trace a path with minimum
Table 3. Non-inferior design set
Ex. no.  x1  x2    x3   x4   Area    Defl.     Stress
1        80  10    0.9  2.3  113.86  0.049213  20.41247
2        80  10    0.9  3.2  130.24  0.040216  13.59076
3        80  10    0.9  4.1  146.62  0.034253  9.955928
4        80  10    0.9  5    163     0.03002   7.719161
5        80  23.3  0.9  2.3  175.04  0.025778  0.185818
6        80  23.3  0.9  3.2  215.36  0.020007  0.487314
7        80  23.3  0.9  4.1  255.68  0.016491  0.590065
8        80  23.3  0.9  5    296     0.01413   0.629231
9        80  36.7  0.9  2.3  236.68  0.01742   1.760328
10       80  36.7  0.9  3.2  301.12  0.013282  1.447877
11       80  36.7  0.9  4.1  365.56  0.010832  1.241703
12       80  36.7  0.9  5    430     0.009215  1.09808
13       80  50    0.9  2.3  297.86  0.013179  1.858706
14       80  50    0.9  3.2  386.24  0.009959  1.45284
15       80  50    0.9  4.1  474.62  0.00808   1.20751
16       80  50    0.9  5    563     0.00685   1.044104
MIN                           113.86  0.00685
error relative to a given curve, specified by a set of points. The objective function is a measure of
the error between the path obtained and the desired path. This error is usually expressed as the
sum of the squares of the error at each point in the path. The objective function is evaluated by
determining the coordinates of the precision point at various positions of the mechanism for a
particular design and this is accomplished using basic geometric and trigonometric relations. The
minimum mechanical advantage for one complete cycle is also an important performance criterion
as it determines the amount of power that can be transmitted by the mechanism. This is generally
computed by performing a velocity analysis of the mechanism and is expressed as the ratio of input
to output velocity. The optimization problem for a four-bar mechanism is thus stated as follows
(Fig. 4):
Minimize F = Σ_{i=1..p} [(Pxi - Pxdi)^2 + (Pyi - Pydi)^2] / (2p)   (2)
Fig. 3. Graph of cross section area against deflection.

Fig. 4. Typical four-bar mechanism.
Maximize MA = Min[x1 ω0 / Vi]   (3)

where

i = position
p = number of points
ω0 = angular velocity of input link
Vi = velocity of coupler point at position i

Subject to

Grashof Condition: xL + xS ≤ xP + xQ
Crank Rocker: x1 < x0, x2, x3
Variable Bounds: xmin,j ≤ xj ≤ xmax,j
where Pxi, Pyi are the x and y coordinates of the obtained points, Pxdi, Pydi are the x and y
coordinates of the desired points, the xj are the link lengths, xL is the longest link length, xS is the
shortest link length, and xP and xQ are the lengths of the other two links of the four-bar. The Grashof
condition checks assemblability at all positions. The crank rocker constraint ensures that the crank is the
smallest link in the four-bar and hence can be rotated by external means. The desired curve is
specified as a set of coupler coordinates that the coupler point traces at discrete angular rotations
of the crank, i.e. it is a path synthesis problem with prescribed timing.
The variable bounds constraint is easily taken care of in Taguchi's method by ensuring that the
chosen levels are within the specified range. The treatment of the other two constraints requires
special consideration since the violation of either of them results in non-assemblability of the
mechanism or in a non-drivable crank. The crank of an assemblable four-bar mechanism has to
be the smallest link to satisfy the crank rocker constraint. This can be ensured by normalizing all
the design variables with respect to the crank and selecting variable bounds so that the length of
any link is greater than that of the crank. This procedure essentially reduces the crank length to
a constant (say unity) and each design represents a family of coupler curves obtained by varying
the crank length and proportionally modifying the design variables. The coupler curves generated
by the various designs of any family are similar in shape but vary in the magnitude of the coupler
points obtained. This is analogous to similar triangles (or any geometry) wherein two similar
triangles may be of different sizes. Moreover, two similar curves will have the same mechanical
advantage (objective function # 2) at analogous points since the ratio of input to output velocity
does not change for different crank lengths. Thus we now need a procedure to parametrically
compare two curves so that the amount of deviation (objective function # 1) can be quantified
irrespective of the size (crank size in particular) of the mechanisms generating them. This procedure
is briefly described in the following paragraph. It should be noted that we have conveniently taken
care of two of the constraints, and the third constraint, namely the Grashof criterion, needs to be
accounted for during the optimization process.
Kota [6] presents a methodology to parametrically compare two discretized symmetric curves
by computing the parametric length and angle at each of the coupler points for both the curves.
He then determines the deviation in the parametric angle between the two curves at specified
parametric lengths. The square of these deviations summed over all the points gives the total
deviation between the two curves. For the purposes of the problem considered in our example, this
procedure is modified to include non-symmetric curves and prescribed timing and is detailed below.
The first step is to obtain the coupler points of the generated curve and arbitrarily select a starting
point. The parametric angle at any other point is computed as the difference between the tangential
angle of the curve at this point and that at the starting point. The deviation, F, is then obtained
as follows:
F = √[ Σ_i (θi* - θi)^2 / n ]   (4)

where θi* and θi are the parametric angles of the generated and desired curves, respectively, at
position i of the coupler. In order to match the two curves irrespective of their position or
orientation, F is measured with every point on one of the curves as the starting point and the
minimum of these values gives the deviation between the two curves.
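The matching step just described can be sketched as follows: equation (4) is evaluated for every cyclic choice of starting point on the generated curve, and the minimum deviation is taken so that the match is independent of where the traversal begins. The angle sequences below are illustrative; in a full implementation the parametric angles would also be re-based at each candidate starting point.

```python
import math

# Sketch of the curve-matching step: the deviation of equation (4) is
# evaluated for every choice of starting point on the generated curve, and
# the minimum is taken. The angle sequences here are illustrative only.

def deviation(gen_angles, des_angles):
    """Equation (4): RMS deviation between parametric angle sequences."""
    n = len(des_angles)
    return math.sqrt(sum((g - d) ** 2
                         for g, d in zip(gen_angles, des_angles)) / n)

def best_match(gen_angles, des_angles):
    """Try every cyclic starting point on the generated curve."""
    n = len(gen_angles)
    return min(deviation(gen_angles[s:] + gen_angles[:s], des_angles)
               for s in range(n))

gen = [0.0, 0.5, 1.0, 1.5]     # parametric angles of the generated curve
des = [1.0, 1.5, 0.0, 0.5]     # same sequence, started two points later
print(best_match(gen, des))    # 0.0: the curves match after a cyclic shift
```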
Thus the prescribed-timing mechanism path synthesis problem has been reduced to a five-variable
problem with two objectives and one constraint as shown below (see Fig. 5):

Minimize F (as in 4)
Maximize MA (as in 3)

such that

xL + 1 ≤ xP + xQ.
The orientation of the curve has been accounted for in the objective function computation. As
a result, the coordinates of the crank-ground joint (x5, x6), the orientation of the ground link (x7),
and the crank angle (x8) are no longer variables (note that variable x9 has been changed to x5
in Fig. 5).
The standard L25 orthogonal array allows five levels for up to six factors and is considered
suitable for this problem. The levels of the five design factors along with the experiment layout
for the initial iteration are shown in Table 4. Uncertainty is deliberately introduced into the design
Fig. 5. Modified dimensional synthesis problem.
Table 4. Initial experiment set-up
     x1      x2      x3      x4      x5
1    1.5000  2.5000  2.5000  0.5000  0.0000
2    1.5000  3.2500  3.2500  1.5000  1.2571
3    1.5000  4.0000  4.0000  2.5000  2.5143
4    1.5000  4.7500  4.7500  3.5000  3.5968
5    1.5000  5.5000  5.5000  4.5000  4.8540
6    2.2500  2.5000  3.2500  2.5000  3.5968
7    2.2500  3.2500  4.0000  3.5000  4.8540
8    2.2500  4.0000  4.7500  4.5000  0.0000
9    2.2500  4.7500  5.5000  0.5000  1.2571
10   2.2500  5.5000  2.5000  1.5000  2.5143
11   3.0000  2.5000  4.0000  4.5000  1.2571
12   3.0000  3.2500  4.7500  0.5000  2.5143
13   3.0000  4.0000  5.5000  1.5000  3.5968
14   3.0000  4.7500  2.5000  2.5000  4.8540
15   3.0000  5.5000  3.2500  3.5000  0.0000
16   3.7500  2.5000  4.7500  1.5000  4.8540
17   3.7500  3.2500  5.5000  2.5000  0.0000
18   3.7500  4.0000  2.5000  3.5000  1.2571
19   3.7500  4.7500  3.2500  4.5000  2.5143
20   3.7500  5.5000  4.0000  0.5000  3.5968
21   4.5000  2.5000  5.5000  3.5000  2.5143
22   4.5000  3.2500  2.5000  4.5000  3.5968
23   4.5000  4.0000  3.2500  0.5000  4.8540
24   4.5000  4.7500  4.0000  1.5000  0.0000
25   4.5000  5.5000  4.7500  2.5000  1.2571
by the L8 orthogonal array (noise array), which allows two levels of noise for each of the factors.
The levels in the noise array are set by assuming a ±1% variation in the link lengths and angles.
Each iteration will thus involve 25 experiments. Here, each experiment is being replicated eight
times with differing values of noise factors resulting in a total of 200 function evaluations of each
objective and constraint function. The mechanism optimization problem is different from typical
mechanical design optimization problems as the objective functions will have an indeterminate
value at non-assemblable positions, if the Grashof condition is not satisfied. To accommodate this
situation, ANOVA is performed only on the constraint function until all factor levels causing a
constraint violation are eliminated. Once this is accomplished, the objective functions can be
evaluated for any design in the experimental design set-up. Note that this will generally not be
necessary in most design optimization problems and the ANOVA of the constraint and objective
functions can be performed in parallel.
The ANOVA tables of the constraint function for the first two iterations are shown in Tables
5 and 6. In Table 5 it is seen that design factors 2 and 3 have a predominant effect (i.e. with the
highest percentage contributions) on the constraint function and levels 5 and 1 respectively of these
factors are found to have large mean effect values. Thus these levels are eliminated and the second
iteration is performed on the reduced space. Similarly, the ANOVA of the second iteration indicates
that the first level of the first factor causes a violation and hence it is eliminated. All the designs
considered in the third iteration are found to satisfy the constraint and thus ANOVA can be
performed on the objective functions. Note that as the objectives for this problem are not
conflicting, only the mean effect values of the various factors are required to determine their
optimum range. The design factors are bracketed between the optimum levels of the individual
Table 5. ANOVA of constraint function for first iteration
Mean effects
x    Level 1  Level 2  Level 3  Level 4  Level 5  SOS   DOF  MS    F      % Cont
0    0.000    0.350    0.100    0.000    0.000    0.46   4   0.12  4.230  0.84
2    0.000    0.000    0.000    0.050    0.400    0.61   4   0.15  5.609  5.86
3    0.400    0.050    0.000    0.000    0.000    0.61   4   0.15  5.609  5.86
4    0.000    0.350    0.050    0.050    0.000    0.44   4   0.11  4.000  0.00
5    0.050    0.000    0.350    0.000    0.050    0.44   4   0.11  4.000  0.00
Error                                             0.43   4   0.11         87.44
Total                                             2.98  24                100.00
Table 6. ANOVA of constraint function for second iteration
x    Level 1  Level 2  Level 3  Level 4  Level 5  SOS   DOF  MS     F       % Cont
1    0.2500   0.1000   0.0000   0.0000   0.0000   0.24   4   0.061  70.668  87.41
2    0.0625   0.0625   0.0625   0.0625   0.1000   0.01   4   0.00   4.000   0.00
3    0.1000   0.0625   0.0625   0.0625   0.0625   0.01   4   0.00   4.000   0.00
4    0.0625   0.1000   0.0625   0.0625            0.01   4   0.00   4.000   0.00
5    0.0625   0.0625   0.1000   0.0625   0.0625   0.01   4   0.00   4.000   0.00
Error                                             0.01   4   0.00           12.59
Total                                             0.27  24                  100.00
objectives, which are shown highlighted in Table 7. For the subsequent iteration, the bracketed
levels are used as the new range, and the ANOVA (Table 8) is again performed on both objectives.
The optimum levels of each factor for each objective for this iteration are shown highlighted in
Table 8 and the reduced design space is:
x0 = 3.9375, 4.6094 ≤ x2 ≤ 4.75 (2 levels), x3 = 3.5312,
0.5 ≤ x4 ≤ 1.5 (5 levels), 3.9548 ≤ x5 ≤ 4.854 (2 levels).
These levels identify 20 designs (2 × 5 × 2) from which the Pareto-optimal set shown in Table 9
is determined. Thus we obtain six non-dominated designs that can be considered optimal in
a robust sense.
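The level-elimination step used in the first two iterations (average the constraint-violation measure over each factor level, then remove the worst level of the most influential factor) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: a 3 × 3 full factorial and a made-up violation rule stand in for the paper's 25-run orthogonal array and its actual constraint function.

```python
# One constraint-handling pass: mean violation per (factor, level), then
# the worst level of the most influential factor is dropped from the next
# iteration's design space.
from itertools import product

def level_mean_effects(runs, violations, n_factors, n_levels):
    """Mean constraint violation observed at each (factor, level)."""
    sums = [[0.0] * n_levels for _ in range(n_factors)]
    counts = [[0] * n_levels for _ in range(n_factors)]
    for run, v in zip(runs, violations):
        for f, lev in enumerate(run):
            sums[f][lev] += v
            counts[f][lev] += 1
    return [[s / c for s, c in zip(sr, cr)] for sr, cr in zip(sums, counts)]

def worst_level(effects):
    """(factor, level) with the largest mean violation."""
    f = max(range(len(effects)), key=lambda i: max(effects[i]))
    lev = max(range(len(effects[f])), key=effects[f].__getitem__)
    return f, lev

# Hypothetical constraint: runs with factor 0 at level 2 violate it.
runs = list(product(range(3), repeat=2))   # 9 runs, 2 factors, 3 levels each
violations = [1.0 if r[0] == 2 else 0.0 for r in runs]

effects = level_mean_effects(runs, violations, n_factors=2, n_levels=3)
factor, level = worst_level(effects)
print(factor, level)   # level 2 of factor 0 is eliminated next iteration
```

In the paper's Table 5, the same logic flags levels 5 and 1 of factors 2 and 3; here the toy violation rule flags level 2 of factor 0.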
Table 7. ANOVA of objectives 1 and 2 for third iteration

ANOVA table for objective No. 1
x     Level 1  Level 2  Level 3  Level 4  Level 5    SOS     DOF   MS      F       % Cont
0     42.958   41.309   54.595   54.950   56.163   1043.48    4   260.87  13.140   15.12
2     47.245   50.664   53.322   53.795   44.946    295.08    4    73.77   3.716   -0.47
3     59.890   54.012   44.459   45.953   45.657    899.29    4   224.82  11.324   12.12
4     47.754   59.199   49.657   46.605   46.756    559.17    4   139.79   7.041    5.03
5     44.988   56.922   56.637   55.287   36.136   1686.19    4   421.55  21.233   28.51
Error                                               317.65    4    79.41           39.70
Total                                              4800.86   24                   100.00

ANOVA table for objective No. 2
x     Level 1  Level 2  Level 3  Level 4  Level 5    SOS     DOF   MS      F       % Cont
0     -19.903  -15.171  -11.663  -11.611  -11.859   259.80    4    64.95  15.97    10.35
2     -16.491  -14.989  -15.906  -11.827  -10.995   122.81    4    30.70   7.553    3.07
3     -10.491  -13.269  -14.911  -15.044  -16.492   104.86    4    26.22   6.44     2.12
4      -5.023   -8.668  -14.787  -17.140  -24.590  1158.1     4   289.53  71.22    58.10
5     -11.760  -16.581  -16.598  -15.083  -10.186   170.68    4    42.67  10.49     5.62
Error                                                65.04    4    16.26           20.74
Total                                              1881.3    24                   100.00
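The bookkeeping behind Tables 5–8 follows standard Taguchi-style ANOVA: each factor's sum of squares comes from its level means, F is the factor mean square over the pooled error variance, and the percentage contribution appears to be the "pure" value (SS_f − DOF_f · V_error)/SS_total × 100, which would explain the small negative contributions such as −0.47 in Table 7. A minimal sketch of this bookkeeping, with a small full factorial and made-up response values in place of the paper's 25-run data:

```python
# Taguchi-style ANOVA rows of the kind tabulated in Tables 5-8
# (illustrative sketch with made-up data, not the paper's code).
from itertools import product

def taguchi_anova(runs, response, n_factors, n_levels):
    """Return per-factor ANOVA rows plus the pooled error SS and DOF."""
    n = len(response)
    grand = sum(response) / n
    ss_total = sum((y - grand) ** 2 for y in response)
    raw = []
    for f in range(n_factors):
        buckets = [[] for _ in range(n_levels)]
        for run, y in zip(runs, response):
            buckets[run[f]].append(y)
        # factor SS from level means, weighted by runs per level
        ss_f = sum(len(b) * (sum(b) / len(b) - grand) ** 2
                   for b in buckets if b)
        raw.append(ss_f)
    ss_err = ss_total - sum(raw)                     # pooled error SS
    dof_err = (n - 1) - n_factors * (n_levels - 1)   # e.g. 24 - 20 = 4 for L25
    v_err = ss_err / dof_err
    table = []
    for ss_f in raw:
        dof_f = n_levels - 1
        ms_f = ss_f / dof_f
        table.append({"SOS": ss_f, "DOF": dof_f, "MS": ms_f,
                      "F": ms_f / v_err,
                      # "pure" contribution; may go slightly negative
                      "%Cont": 100.0 * (ss_f - dof_f * v_err) / ss_total})
    return table, ss_err, dof_err

# Made-up responses over a 3 x 3 full factorial (factor 0 dominates).
runs = list(product(range(3), repeat=2))
response = [1.0, 2.0, 1.5, 3.0, 4.1, 3.6, 5.2, 6.0, 5.5]
table, ss_err, dof_err = taguchi_anova(runs, response, n_factors=2, n_levels=3)
for row in table:
    print({k: round(v, 3) for k, v in row.items()})
```

By construction, the factor and error sums of squares add back to the total, mirroring the 100.00 totals in the tables.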
Table 8. ANOVA of objectives 1 and 2 for fourth iteration

ANOVA table for objective No. 1
x     Level 1  Level 2  Level 3  Level 4  Level 5    SOS   DOF   MS      F       % Cont
0     50.189   49.021   48.288   48.226   48.344    13.89   4    3.47   29.05     4.94
2     48.547   48.870   48.472   49.183   48.996     1.80   4    0.45    3.767   -0.05
3     48.807   48.683   48.755   48.737   49.087     0.51   4    0.13    1.060   -0.58
4     44.860   46.711   48.754   50.866   52.878   203.9    4   50.99  426.59    83.34
5     47.507   48.082   49.277   50.034   49.169    20.36   4    5.09   42.575    7.61
Error                                                1.91   4    0.48              4.73
Total                                              242.4   24                   100.00

ANOVA table for objective No. 2
x     Level 1  Level 2  Level 3  Level 4  Level 5    SOS   DOF   MS      F       % Cont
0     -4.596   -4.712   -4.811   -5.037   -5.106     0.93   4    0.23    4.142    0.04
2     -5.278   -4.714   -5.310   -4.699   -4.261     3.92   4    0.98   17.418    3.41
3     -4.822   -5.033   -4.919   -4.945   -4.543     0.71   4    0.18    3.166   -0.21
4     -3.233   -3.940   -4.906   -5.538   -6.645    35.71   4    8.93  158.76    39.31
5     -5.044   -6.414   -5.838   -4.503   -2.463    46.38   4   11.60  206.20    51.36
Error                                                0.90   4    0.22              6.10
Total                                               88.56  24                   100.00
Table 9. Non-dominated designs
      x0      x2     x3      x4    x5      S/N1    Obj. 1   S/N2    Obj. 2
1     3.937   4.75   3.5312  0.50  4.854   44.896  0.106   -1.631   0.922
2     3.937   4.75   3.5312  0.75  4.854   46.994  0.095   -1.891   0.910
3     3.937   4.75   3.5312  1.00  4.854   49.443  0.084   -2.171   0.897
4     3.937   4.75   3.5312  1.25  4.854   52.284  0.073   -2.470   0.884
5     3.937   4.75   3.5312  1.50  3.954   56.993  0.058   -5.606   0.755
6     3.937   4.75   3.5312  1.50  4.854   55.529  0.062   -2.784   0.870
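Extracting the non-dominated set of Table 9 from the 20 candidate designs is a pairwise dominance filter. The sketch below assumes objective 1 (the deviation) is minimized and objective 2 is maximized, which is consistent with the trade-off visible in Table 9 but is our reading of the sign conventions rather than an explicit statement in the text; the last two candidate points are invented dominated designs added to show the filter at work.

```python
# Pairwise dominance filter of the kind used to obtain Table 9.
def dominates(a, b, minimize):
    """True if design a is no worse than b in every objective and
    strictly better in at least one, under the given directions."""
    no_worse = all((x <= y) if m else (x >= y)
                   for x, y, m in zip(a, b, minimize))
    better = any((x < y) if m else (x > y)
                 for x, y, m in zip(a, b, minimize))
    return no_worse and better

def pareto_set(points, minimize):
    """Designs not dominated by any other candidate."""
    return [p for p in points
            if not any(dominates(q, p, minimize) for q in points if q != p)]

# The six (Obj. 1, Obj. 2) pairs of Table 9 plus two dominated candidates.
designs = [(0.106, 0.922), (0.095, 0.910), (0.084, 0.897),
           (0.073, 0.884), (0.058, 0.755), (0.062, 0.870),
           (0.110, 0.900), (0.070, 0.860)]
front = pareto_set(designs, minimize=(True, False))
print(len(front))   # the six Table 9 designs survive
```

The O(n^2) filter is entirely adequate here; with only 20 candidates per iteration there is no need for a faster non-dominated sorting scheme.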
[Figure: plot of the non-inferior set; the six designs of Table 9 plotted with Deviation (0.04-0.11) on the horizontal axis and values from 0.70 to 0.95 on the vertical axis.]
Fig. 6. Non-dominated design set.
Figure 6 shows a plot of the six non-dominated designs obtained after the fourth iteration. It
should be noted that the number of non-dominated designs obtained and the accuracy of the
solution depend on the number of iterations, each of which involves a total of 200 function
evaluations per objective. Generally, a larger number of iterations results in a larger
non-dominated design set: three iterations yielded three non-dominated
designs, whereas five iterations yielded 16. Moreover, as each iteration refines
the design space, the discretization interval reduces and therefore the quality of the solution
becomes better with every iteration. Thus the termination criterion for the procedure is determined
by the trade-off between the number of non-dominated designs required, the solution accuracy and
the number of function evaluations.
An added advantage in obtaining a Pareto-optimal set of designs as shown in Fig. 6 is that it
clearly depicts the amount of one criterion that must be sacrificed so as to obtain a particular
improvement in the other criterion. Thus the designer has a clear idea of the available range of
choice and is in a position to make an intelligent decision.
CONCLUSIONS
In this paper, an efficient framework for the identification of a non-dominated design set for
multi-criteria optimization problems with constraints is presented. This work employs a statistical
design of experiment formulation for the purposes of finding robust design solutions to problems
with multiple objectives. In particular, a novel two-step procedure is developed that utilizes
ANOVA results to handle constraints and to identify a Pareto-optimal solution set based on
relative effects of the various factors on the objective functions. A salient feature of this approach
is its consistent treatment of conflicting objectives that enables determination of such
Pareto-optimal solutions based on a rigorous categorization of design factors. Thus, this technique
eliminates the need for the trade-off decisions otherwise required to compute pre-set weights
that reduce the multiple objectives to a single expression. The potential application of this technique
to engineering design problems is discussed with the aid of an illustrative beam design and a
mechanism dimensional synthesis problem.
Acknowledgements--The authors gratefully acknowledge the support of the National Science Foundation under Grant
No. CMS-9402608.
REFERENCES
1. Roy, R. K., A Primer on the Taguchi Method. Van Nostrand Reinhold, New York, 1990.
2. Kacker, R. N., Journal of Quality Technology, 1985, 17(4), 176-188.
3. Bagchi, T. P., Taguchi Methods Explained: Practical Steps to Robust Design. Prentice-Hall of India, New Delhi, 1993.
4. Dehnad, K., Quality Control, Robust Design, and the Taguchi Method. Wadsworth & Brooks/Cole, California, 1989.
5. Song, A. A., Mathur, A. and Pattipati, K. R., IEEE Transactions on Systems, Man and Cybernetics, 1995, 25(11), 1437-1446.
6. Kota, S. and Chiou, S. J., Mechanism and Machine Theory, 1993, 28(6), 777-794.
7. Otto, K. N. and Antonsson, E. K., ASME Design Theory and Methodology, DE-Vol. 31, 1991.
8. Cohon, J. L., in Design Optimization, ed. John S. Gero. Academic Press, New York, 1985.
9. Cohon, J. L., Multiobjective Programming and Planning. Academic Press, New York, 1978.
10. Osyczka, A., in Design Optimization, ed. John S. Gero. Academic Press, New York, 1985.
11. Charnes, A. and Cooper, W. W., Management Models and Industrial Applications of Linear Programming, Vol. 1. Wiley, New York, 1961.
12. Keeney, R. L. and Raiffa, H., Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, New York, 1976.
13. Von Neumann, J. and Morgenstern, O., Theory of Games and Economic Behavior, 3rd edn. Princeton University Press, Princeton, NJ, 1953.
14. Thurston, D. L., Carnahan, J. V. and Liu, T., ASME Design Theory and Methodology, DE-Vol. 31, 1991.
15. Box, G., Technometrics, 1988, 30(1), 1-17.
16. Leon, R. V., Shoemaker, A. C. and Kacker, R. N., Technometrics, 1987, 29(3), 253-265.
17. Majchrzak, J., in Aspiration Based Decision Support Systems, eds A. Lewandowski and A. P. Wierzbicki. Springer-Verlag, New York, 1989.
18. Tribus, M. and Szonyi, G., Quality Progress, May 1989.
A ROBUST MULTI-CRITERIA OPTIMIZATION APPROACH
Summary--The overall value of a product is generally determined by its performance with respect
to multiple factors. The task of product design is therefore simplified if all of these performance
characteristics can be optimized simultaneously. Another important factor that determines product
quality is its sensitivity to external or uncontrollable variations. To incorporate these considerations
into product design, this paper presents a new robust multi-criteria optimization approach that
complements multi-objective optimization concepts with statistical robust design techniques. In this
approach, sets of Pareto-optimal robust design solutions are obtained with the aid of designed
experiments that exploit ANOVA results to quantify the relative dominance and significance of the
design factors. The application of this method to engineering design problems is illustrated by two
case studies, including a dimensional synthesis problem for a mechanism.