Gradient-Based Multi-Objective Optimization Technology
                                             Vladimir Sevastyanov1
                                       eArtius, Inc., Irvine, CA 92614, US


                                              EXTENDED ABSTRACT


          Multi-Gradient Analysis (MGA) and two multi-objective optimization methods based on
          MGA are presented: the Multi-Gradient Explorer (MGE) and Multi-Gradient Pathfinder
          (MGP) methods. The Dynamically Dimensioned Response Surface Method (DDRSM) for
          dynamic reduction of task dimension and fast estimation of gradients is also disclosed.
          MGE and MGP are based on the MGA’s ability to analyze gradients and determine the
          area of simultaneous improvement (ASI) for all objective functions. MGE starts from a
          given initial point, and approaches Pareto frontier sequentially by stepping into the ASI
          area until a Pareto optimal point is obtained. MGP starts from a Pareto-optimal point,
          and steps along the Pareto surface in the direction that allows for improvement on a subset
          of the objective functions with higher priority. DDRSM works for optimization tasks with
          virtually any number (up to thousands) of design variables, and requires just 5-7 model
          evaluations per Pareto optimal point for the MGE and MGP algorithms regardless of task
          dimension. Both algorithms are designed to optimize computationally expensive models,
          and are able to optimize models with dozens, hundreds, and even thousands of design
          variables.


                                                  1. Introduction

    There are two groups of multi-objective optimization methods: scalarization and non-scalarization methods [1].
Scalarization methods use a global criterion to combine multiple objective functions into a single utility function, and
require solving a sequence of single-objective problems. The absence of numerical methods designed specifically for
multi-objective optimization motivated the invention of such artificial scalarization techniques. The weighted sum
approaches widely used for design optimization do not work well with non-convex Pareto surfaces: a uniform
distribution of Pareto optimal points cannot be guaranteed even if the weights are varied consistently and
continuously. Hence, the Pareto set will be incomplete and inaccurate [1].
       The Genetic Algorithm (GA) is one of the major non-scalarization techniques. It combines the use of
random numbers with heuristic strategies inspired by evolutionary biology. GAs are computationally extremely
intensive and resource-consuming, and do not provide adequate accuracy [1].
     In order to overcome the limitations of GAs and scalarization techniques, a new gradient-based technique has been
invented at eArtius, Inc. (patented). The technique uses Multi-Gradient Analysis (MGA), and enabled the development
of the Multi-Gradient Explorer (MGE) multi-objective optimization algorithm.
     Further research was inspired by two fundamental issues typical of traditional multi-objective optimization
approaches, and by the steadily increasing computational effort necessary for performing optimization: (a) the
necessity to search for optimal solutions in the entire design space, even though Pareto optimal points can only be
found on the Pareto frontier; and (b) the necessity to cover the entire Pareto frontier with a large number of found
Pareto optimal designs, while the user needs just a few trade-offs in his area of interest on the Pareto frontier. These
two issues caused the use of brute-force methods, such as parallelization of algorithms, in most prior
multi-objective optimization technologies.
     However, even brute-force methods cannot resolve fundamental problems related to the famous “curse of
dimensionality” phenomenon. According to [2], adding extra dimensions to the design space requires an exponential
increase in the number of Pareto optimal points to maintain the same quality of approximation for Pareto frontier.
     A new Multi-Gradient Pathfinder (MGP) algorithm has been invented at eArtius (patent pending). MGP uses the
Pareto frontier as its search space, and performs directed optimization on the Pareto frontier in the area of interest

1
    Chief Executive Officer


                                                             1
                                      American Institute of Aeronautics and Astronautics
determined by the user, which increases algorithm efficiency by orders of magnitude, and gives the user more control
over the optimization process.
     Another important area for improvement in optimization technology is related to response surface methods,
which are commonly used in engineering design to minimize the expense of running computationally expensive
analyses and simulations. All known approximation techniques, including Response Surface Methodology, Kriging
models, etc., are limited to 40-60 design variables [3] because of the same “curse of dimensionality” phenomenon.
According to [2], adding extra dimensions to the design space requires an exponential increase in the number of
sample points necessary to build an adequate global surrogate model.
     A new response surface method named Dynamically Dimensioned Response Surface Method (DDRSM) has
been invented at eArtius (patent pending). It successfully avoids the “curse of dimensionality” limitations, and
works efficiently with up to thousands of design variables without increasing the number of sample points.
     The new eArtius design optimization technology comprises the optimization algorithms MGE, MGP, HMGE,
and HMGP, and the response surface method DDRSM, all implemented in the eArtius design optimization tool
Pareto Explorer.

                                          2. Multi-Gradient Analysis
     Any traditional gradient-based optimization method comprises sequential steps from an initial point to an
optimal point. Each step improves the current point with respect to the objective function. The most important
element of such an algorithm is determining the direction of the next step. Traditional gradient-based algorithms
use the fact that the gradient of the objective function indicates the direction of the steepest ascent of the objective
function. But what if several objective functions need to be optimized? In this case we need to find a point improving
all objective functions simultaneously. The following diagrams (see FIG.1) illustrate graphically how MGA
determines the area of simultaneous improvement for all objective functions. It is illustrated for the simplest
multi-objective optimization task, with two independent variables and two objective functions that need to be
maximized.




             FIG. 1A                                  FIG. 1B                                 FIG. 1C


FIG. 1A illustrates how the gradient G1 and the line L1 (G1 is perpendicular to L1) help to split the sub-region
    into the area of increased values A1 and the area of decreased values for the first objective function;
            FIG. 1B similarly illustrates splitting the sub-region for the second objective function;
FIG. 1C illustrates that the area of simultaneous increase (ASI) of both objective functions F1 and F2 is equal
                                   to the intersection of areas A1 and A2: A1∩A2.
     The main problem of Multi-Gradient Analysis is to find a point X’ ∈ ASI, which guarantees that the point
X0 is improved by the point X’ with respect to all objective functions.
     MGA is illustrated with two objective functions in FIG.1, but it works in the same way with any reasonable
number of objective functions and an unlimited number of design variables.
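The condition defining the ASI can be checked directly: for objectives being maximized, a step direction d lies in the ASI exactly when it has a positive projection on every gradient. The following minimal sketch in plain Python is illustrative only (the function name is not part of the eArtius implementation):

```python
def in_asi(direction, gradients):
    """True if `direction` improves every (maximized) objective,
    i.e. it has a positive dot product with each gradient G_i."""
    return all(
        sum(d * g for d, g in zip(direction, grad)) > 0.0
        for grad in gradients
    )

# Two objectives in two variables, as in FIG.1: G1 and G2 are the
# objective gradients at the current point X0.
G1, G2 = (1.0, 0.0), (0.0, 1.0)
print(in_asi((1.0, 1.0), [G1, G2]))   # True: improves both F1 and F2
print(in_asi((-1.0, 1.0), [G1, G2]))  # False: improves F2 only
```

For minimized objectives the same test applies to the negative gradients.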

     The MGA pseudo-code:

1   Begin
2   Input initial point X*.
3   Evaluate criteria gradients at X*.
4   Determine ASI for all criteria.
5   Determine the direction of simultaneous improvement for all objectives for the next step.
6   Determine the length of the step.
7   Perform the step, and evaluate the new point X’ belonging to ASI.
8   If X’ dominates X* then report improved point X’ and go to 10.


9 If X’ does not dominate X* then report X* as a Pareto optimal point.
10 End

     MGA can be implemented in a number of different ways. Some of them are discussed in [4]. In fact, the same
technique is widely used for constrained gradient-based optimization with a single objective function [5]. However,
the technique was never used for multi-objective optimization.
       Since the MGA technique results in an improved point, it can be used as an element of any multi-objective
optimization algorithm. The following two sections discuss two MGA-based multi-objective optimization
algorithms.

                                      3. Multi-Gradient Explorer Algorithm
    MGE follows an approach conventional in optimization practice. It starts from an initial point, and iterates toward
the Pareto frontier until a Pareto optimal point is found. Then it takes another initial point, iterates again, and so on.

     The MGE pseudo-code:

1   Begin
2   Generate required number of initial points X1,…,XN.
3   i=1.
4   Declare current point: Xc= Xi.
5   Apply MGA analysis to Xc for finding a point X’ in ASI.
6   If X’ dominates Xc then Xc=X’ and go to 5.
7   If X’ does not dominate Xc then declare Xc as Pareto optimal point; i=i+1 and go to 4.
8   Report all the solutions found.
9   End
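The pseudo-code above can be turned into a runnable sketch. The concrete choices below are illustrative assumptions rather than the patented eArtius rules: gradients are estimated by central finite differences (standing in for DDRSM), the direction inside the ASI is taken as the sum of the unit descent gradients, and the step length is found by simple backtracking until the new point dominates the current one.

```python
import math

def grad(f, x, h=1e-6):
    """Central-difference gradient estimate (stand-in for DDRSM)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def mga_step(x, objectives, step=0.5, shrink=0.5, tol=1e-8):
    """One MGA step: move into the ASI (here: along the sum of unit
    descent gradients) and backtrack until the new point dominates x.
    Returns (new_point, improved_flag)."""
    d = [0.0] * len(x)
    for f in objectives:
        g = grad(f, x)
        n = math.sqrt(sum(c * c for c in g)) or 1.0
        for i in range(len(x)):
            d[i] -= g[i] / n                 # unit descent direction of f
    f0 = [f(x) for f in objectives]
    while step > tol:
        x_new = [xi + step * di for xi, di in zip(x, d)]
        if all(f(x_new) < fi for f, fi in zip(objectives, f0)):
            return x_new, True               # x_new dominates x
        step *= shrink                       # shrink the step and retry
    return x, False                          # no dominating point found

def mge(x0, objectives, max_iter=200):
    """MGE inner loop (pseudo-code steps 5-6): iterate MGA steps from x0
    until no further simultaneous improvement is possible."""
    x = list(x0)
    for _ in range(max_iter):
        x, improved = mga_step(x, objectives)
        if not improved:
            break                            # declare x Pareto optimal
    return x

# Toy bi-objective problem: the Pareto set is the segment x2 = 0, -1 <= x1 <= 1.
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: (x[0] + 1.0) ** 2 + x[1] ** 2
print(mge([0.0, 2.0], [f1, f2]))             # x2 is driven to ~0: Pareto optimal
```

Starting from (0, 2), each MGA step reduces both objectives by shrinking x2; once x2 reaches zero the two gradients point in opposite directions, the ASI is empty, and the loop stops, declaring the point Pareto optimal.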

     MGE algorithm can be used in two modes: (a) improvement of a given initial point, and (b) approximation of
the entire Pareto frontier.
     In mode (a), MGE usually performs about 4-7 steps, and finds several Pareto optimal points improving a
given initial design (see FIG.2.) Assuming that the DDRSM response surface method is used for estimating gradients,
it usually takes just 15-30 model evaluations to approach the Pareto frontier regardless of task dimension. Thus,
MGE is the best choice for computationally expensive simulation models when covering the entire Pareto frontier
is prohibitively expensive.
     In mode (b), MGE sequentially starts from randomly distributed initial points. Since the initial points are
uniformly distributed in the design space, it is expected that the Pareto optimal points found in multiple iterations will
cover the entire Pareto frontier (see FIG.3.)


                                            Minimize f1 = x1² + (x2 − 1)²
                                            Minimize f2 = x1² + (x2 + 1)² + 1                             (1)
                                            Minimize f3 = (x1 − 1)² + x2² + 2
                                            −2 ≤ x1, x2 ≤ 2
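Transcribed directly, benchmark (1) is three quadratics over the box −2 ≤ x1, x2 ≤ 2. Each objective has a distinct individual minimizer, so the three single-objective optima span a non-trivial Pareto frontier:

```python
def benchmark1(x1, x2):
    """Objectives of benchmark (1); all three are minimized over
    -2 <= x1, x2 <= 2."""
    f1 = x1 ** 2 + (x2 - 1) ** 2
    f2 = x1 ** 2 + (x2 + 1) ** 2 + 1
    f3 = (x1 - 1) ** 2 + x2 ** 2 + 2
    return f1, f2, f3

# The individual minimizers (0, 1), (0, -1) and (1, 0) are distinct,
# so trading the objectives off against each other is unavoidable:
print(benchmark1(0.0, 1.0))   # (0.0, 5.0, 4.0) -- f1 at its minimum
print(benchmark1(1.0, 0.0))   # (2.0, 3.0, 2.0) -- f3 at its minimum
```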


     Table 1 and FIG.2 illustrate MGE algorithm in the mode of improvement of a given initial point.

                    Table 1 Improvement of a given design by MGE optimization algorithm

                                                   Evaluation #       f1            f2       f3
                      Initial Point                1                  12.26         5.394    14.05
                      Pareto Optimal Point         9                  3.65          1.38     2.84

     As follows from Table 1, the initial point has been significantly improved with respect to all objective functions.
The target Pareto optimal point was found after 9 model evaluations. After that, MGE spent 26 additional model
evaluations estimating gradients via the DDRSM method, and tried to improve point #9. MGE was stopped because


further improvement of point #9 was not possible, and the point was declared Pareto optimal. Next, all
evaluated points were compared against each other with respect to all objectives, and all dominated points were
declared transitional points. The rest of the points were declared Pareto optimal (see FIG.2.) The majority of
evaluated points from #10 to #35 happened to be Pareto optimal in this optimization run. Thus, the user has 15 Pareto
optimal points out of 35 model evaluations.




 FIG.2 shows results of improvement of a given point by MGE algorithm. MGE started from the initial
 point (orange triangle marker on the diagrams), and performed a few steps towards the Pareto frontier;
                MGE found 15 Pareto optimal points at the cost of 35 model evaluations.

    The following FIG.3 illustrates the ability of MGE algorithm to cover the entire Pareto frontier. In this scenario
MGE sequentially starts from randomly distributed initial points, and iterates towards Pareto frontier based on MGA
technique.




    FIG. 3 shows Pareto optimal points found by MGE algorithm for the benchmark (1). MGE sequentially
started optimization from randomly distributed initial points, and covered the entire Pareto frontier evenly.

    FIG.3 shows that MGE algorithm approximates the entire Pareto frontier, and covers it evenly. MGE is
computationally efficient. It has spent 2420 model evaluations, and found 1156 Pareto optimal
points—2420/1156=2.1 model evaluations per Pareto optimal point.

     In addition to the unconstrained multi-objective optimization technique explained in this paper, and illustrated
by the two previous benchmark problems, MGE algorithm also provides means for constrained multi-objective
optimization.

    The following simple benchmark (2) formulates the well-known two-bar truss constrained optimization problem,


and illustrates the constrained optimization aspect of MGE algorithm:

                            Minimize Deflection = (P·d) / (2·A·E·sin(t)·cos(t)²)
                            Minimize Weight = (2·d·A·g) / sin(t)
                              where
                              Stress = P / [2·A·cos(t)] < 40                                           (2)
                              t = degree·asin(1) / 90
                              d = 1000; E = 2.1·10⁴; g = 6·10⁻⁶
                              A ∈ [20; 50]; degree ∈ [45; 65]


    FIG.4 shows constrained optimization results found by MGE optimization algorithm for the benchmark (2).




         FIG. 4 shows all points evaluated by MGE optimization algorithm. The diagrams illustrate both
 objective space (left) and design space (right.) There are three categories of points on the diagrams: Pareto
optimal, feasible, and transitional. MGE sequentially started optimization from randomly distributed initial
points, and covered the entire Pareto frontier evenly. MGE has spent 400 model evaluations; it has found 100
                                 Pareto optimal points and 278 feasible points.

     MGE uses a technique similar to the Modified Method of Feasible Directions (MMFD) [5] for constrained
optimization. Since MMFD was designed for constrained single-objective optimization, it could not be used as-is in
the MGE algorithm, and has been adjusted to the needs of multi-objective optimization.
     Current implementation of MGE algorithm uses the previously mentioned MMFD-like constrained optimization
approach for tasks with a relatively small number of constraints, and automatically shifts to Hybrid Multi-Gradient
Explorer (HMGE) optimization algorithm for tasks with a larger number of constraints. MGE algorithm employs the
hybrid HMGE code only in the infeasible area, and shifts back to the pure gradient based MGA technique as soon as
a feasible point has been found.
     The HMGE algorithm has demonstrated high efficiency and reliability on the most challenging real-life
constrained optimization tasks. It finds feasible areas faster and more reliably than pure gradient-based techniques.
Thus, the combination of MGE and HMGE is a powerful design optimization tool for real-life tasks with up to
thousands of design variables and up to hundreds of constraints.
     It is recommended to use MGE algorithm for multi-objective optimization of computationally expensive
simulation models when covering the entire Pareto frontier is prohibitively expensive. MGE allows improvement on
a given design with respect to several objectives (see this scenario on FIG.2), and usually delivers several Pareto
optimal points after 10-30 model evaluations.




                                      4. Multi-Gradient Pathfinder Algorithm
     Multi-Gradient Pathfinder (MGP) is the first multi-objective optimization algorithm which implements the idea
of directed optimization on Pareto frontier based on the user’s preferences.

      Directed optimization on the Pareto frontier means that a search algorithm steps along the Pareto frontier from
a given initial Pareto optimal point towards a desired Pareto optimal point. The search algorithm is supposed to stay
on the Pareto frontier throughout the optimization process until the desired Pareto optimal point is reached. Then all
(or most) of the evaluated points will also be Pareto optimal.
      Moving along Pareto frontier improves some objectives and compromises other ones. This consideration gives a
clue as to how directed optimization needs to be organized to become beneficial for users. In fact, it is enough to
formulate which objective functions are preferable, and need to be improved first. This formulates a goal for the
directed search on Pareto frontier.
      In the case of L=2 objective functions, the Pareto frontier is a line in the objective space. Thus, the MGP
algorithm has just two directions to choose from: improving the 1st or the 2nd objective function.
      In the case of L>2 objective functions, the Pareto frontier is a multi-dimensional surface, and the algorithm has
an infinite number of directions to move from a given point along the surface. In either case, the user needs to
determine the direction based on his preferences.

     Based on the above considerations, the task of directed optimization on Pareto frontier can be formulated in the
following way:

                                  Minimize   F(X) = [F1(X), F2(X), ..., Fm(X)]ᵀ
                                  X_PF ∈ X
                                  Minimize+  P(X) = [P1(X), P2(X), ..., Pn(X)]ᵀ                               (3)
                                  X_PF ∈ X

                                  subject to: qj(X) ≤ 0;   j = 1, 2, ..., k


     where X_PF ∈ X is a subset of the design space X which belongs to the Pareto frontier; m is the number of
non-preferable objective functions F(X); and n is the number of preferable objective functions P(X), which determine
the direction of the move (directed search) on the Pareto frontier. L = m + n is the total number of objective functions.
The Pareto frontier is determined by both sets of objectives, F(X) and P(X).
     The operator Minimize+ applied to P(X) means that it is required to find the best points on the Pareto frontier
with respect to the preferable objectives P(X).

    How MGP operates:

     First of all, the user needs to determine which objective(s) are preferable (more important) for him. In this way,
the user indicates his area of interest on the Pareto frontier.

     MGP starts from a given Pareto optimal point and performs a required number of steps along Pareto frontier in a
direction of simultaneous improvement of preferable objectives. On each step, MGP solves two tasks (see FIG.5,
green and blue arrows):

    •      Improves preferable objectives’ values;
    •      Maintains a short distance from the current point to Pareto frontier.

     It is important to note that there are cases when a given initial point is not Pareto optimal. In this case MGP works
exactly as MGE algorithm. It approaches Pareto frontier first, and then starts stepping along the Pareto frontier in the
direction determined by preferable objectives.





      FIG.5 illustrates the basic idea of MGP algorithm for the case when both objective functions F1 and F2
                      need to be minimized and F2 is considered as a preferable objective.

    On the first half-step, MGP steps in a direction of improvement of the preferable objective – see green arrows on
FIG.5. On the second half-step, MGP steps in a direction of simultaneous improvement in ALL objectives—see blue
arrows, and in this way maintains a short distance to Pareto frontier. Then MGP starts the next step from the newly
found Pareto optimal point.
    Main features of MGP algorithm are explained in the following pseudo-code.

    1 Begin
    2 Input initial Pareto optimal point X* and required number of steps N.
    3 i=1.
    4 Declare current point: Xc= X*.
    5 Evaluate gradients of all objective functions on Xc.
    6 Determine ASI(1) for preferable objectives.
    7 Make a step in ASI(1) improving only preferable objectives.
    8 Determine ASI(2) for ALL objectives.
    9 Make a step in ASI(2) improving ALL objectives; the resulting Pareto point is X**.
    10 If i < N then declare current point Xc= X**; i=i+1; go to 5.
    11 Report all the solutions found.
    12 End
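The two half-steps of the pseudo-code can be sketched as follows. The direction and step-length rules here are illustrative assumptions rather than the disclosed eArtius implementation: the first half-step follows the negative gradient of the preferable objective, and the second applies an MGA-style correction (sum of unit descent gradients, accepted only if the new point dominates), which keeps the trajectory close to the Pareto frontier.

```python
import math

def grad(f, x, h=1e-6):
    """Central-difference gradient estimate."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def mga_correction(x, objectives, step=0.25, shrink=0.5, tol=1e-9):
    """Second half-step (pseudo-code steps 8-9): try to improve ALL
    objectives; if no dominating point exists, keep x as is."""
    d = [0.0] * len(x)
    for f in objectives:
        g = grad(f, x)
        n = math.sqrt(sum(c * c for c in g)) or 1.0
        for i in range(len(x)):
            d[i] -= g[i] / n                  # sum of unit descent directions
    f0 = [f(x) for f in objectives]
    while step > tol:
        x_new = [xi + step * di for xi, di in zip(x, d)]
        if all(f(x_new) < fi for f, fi in zip(objectives, f0)):
            return x_new
        step *= shrink
    return x

def mgp(x0, preferable, objectives, n_steps=20, s=0.2):
    """Step along the Pareto frontier improving the preferable objective."""
    x = list(x0)
    for _ in range(n_steps):
        g = grad(preferable, x)
        n = math.sqrt(sum(c * c for c in g)) or 1.0
        x = [xi - s * gi / n for xi, gi in zip(x, g)]   # half-step 1 (steps 6-7)
        x = mga_correction(x, objectives)               # half-step 2 (steps 8-9)
    return x

# Toy problem: the Pareto set is the segment x2 = 0, -1 <= x1 <= 1.
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2   # preferable objective
f2 = lambda x: (x[0] + 1.0) ** 2 + x[1] ** 2
print(mgp([-1.0, 0.0], f1, [f1, f2]))
```

Starting from the Pareto optimal point (−1, 0) with f1 preferable, the trajectory marches along the front toward f1's own optimum near (1, 0), never leaving the Pareto set on this problem.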

    The abbreviations ASI(1) and ASI(2) in the above pseudo-code stand for the Area of Simultaneous Improvement
(ASI) of the preferable objectives and of all objectives, respectively (see FIG.1A-1C).

    The multi-objective task formulation (4) determines three objectives to be minimized. According to the
optimization task formulation (3), two of them (f2 and f3) are preferable:

                                           Minimize  f1 = x1² + (x2 − 1)²
                                           Minimize+ f2 = x1² + (x2 + 1)² + 1                           (4)
                                           Minimize+ f3 = (x1 − 1)² + x2² + 2
                                           −2 ≤ x1, x2 ≤ 2


    The task formulation (4) corresponds to the blue markers on FIG.6.




FIG. 6 shows Pareto optimal points found by MGP algorithm for the benchmark task (4). MGP has
started optimization from the same circled point twice: (a) with one preferable objective f3 – see green points;
   (b) with two preferable objectives f2 and f3 – see blue points. Transitional points (red and magenta) were
                  evaluated to build local response surface models, and to estimate gradients.

    All evaluated points (optimal and non-optimal) are visualized on FIG.6, and we can make a few observations
confirming that MGP performs directed optimization on Pareto frontier:
             (a)        MGP algorithm performs its search solely on the Pareto frontier, and only in the area of interest;
        only a few of the evaluated points are not Pareto optimal.
             (b)        The direction of movement along the Pareto frontier depends on the selection of preferable
        objectives, as expected. The green trajectory clearly indicates improvement of f3, and the blue trajectory
        indicates simultaneous improvement of f2 and f3.
             (c)        MGP is extremely efficient. The majority of evaluated points are Pareto optimal: 191 out of
        238 for f3 as the preferable objective, and 281 out of 316 for the two preferable objectives f2 and f3.

    The benchmark (5) and FIG.7 illustrate that, in the case of two objective functions, MGP is able to start from one
end of the Pareto frontier and cover it completely to the other end.
    The benchmark problem (5) has been chosen because it has a simple classical Pareto front, and allows one to
visualize MGP behavior in both objective space and design space.

                                    Minimize+ f1 = x1² + x2
                                    Minimize  f2 = x2² + x1,          x1, x2 ∈ [−10; 10]                      (5)

     Operator Minimize+ in the task formulation (5) means that the objective f1 is preferable, and MGP needs to
step along Pareto frontier in the direction which improves the objective f1.
     The following FIG.7 illustrates a solution of a directed multi-objective optimization task (5) found by MGP
algorithm.




      FIG. 7 Pareto optimal and transitional points found by MGP algorithm for the benchmark (5). MGP
starts from the initial point, and sequentially steps along the Pareto frontier until the end of the Pareto
           frontier is reached. MGP has found 225 Pareto optimal points out of 273 model evaluations.

     The diagrams on FIG.7 illustrate all the points evaluated by MGP algorithm. All yellow markers are obscured
by green markers on the diagrams. This means that transitional points are located very close to Pareto optimal
points, and the majority of the points evaluated by MGP algorithm are Pareto optimal (225 of 273). MGP algorithm
does not have to iterate towards the Pareto frontier repeatedly. Instead, it literally steps along the Pareto frontier.
MGP has spent some model evaluations to estimate gradients by the finite difference method, and was able to stay
on the Pareto frontier at each step throughout the optimization process. Straight parts of the Pareto frontier did not
require evaluating transitional points at all: every new point evaluated while MGP was stepping along straight
fragments of the Pareto frontier was Pareto optimal. This can be recognized by the absence of large yellow markers
behind smaller green markers on a few parts of the Pareto front. However, stepping along the convex part of the
Pareto frontier required more transitional points to be evaluated in order to maintain a short distance to the Pareto
frontier (see FIG.7.)
     The benchmark problem (6) and FIG. 8 illustrate the ability of MGP algorithm to step along Pareto frontier with
a step size determined by the user, and the ability to find disjoint parts of Pareto frontier.

                              Minimize  F1 = 1 + (A1 + B1)² + (A2 + B2)²
                              Minimize+ F2 = 1 + (x1 + 3)² + (x2 + 1)²
                              A1 = 0.5·sin(1) − 2·cos(1) + sin(2) − 1.5·cos(2)                         (6)
                              A2 = 1.5·sin(1) − cos(1) + 2·sin(2) − 0.5·cos(2)
                              B1 = 0.5·sin(x1) − 2·cos(x1) + sin(x2) − 1.5·cos(x2)
                              B2 = 1.5·sin(x1) − cos(x1) + 2·sin(x2) − 0.5·cos(x2)
                              x1, x2 ∈ [−π, π]




                                                              FIG.8A




                                                              FIG.8B

      FIG. 8 shows all evaluated points (Pareto optimal and transitional) found by MGP algorithm for the
 benchmark (6) with different values of the step size S, which determines the distance between points on the
Pareto frontier. MGP starts from the initial point, and steps along the Pareto frontier in the direction improving
 the preferable objective F2. The results on FIG.8A correspond to S=0.005, and the results on FIG.8B were
                                            found with S=0.015.


     The diagrams on FIG.8A show 118 Pareto optimal points found at the cost of 684 model evaluations, which
corresponds to the step size S=0.005. The diagrams on FIG.8B show that with S=0.015, MGP covers the Pareto
frontier with 55 Pareto optimal points, and spends just 351 model evaluations. In both cases the Pareto frontier is
covered evenly and completely. The run with the smaller step size is almost twice as computationally expensive,
but brings twice as many Pareto optimal points; in other words, it is twice as accurate. Thus, the user always has a
choice: to save model evaluations by increasing the step size, or to increase the accuracy of the solution by
decreasing the step size.
     MGP algorithm has demonstrated a relatively low efficiency for the benchmark (6) compared with the
benchmark (5) because it spent a significant number of model evaluations in transitions from one disjoint part of
the Pareto frontier to another (see yellow markers on FIG.8.)

     In this study most of the benchmark problems are used to illustrate the unusual capabilities of the MGE and
MGP algorithms. Comparing optimization algorithms is not a key point of this paper. However, a few benchmarks
will be used to compare MGP algorithm with three state-of-the-art multi-objective optimization algorithms
developed by a leading company of the Process Integration and Design Optimization (PIDO) market: Pointer,
NSGA-II, and AMGA. These commercial algorithms represent the highest level of optimization technology
currently available on the PIDO market.
     For the algorithms AMGA, NSGA-II, Pointer, and MGP, only the default parameter values have been used, to
make sure that all algorithms are compared under equal conditions.

    The following benchmark ZDT3 (7) has two objective functions and 30 design variables:


                                 Minimize  F1 = x1
                                 Minimize+ F2 = g · [1 − √(F1/g) − (F1/g) · sin(10π·F1)]                   (7)
                                 g = 1 + (9/(n − 1)) · Σ(i=2..n) xi;   0 ≤ xi ≤ 1, i = 1,...,n;  n = 30
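ZDT3 transcribes directly into code; at x = (0, ..., 0) we have F1 = 0, g = 1, and F2 = 1, the left endpoint of the global Pareto front, which gives a quick sanity check:

```python
import math

def zdt3(x):
    """ZDT3 benchmark (7): F1 and F2 are minimized over 0 <= xi <= 1."""
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 / (n - 1) * sum(x[1:])
    f2 = g * (1.0 - math.sqrt(f1 / g) - (f1 / g) * math.sin(10.0 * math.pi * f1))
    return f1, f2

print(zdt3([0.0] * 30))   # (0.0, 1.0): left endpoint of the global Pareto front
```

The sin term is what breaks the front into the disjoint pieces that trap the algorithms compared below.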


    The benchmark (7) has dozens of local Pareto fronts, which is a challenge for most multi-objective
optimization algorithms.

     The following FIG.9 shows that optimization search in the entire design space is not productive compared
to directed optimization on Pareto frontier performed by MGP algorithm.




    FIG. 9 Optimization results comparison graph for algorithms MGP (eArtius), NSGA-II, AMGA, and
  Pointer. All optimization algorithms performed an equal number (523) of function evaluations. Graph
   displays the criteria space and two projections of the design space with all evaluated points for each
               optimization algorithm. MGP algorithm used DDRSM to estimate gradients.




     As can be seen on FIG.9, MGP algorithm (green and red markers) performs its search in the area of the global
Pareto frontier, and it covered the Pareto frontier evenly and completely. The other algorithms perform searches in
the entire design space, and have difficulties finding the global Pareto frontier. Only Pointer was able to find a few
Pareto optimal points in the central part of the global Pareto frontier. AMGA and NSGA-II did not find a single
Pareto optimal point after 523 model evaluations, and performed the majority of evaluations very far from the
global Pareto frontier.

                               5. Comparison with Weighted Sum Method
    The most common approach to gradient-based multi-objective optimization is the weighted sum method [1],
which employs the utility function (8):

                                                  U = Σ_{i=1..k} wi Fi(X)                        (8)

    where w is a vector of weights, typically set by the user such that Σ_{i=1..k} wi = 1 and wi > 0.


      If all of the weights are positive, the minimum of (8) is Pareto optimal [6]. In other words, minimizing the
utility function (8) is sufficient for Pareto optimality. However, the formulation does not provide a necessary
condition for Pareto optimality [7].

     The biggest problem with the weighted sum approach is that it is impossible to obtain points on non-convex
portions of the Pareto optimal set in the criterion space.

    Theoretical reasons for this deficiency have been described in [8, 9, 10]. Also, varying the weights
consistently and continuously may not necessarily result in an even distribution of Pareto optimal points and a
complete representation of the Pareto optimal set [8].

    Let us consider a sample illustrating the above deficiencies.

    The following benchmark model (9) has a non-convex Pareto frontier.


                                Minimize f1 = x1
                                Minimize f2 = 1 + x2² − x1 − 0.1 ⋅ sin(3π ⋅ x1)                  (9)
                                x1 ∈ [0;1]; x2 ∈ [−2;2]

     The Sequential Quadratic Programming (SQP) algorithm is one of the most popular gradient-based
single-objective optimization algorithms. An implementation of SQP has been used to minimize the utility
function (10) in order to find Pareto optimal points.


                                        Minimize U = w1 f1 + w2 f2;                              (10)
                                        w1, w2 ∈ [0;1]; w1 + w2 = 1

    The SQP algorithm performed a single-objective optimization of the utility function (10) 107 times, for
1667 model evaluations in total. Every optimization run was performed with an incremented value
of w1 ∈ [0;1], with w2 = 1 − w1. Since the w1 values covered the interval [0;1] evenly and completely, the
diversity of the resulting Pareto optimal points was expected to be high. However, the 107 Pareto
optimal points covered only the relatively small left and right convex parts of the Pareto frontier, and just one
Pareto optimal point is located on the middle part of the frontier (see blue markers in FIG. 10A and 10B).
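
    This behavior is easy to reproduce. The sketch below (a simple grid search standing in for SQP) sweeps
w1 and minimizes the utility function (10) on the benchmark (9). Since x2² ≥ 0, the minimizer always has
x2 = 0, so only x1 needs to be swept; the minimizers cluster on the two convex parts of the front and skip
the non-convex middle section:

```python
import math

def f1(x1, x2):
    return x1

def f2(x1, x2):
    return 1.0 + x2 ** 2 - x1 - 0.1 * math.sin(3.0 * math.pi * x1)

# fine grid over x1; x2 = 0 is always optimal because x2 enters only as x2^2
X1 = [i / 1000 for i in range(1001)]

front_x1 = []
for k in range(101):                      # w1 = 0.00, 0.01, ..., 1.00
    w1 = k / 100
    w2 = 1.0 - w1
    best = min(X1, key=lambda x1: w1 * f1(x1, 0.0) + w2 * f2(x1, 0.0))
    front_x1.append(best)

# the minimizers avoid the concave stretch of the front
# (roughly x1 in (1/3, 2/3)), exactly as seen with SQP in the paper
```

The weighted sum is concave in x1 wherever the front is non-convex, so its minimum can never fall strictly
inside that region, no matter how finely the weights are swept.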




FIG. 10A                                                               FIG. 10B

    FIG. 10A compares Pareto optimal points found by the MGP and SQP optimization algorithms for the
benchmark (9). MGP found 153 Pareto optimal points out of 271 model evaluations; SQP found
107 Pareto optimal points out of 1667 model evaluations.

      FIG. 10B compares Pareto optimal points found by the MGE and SQP optimization algorithms for the
    benchmark (9). The MGE algorithm found 173 Pareto optimal points out of 700 model evaluations.

      As can be seen from FIG. 10A-10B, a non-convex Pareto frontier is a significant issue for the SQP algorithm,
but creates no difficulties for the MGP and MGE algorithms. Both MGP and MGE covered the entire Pareto
frontier evenly and completely.
      The weighted sum method substitutes the single-objective optimization task (10) for the multi-objective
optimization task (9). However, task (10) is not equivalent to task (9), and has a different set of
optimal solutions, visualized in FIG. 10: blue points on the diagrams represent a solution of task (10), and
magenta points represent a solution of task (9).
      The Multi-Gradient Analysis (MGA) technique employed by both the MGE and MGP algorithms resolves the issues
created by scalarization techniques. MGA solves a multi-objective optimization task as it is, without substituting
a utility function such as U = w1 f1 + w2 f2, used in this example. At the same time, MGA retains the benefits of
gradient-based techniques, such as fast convergence and high accuracy.
      MGA is also much simpler than scalarization techniques. MGA determines a direction of simultaneous
improvement for all objective functions, and steps in this direction from any given point. In contrast to the weighted
sum method, MGA requires no additional logic on top of the SQP algorithm for varying the weights of
the utility function throughout the optimization process. It simply takes any given point, and makes a step that improves
the point with respect to all objectives. Thus, MGA can be used as a building block for any kind of
multi-objective optimization algorithm. In particular, MGA has been used to design the two pure gradient-based
optimization algorithms MGE and MGP discussed in this paper, and two hybrid optimization algorithms, HMGE
and HMGP, which combine GA and gradient techniques.

                        6. Dynamically Dimensioned Response Surface Method
     Dynamically Dimensioned Response Surface Method (DDRSM) is a new method for estimating gradients that
is equally efficient for low-dimensional and high-dimensional tasks. DDRSM (patent pending) requires just 5-7
model evaluations to estimate gradients, regardless of task dimension.

   eArtius DDRSM vs. Traditional RSM Table 2 shows the most important aspects of Response Surface
Methods (RSM), and compares traditional RSM with eArtius DDRSM.




Table 2 Comparison of traditional response surface methods with the DDRSM approach

RSM Aspect                     | Traditional RSM                          | eArtius DDRSM
-------------------------------|------------------------------------------|------------------------------------------
Purpose                        | Optimize fast surrogate functions        | Quick gradient estimation for direct
                               | instead of computationally expensive     | optimization of computationally expensive
                               | simulation models                        | simulation models
Approximation type             | Global approximation                     | Local approximation
Domain                         | Entire design space                      | A small sub-region
Use of surrogate functions     | Optimization in the entire design space  | Gradient estimation at a single point
Accuracy requirements          | High                                     | Low
Number of sample points        | Grows exponentially with task dimension  | 5-7 sample points regardless of task
to build approximations        |                                          | dimension
Time required to build an      | Minutes to hours                         | Milliseconds
approximation                  |                                          |
Task dimension limitations     | 30-50 design variables                   | Up to 5,000 design variables
Sensitivity analysis           | Required to reduce task dimension        | Not required

     As follows from Table 2, the most common use of response surface methods is to create global approximations
based on DOE sample points, and then optimize the resulting surrogate models. This approach requires maintaining
a high level of accuracy of the approximating surrogate function over the entire design space, which in turn requires a
large number of sample points.
     In contrast, DDRSM builds local approximations in a small sub-region around a given point, and uses
them to estimate gradients at that point. This relaxes the accuracy requirements for the approximating models,
because DDRSM does not have to maintain a high level of accuracy over the entire design space.
     There is a fundamental problem common to all response surface methods, known as the “curse of
dimensionality” [2]: the exponential increase in volume associated with adding extra dimensions to a design
space [2], which in turn requires an exponential increase in the number of sample points to maintain the same
level of accuracy for response surface models.
     For instance, suppose we use 2^5 = 32 sample points to build an RSM model for 5 design variables, and then
increase the number of design variables from 5 to 20. We now need 2^20 = 1,048,576 sample points to maintain the
same level of accuracy for the RSM model. In practice, just 100-300 sample points are used to build such RSM
models, which degrades the quality of the optimization results obtained by optimizing them.
     This is a strong limitation of all known response surface approaches. It forces engineers to artificially reduce
the dimension of an optimization task by assigning constant values to most of the design variables.
     DDRSM resolves the curse of dimensionality in the following way.
     DDRSM is based on the realistic assumption that most real-life design problems have a few significant design
variables, while the rest are not significant. Based on this assumption, DDRSM estimates the most
significant projections of the gradients of all output variables on each optimization step.
     To achieve this, DDRSM generates 5-7 sample points in the current sub-region, and uses the points to
recognize the most significant design variables for each objective function. DDRSM then builds local
approximations for all output variables, which are used to estimate the gradients.
     Since an approximation does not include non-significant variables, the estimated gradient has non-zero
projections only for the significant variables; all other projections are equal to zero. Ignoring
non-significant variables slightly reduces the gradient's accuracy, but allows gradients to be estimated at the cost
of 5-7 evaluations for tasks of practically any dimension.
     DDRSM recognizes the most significant design variables for each output variable (objective functions and
constraints) separately. Thus, each output variable has its own list of significant variables to be included in its
approximating function. DDRSM also re-recognizes the significant variables on each optimization step, every
time gradients need to be estimated. This is important because the topology of objective functions and constraints can
differ across the design space, and specific topology details can be associated with specific design variables.
     In other words, DDRSM dynamically reduces the task dimension in each sub-region, and does so independently
for each output variable by ignoring non-significant design variables. The same variable can be critically important
for one of the objective functions in the current sub-region, and insignificant for other
objective functions and constraints. Later in the optimization process, in a different sub-region, the topology of an
output variable can change, and DDRSM will create another list of significant design variables corresponding to
the variable's topology in the current sub-region of the search space. Thus, dynamic use of DDRSM on each


optimization step makes it adaptive to changes in function topology, and increases the accuracy of the
gradient estimates.
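
     The screening-plus-local-fit idea can be sketched as follows. This is a toy illustration under stated
assumptions, not the patented DDRSM algorithm: here variables are ranked by the magnitude of their sample
correlation with the response, and a local linear model is fitted on the top-ranked variables only (all names
and parameter choices are illustrative). With so few samples the screening is statistically noisy, which is the
accuracy trade-off noted above.

```python
import numpy as np

def ddrsm_gradient(f, x0, radius=0.05, n_samples=7, k_significant=3, rng=None):
    """Toy DDRSM-style gradient estimate: sample a few points in a local
    sub-region, screen for the most significant variables, and fit a
    local linear model on those variables only."""
    rng = np.random.default_rng(rng)
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    # 5-7 sample points in a small sub-region around x0, regardless of n
    X = x0 + rng.uniform(-radius, radius, size=(n_samples, n))
    y = np.array([f(x) for x in X])
    dX = X - X.mean(axis=0)
    dy = y - y.mean()
    # screen variables by |sample correlation| with the response
    scores = np.abs(dX.T @ dy)
    sig = np.argsort(scores)[-k_significant:]
    # local linear model  y ~ c + g . x  on the significant variables only
    A = np.column_stack([np.ones(n_samples), X[:, sig]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    grad = np.zeros(n)          # non-significant projections stay zero
    grad[sig] = coef[1:]
    return grad
```

The returned gradient is sparse by construction: only the screened variables receive non-zero projections,
mirroring the behavior described in the text.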
      DDRSM combines elements of RSM and sensitivity analysis. Thus, it makes sense to compare DDRSM to the
traditional sensitivity analysis approach.
      DDRSM vs. Traditional Sensitivity Analysis The most popular sensitivity analysis tools are designed to be
used before starting an optimization process. Engineers are thus forced to determine a single static list of significant
variables for all objective and constraint functions, based on their variation over the entire design space. After the
sensitivity analysis is completed, all non-significant design variables are fixed at constant values and never change
during the optimization process.
      This approach gives satisfactory results for tasks with a small number of output variables, but runs into
difficulties when the number of constraint and objective functions is large.
      Generally speaking, each output variable has its own topology, its own level of non-linearity, and its own list of
significant variables. The same design variable can be significant for some output variables and
non-significant for others. The list of significant variables also depends on the location of the current sub-region. Thus,
it is difficult or even impossible to determine a list of design variables that are equally significant for dozens or
hundreds of output variables. Traditional sensitivity analysis also requires too many sample points when the number
of design variables is large, which reduces the usefulness of the approach for high-dimensional tasks.
      DDRSM eliminates these issues because it performs sensitivity analysis for each
output variable independently, every time gradients need to be estimated. Thus, DDRSM takes into account the
specific character of each output variable in general, and its local topology in particular. DDRSM is also equally
efficient with dozens or thousands of design variables.

    Implementation of DDRSM The following MGA-DDRSM pseudocode shows the basic elements of an MGA
optimization step when the DDRSM approach is used to estimate gradients:
   1  Begin
   2  Input initial point X*.
   3  Create a sub-region centered at X*.
   4  Generate and evaluate 5-7 sample points in the sub-region.
   5  Determine the most significant design variables for each objective function.
   6  Create an approximation for each objective function based only on its most
significant design variables.
   7  Use the approximations to evaluate the criteria gradients at X*.
   8  Determine the ASI for all criteria.
   9  Determine the direction of the next step.
   10 Determine the length of the step.
   11 Perform the step, and evaluate the new point X' belonging to the ASI.
   12 If X' dominates X*, report X' as an improved point and go to 14.
   13 If X' does not dominate X*, declare X* a Pareto optimal point.
   14 End
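
   Steps 7-11 above can be sketched in code. The direction rule used here (the negative sum of normalized
objective gradients, with backtracking on the step length) is one simple way to step into the ASI and is an
assumption of this sketch, not necessarily the rule used by eArtius; all names are illustrative:

```python
import numpy as np

def numeric_grad(f, x, h=1e-6):
    """Central-difference gradient of a scalar function f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def mga_step(objectives, x, step=0.5, shrink=0.5, max_tries=20):
    """One multi-gradient step toward the area of simultaneous improvement."""
    x = np.asarray(x, dtype=float)
    grads = [numeric_grad(f, x) for f in objectives]
    # negative sum of normalized gradients: a descent direction for every
    # objective whenever the gradients do not directly oppose each other
    d = -sum(g / (np.linalg.norm(g) + 1e-12) for g in grads)
    f0 = [f(x) for f in objectives]
    for _ in range(max_tries):
        x_new = x + step * d
        if all(f(x_new) < v for f, v in zip(objectives, f0)):
            return x_new, True          # X' dominates X*: improved point
        step *= shrink                  # backtrack on the step length
    return x, False                     # no improving step: treat X* as Pareto optimal
```

Repeating `mga_step` from an initial design until no improving step exists reproduces the MGE behavior of
approaching the Pareto frontier from a given starting point.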


     The following benchmark problem (11) is intended to demonstrate (a) the high efficiency of the DDRSM approach
to gradient estimation compared with the finite difference method, and (b) the ability of DDRSM to recognize
significant design variables.

     The benchmark ZDT1 (11) has 30 design variables and two objectives, and its Pareto frontier is convex. The global
Pareto-optimal front corresponds to x1 ∈ [0;1], xi = 0, i = 2,…,n. The optimization task formulation is as
follows:


                                        Minimize F1 = x1
                                        Minimize F2 = g ⋅ [1 − √(F1/g)]                          (11)
                                        g = 1 + 9/(n − 1) ⋅ Σ_{i=2..n} xi;  0 ≤ xi ≤ 1, i = 1,…,n;
                                        n = 30


FIG. 11 shows Pareto optimal points found by MGP algorithm for the benchmark (11). The finite
difference method has been used to estimate gradients. 18 Pareto optimal points were found out of 522 model
                                               evaluations.

     The MGP algorithm started from the initial Pareto optimal point (see FIG. 11), and performed 17 steps along the
Pareto frontier until it reached the frontier's end. FIG. 11 shows the exact global Pareto optimal points found by the
MGP algorithm. The finite difference method was used in this run to estimate gradients, so MGP had
to spend 31 model evaluations to estimate gradients on each optimization step. In total, MGP found 18 Pareto
optimal points out of 522 model evaluations.

    The distance between the green and red markers along the x10 axis in FIG. 11 (right diagram) indicates the
spacing parameter value (0.0001) of the finite difference equation.
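
    The 31-evaluation cost per step follows from the finite difference scheme itself: a forward-difference
gradient needs one evaluation at the current point plus one per design variable, i.e. n + 1 = 31 evaluations for
n = 30. A minimal sketch (function and variable names are illustrative):

```python
def forward_diff_grad(f, x, h=1e-4):
    """Forward-difference gradient: n + 1 model evaluations for n variables."""
    fx = f(x)                     # one evaluation at the current point
    grad = []
    for i in range(len(x)):       # plus one perturbed evaluation per variable
        xp = list(x)
        xp[i] += h
        grad.append((f(xp) - fx) / h)
    return grad
```

For an expensive simulation model, this per-step cost scales linearly with the task dimension, which is
exactly the overhead DDRSM avoids.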




    FIG. 12 shows Pareto optimal points found by MGP algorithm for the benchmark (11). DDRSM method
    has been used to estimate gradients. 18 Pareto optimal points were found out of 38 model evaluations.

     The MGP algorithm started from the initial Pareto optimal point (see FIG. 12), and performed 17 steps along the
Pareto frontier until it reached the frontier's end. FIG. 12 shows the 18 Pareto optimal points found by the MGP
algorithm. This optimization run used the same algorithm parameters, but the DDRSM method instead of the finite
difference method to estimate gradients. This time MGP spent just 38 model evaluations, and found the same Pareto
optimal solutions with the same accuracy.
     As can be seen in FIG. 12, DDRSM generated a number of points randomly (see the red markers on the left and
right diagrams); these points were used to build the local approximations for estimating gradients.
     Clearly, both methods of gradient estimation allowed MGP to precisely determine the direction of improvement
of the preferred objective F1 on each step, as well as the direction of simultaneous improvement for both objectives. As a
result, the MGP algorithm was able to find, and step along, the global Pareto frontier on each optimization step. All
Pareto optimal points match the conditions x1 ∈ [0;1], xi = 0, i = 2,…,n, which means the optimal
solutions are exact in both cases. However, DDRSM spent 522/38 ≈ 13.7 times fewer model evaluations to find the
same 18 Pareto optimal solutions.
     FIG. 13 shows the conceptual advantage of directed optimization on the Pareto frontier, as performed
by the MGP algorithm, over the traditional multi-objective optimization approach of the
NSGA-II, AMGA, and Pointer optimization algorithms.




       FIG. 13 shows Pareto optimal points found by four algorithms for the benchmark (11): MGP, NSGA-II,
 AMGA, and Pointer. MGP spent 38 model evaluations, and found 18 Pareto optimal points. NSGA-II
 found 63 first-rank points out of 3500 model evaluations. AMGA and Pointer found 19 and 195 first-rank
                          points, respectively, out of 5000 model evaluations.

     As follows from FIG. 13, the NSGA-II and Pointer algorithms were able to approach the global Pareto
frontier after 3500 and 5000 model evaluations, respectively. However, they were not able to find exact
Pareto optimal points, or to cover the entire Pareto frontier. The AMGA algorithm was not able even to
approach the global Pareto frontier after 5000 model evaluations.

     The following benchmark problem (12) is challenging because it has dozens of local Pareto frontiers and five
disjoint segments of the global Pareto frontier. The results of the MGP algorithm on this benchmark are compared
with those of state-of-the-art commercial multi-objective optimization algorithms developed by a leading design
optimization company: Pointer, NSGA-II, and AMGA.
     Since the benchmark (12) has just 10 design variables and 2 objectives, the entire design space and objective
space can be visualized on just 6 scatter plots. Thus, we can see the optimization search pattern of each algorithm,
and compare directed optimization on the Pareto frontier with the traditional optimization approach.


      Minimize F1 = x1;
      Minimize F2 = g ⋅ h                                                                        (12)
      where
      g = 1 + 10(n − 1) + (x2² + x3² + … + xn²) − 10 ⋅ [cos(4π x2) + cos(4π x3) + … + cos(4π xn)], n = 10;
      h = 1 − √(F1/g) − (F1/g) ⋅ sin(10 π F1);
      X ∈ [0;1]


    The following FIG.14 illustrates optimization results for the benchmark problem (12).




FIG. 14 shows all points evaluated by MGP algorithm and by three other multi-objective optimization
  algorithms Pointer, NSGA-II, and AMGA. MGP has used DDRSM for gradients estimation. It spent 185
evaluations, and has covered all five segments of the global Pareto frontier. Each alternative algorithm spent
   2000 model evaluations with much worse results: NSGA-II was able to approach 3 of 5 segments on the
         global Pareto frontier. AMGA and Pointer have not found a single Pareto optimal solution.


     The global Pareto frontier for the benchmark (12) lies on the straight line {x1 = 0…1, x2 = x3 = … = x10 = 0}. It
was critical for the MGP algorithm to recognize that x1 is the most significant design variable. This was done by
DDRSM, and x1 was included in every local approximation model used for gradient estimation. As a result, MGP
stepped along the x1 axis from 0 to 1, and covered all five segments of the global Pareto frontier (see FIG. 14).
DDRSM also helped recognize that all other design variables are equal to zero at Pareto optimal points; one can see
the green points at the origin in the charts of FIG. 14. Thus, in contrast to the other algorithms, MGP performed all
model evaluations in a small area around the Pareto frontier in the design space (see the green and red markers in
FIG. 14), which improved the accuracy and efficiency of the algorithm.

                                    7. eArtius Design Optimization Tool
         eArtius has developed a commercial product, Pareto Explorer, which is a multi-objective optimization and
     design environment combining a process integration platform with sophisticated optimization
     algorithms and powerful post-processing capabilities.
          Pareto Explorer 2010 implements the optimization algorithms described above, and provides the complete
     set of functionality necessary for a design optimization tool:
               •       An intuitive and easy to use Graphical User Interface; an advanced IDE paradigm similar to
          Microsoft Developer Studio 2010 (see FIG.22);
              •       Interactive 2D/3D graphics based on OpenGL technology;
              •       Graphical visualization of optimization process in real time;
              •       Process integration functionality;
              •       Statistical Analysis tools embedded in the system;
              •       Design of Experiments techniques;
              •       Response Surface Modeling;
              •       Pre- and post-processing of design information;
              •       Data import and export.

     All the diagrams included in this paper were generated by Pareto Explorer 2010. They give an idea of the
quality of data visualization, the ability to compare different datasets, and the flexible control over diagram
appearance.




FIG. 22 shows a screenshot of Pareto Explorer’s main window.

    In addition to the design optimization environment implemented in Pareto Explorer, eArtius provides all the
described algorithms as plug-ins for Noesis OPTIMUS, ESTECO modeFrontier, and Simulia Isight design
optimization environments.
    Additional information about eArtius products and design optimization technology can be found at
www.eartius.com.

                                                  8. Conclusion
     Novel gradient-based algorithms MGE and MGP for multi-objective optimization have been developed at
eArtius. Both algorithms utilize the ability of MGA to find a direction of simultaneous improvement for all
objective functions, and provide superior efficiency, with 2-5 evaluations per Pareto optimal point.
     Both algorithms allow the user to decrease the volume of search space by determining an area of interest, and
reducing in this way the number of necessary model evaluations by orders of magnitude.
     The MGE algorithm starts from a given design, and takes just 15-30 model evaluations to find an improved
design with respect to all objectives.
     The MGP algorithm goes further: it uses the Pareto frontier as a search space, and performs directed optimization
on the Pareto frontier within the user's area of interest, determined by a selection of preferred objectives. Avoiding a
search over the entire design space, and searching only in the area of interest directly on the Pareto frontier,
dramatically reduces the required number of model evaluations. MGP needs just 2-5 evaluations per step, and each
step brings a few new Pareto optimal points.
     Both the MGE and MGP algorithms are the best choice for multi-objective optimization of computationally
expensive simulation models that take hours or even days of computational time for a single evaluation.
     A new response surface method, DDRSM, has also been developed. DDRSM builds local approximations of the
output variables on each optimization step, and estimates gradients using just 5-7 evaluations. DDRSM dynamically
recognizes the most significant design variables for each objective and constraint, and filters out non-significant
variables. This overcomes the famous “curse of dimensionality” problem: the efficiency of the MGE and MGP
algorithms does not depend on the number of design variables. eArtius optimization algorithms are equally efficient
on low-dimensional and high-dimensional (up to 5000 design variables) optimization tasks. DDRSM also
eliminates the need for traditional response surface and sensitivity analysis methods, which simplifies the
design optimization process and saves engineers' time.


References
1. Marler, R. T.; Arora, J. S. 2004: Survey of Multi-objective Optimization Methods for Engineering. Structural and
Multidisciplinary Optimization 26, 6, 369-395.
2. Bellman, R. E. 1957: Dynamic Programming. Princeton, NJ: Princeton University Press.
3. Simpson, T. W.; Booker, A. J.; Ghosh, D.; Giunta, A. A.; Koch, P. N.; Yang, R.-J. 2004: Approximation Methods
in Multidisciplinary Analysis and Optimization: A Panel Discussion. Structural and Multidisciplinary Optimization
27, 5, 302-313.
4. Sevastyanov, V.; Shaposhnikov, O. 2005: Gradient-based Methods for Multi-Objective Optimization. Patent
Application Serial No. 11/116,503, filed April 28, 2005.
5. Vanderplaats, G. N. 2005: Numerical Optimization Techniques for Engineering Design: With Applications,
Fourth Edition. Vanderplaats Research & Development, Inc.
6. Zadeh, L. A. 1963: Optimality and Non-Scalar-Valued Performance Criteria. IEEE Transactions on Automatic
Control AC-8, 59-60.
7. Zionts, S. 1988: Multiple Criteria Mathematical Programming: An Updated Overview and Several Approaches.
In: Mitra, G. (ed.) Mathematical Models for Decision Support, 135-167. Berlin: Springer-Verlag.
8. Das, I.; Dennis, J. E. 1997: A Closer Look at Drawbacks of Minimizing Weighted Sums of Objectives for Pareto
Set Generation in Multicriteria Optimization Problems. Structural Optimization 14, 63-69.
9. Messac, A.; Sukam, C. P.; Melachrinoudis, E. 2000a: Aggregate Objective Functions and Pareto Frontiers:
Required Relationships and Practical Implications. Optimization and Engineering 1, 171-188.
10. Messac, A.; Sundararaj, G., J.; Tappeta, R., V.; Renaud, J., E. 2000b: Ability of Objective Functions to Generate
Points on Nonconvex Pareto Frontiers. AIAA Journal 38, 1084-1091.




 
Sca a sine cosine algorithm for solving optimization problems
Sca a sine cosine algorithm for solving optimization problemsSca a sine cosine algorithm for solving optimization problems
Sca a sine cosine algorithm for solving optimization problems
 
AIAA-Aviation-2015-Mehmani
AIAA-Aviation-2015-MehmaniAIAA-Aviation-2015-Mehmani
AIAA-Aviation-2015-Mehmani
 
SHORT LISTING LIKELY IMAGES USING PROPOSED MODIFIED-SIFT TOGETHER WITH CONVEN...
SHORT LISTING LIKELY IMAGES USING PROPOSED MODIFIED-SIFT TOGETHER WITH CONVEN...SHORT LISTING LIKELY IMAGES USING PROPOSED MODIFIED-SIFT TOGETHER WITH CONVEN...
SHORT LISTING LIKELY IMAGES USING PROPOSED MODIFIED-SIFT TOGETHER WITH CONVEN...
 
Chapter 18,19
Chapter 18,19Chapter 18,19
Chapter 18,19
 
CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...
CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...
CHECKING BEHAVIOURAL COMPATIBILITY IN SERVICE COMPOSITION WITH GRAPH TRANSFOR...
 
9 coldengine
9 coldengine9 coldengine
9 coldengine
 
AMS_Aviation_2014_Ali
AMS_Aviation_2014_AliAMS_Aviation_2014_Ali
AMS_Aviation_2014_Ali
 
AIROPT: A Multi-Objective Evolutionary Algorithm based Aerodynamic Shape Opti...
AIROPT: A Multi-Objective Evolutionary Algorithm based Aerodynamic Shape Opti...AIROPT: A Multi-Objective Evolutionary Algorithm based Aerodynamic Shape Opti...
AIROPT: A Multi-Objective Evolutionary Algorithm based Aerodynamic Shape Opti...
 
Artificial Intelligence based optimization of weld bead geometry in laser wel...
Artificial Intelligence based optimization of weld bead geometry in laser wel...Artificial Intelligence based optimization of weld bead geometry in laser wel...
Artificial Intelligence based optimization of weld bead geometry in laser wel...
 
Parameter Estimation User Guide
Parameter Estimation User GuideParameter Estimation User Guide
Parameter Estimation User Guide
 

Destaque

CSBP: A Fast Circuit Similarity-Based Placement for FPGA Incremental Design a...
CSBP: A Fast Circuit Similarity-Based Placement for FPGA Incremental Design a...CSBP: A Fast Circuit Similarity-Based Placement for FPGA Incremental Design a...
CSBP: A Fast Circuit Similarity-Based Placement for FPGA Incremental Design a...Xiaoyu Shi
 
The Multi-Objective Genetic Algorithm Based Techniques for Intrusion Detection
The Multi-Objective Genetic Algorithm Based Techniques for Intrusion DetectionThe Multi-Objective Genetic Algorithm Based Techniques for Intrusion Detection
The Multi-Objective Genetic Algorithm Based Techniques for Intrusion Detectionijcsse
 
Gary Yen: "Multi-objective Optimization and Performance Metrics Ensemble"
Gary Yen: "Multi-objective Optimization and Performance Metrics Ensemble" Gary Yen: "Multi-objective Optimization and Performance Metrics Ensemble"
Gary Yen: "Multi-objective Optimization and Performance Metrics Ensemble" ieee_cis_cyprus
 
Multi-Objective Optimization in Rule-based Design Space Exploration (ASE 2014)
Multi-Objective Optimization in Rule-based Design Space Exploration (ASE 2014)Multi-Objective Optimization in Rule-based Design Space Exploration (ASE 2014)
Multi-Objective Optimization in Rule-based Design Space Exploration (ASE 2014)hani_abdeen
 
Multi-Objective Evolutionary Algorithms
Multi-Objective Evolutionary AlgorithmsMulti-Objective Evolutionary Algorithms
Multi-Objective Evolutionary AlgorithmsSong Gao
 
Method of solving multi objective optimization problem in the presence of unc...
Method of solving multi objective optimization problem in the presence of unc...Method of solving multi objective optimization problem in the presence of unc...
Method of solving multi objective optimization problem in the presence of unc...eSAT Journals
 
Multi objective optimization and Benchmark functions result
Multi objective optimization and Benchmark functions resultMulti objective optimization and Benchmark functions result
Multi objective optimization and Benchmark functions resultPiyush Agarwal
 
Cyber infrastructure in engineering design
Cyber infrastructure in engineering designCyber infrastructure in engineering design
Cyber infrastructure in engineering designAmogh Mundhekar
 
Pareto optimal
Pareto optimal    Pareto optimal
Pareto optimal rmpas
 
Multiobjective optimization and trade offs using pareto optimality
Multiobjective optimization and trade offs using pareto optimalityMultiobjective optimization and trade offs using pareto optimality
Multiobjective optimization and trade offs using pareto optimalityAmogh Mundhekar
 
Multi Objective Optimization
Multi Objective OptimizationMulti Objective Optimization
Multi Objective OptimizationNawroz University
 

Destaque (11)

CSBP: A Fast Circuit Similarity-Based Placement for FPGA Incremental Design a...
CSBP: A Fast Circuit Similarity-Based Placement for FPGA Incremental Design a...CSBP: A Fast Circuit Similarity-Based Placement for FPGA Incremental Design a...
CSBP: A Fast Circuit Similarity-Based Placement for FPGA Incremental Design a...
 
The Multi-Objective Genetic Algorithm Based Techniques for Intrusion Detection
The Multi-Objective Genetic Algorithm Based Techniques for Intrusion DetectionThe Multi-Objective Genetic Algorithm Based Techniques for Intrusion Detection
The Multi-Objective Genetic Algorithm Based Techniques for Intrusion Detection
 
Gary Yen: "Multi-objective Optimization and Performance Metrics Ensemble"
Gary Yen: "Multi-objective Optimization and Performance Metrics Ensemble" Gary Yen: "Multi-objective Optimization and Performance Metrics Ensemble"
Gary Yen: "Multi-objective Optimization and Performance Metrics Ensemble"
 
Multi-Objective Optimization in Rule-based Design Space Exploration (ASE 2014)
Multi-Objective Optimization in Rule-based Design Space Exploration (ASE 2014)Multi-Objective Optimization in Rule-based Design Space Exploration (ASE 2014)
Multi-Objective Optimization in Rule-based Design Space Exploration (ASE 2014)
 
Multi-Objective Evolutionary Algorithms
Multi-Objective Evolutionary AlgorithmsMulti-Objective Evolutionary Algorithms
Multi-Objective Evolutionary Algorithms
 
Method of solving multi objective optimization problem in the presence of unc...
Method of solving multi objective optimization problem in the presence of unc...Method of solving multi objective optimization problem in the presence of unc...
Method of solving multi objective optimization problem in the presence of unc...
 
Multi objective optimization and Benchmark functions result
Multi objective optimization and Benchmark functions resultMulti objective optimization and Benchmark functions result
Multi objective optimization and Benchmark functions result
 
Cyber infrastructure in engineering design
Cyber infrastructure in engineering designCyber infrastructure in engineering design
Cyber infrastructure in engineering design
 
Pareto optimal
Pareto optimal    Pareto optimal
Pareto optimal
 
Multiobjective optimization and trade offs using pareto optimality
Multiobjective optimization and trade offs using pareto optimalityMultiobjective optimization and trade offs using pareto optimality
Multiobjective optimization and trade offs using pareto optimality
 
Multi Objective Optimization
Multi Objective OptimizationMulti Objective Optimization
Multi Objective Optimization
 

Semelhante a Gradient-Based Multi-Objective Optimization Technology

Hybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective Optimization
Hybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective OptimizationHybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective Optimization
Hybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective OptimizationeArtius, Inc.
 
On the Performance of the Pareto Set Pursuing (PSP) Method for Mixed-Variable...
On the Performance of the Pareto Set Pursuing (PSP) Method for Mixed-Variable...On the Performance of the Pareto Set Pursuing (PSP) Method for Mixed-Variable...
On the Performance of the Pareto Set Pursuing (PSP) Method for Mixed-Variable...Amir Ziai
 
Directed Optimization on Pareto Frontier
Directed Optimization on Pareto FrontierDirected Optimization on Pareto Frontier
Directed Optimization on Pareto FrontiereArtius, Inc.
 
A robust multi criteria optimization approach
A robust multi criteria optimization approachA robust multi criteria optimization approach
A robust multi criteria optimization approachPhuong Dx
 
Methods of Optimization in Machine Learning
Methods of Optimization in Machine LearningMethods of Optimization in Machine Learning
Methods of Optimization in Machine LearningKnoldus Inc.
 
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATA
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATABINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATA
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATAijejournal
 
Linear programming models - U2.pptx
Linear programming models - U2.pptxLinear programming models - U2.pptx
Linear programming models - U2.pptxMariaBurgos55
 
Fast optimization intevacoct6_3final
Fast optimization intevacoct6_3finalFast optimization intevacoct6_3final
Fast optimization intevacoct6_3finaleArtius, Inc.
 
Efficient evaluation of flatness error from Coordinate Measurement Data using...
Efficient evaluation of flatness error from Coordinate Measurement Data using...Efficient evaluation of flatness error from Coordinate Measurement Data using...
Efficient evaluation of flatness error from Coordinate Measurement Data using...Ali Shahed
 
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...ijaia
 
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...gerogepatton
 
A fast non dominated sorting guided genetic algorithm for multi objective pow...
A fast non dominated sorting guided genetic algorithm for multi objective pow...A fast non dominated sorting guided genetic algorithm for multi objective pow...
A fast non dominated sorting guided genetic algorithm for multi objective pow...Pvrtechnologies Nellore
 
Adaptive Bayesian contextual hyperband: A novel hyperparameter optimization a...
Adaptive Bayesian contextual hyperband: A novel hyperparameter optimization a...Adaptive Bayesian contextual hyperband: A novel hyperparameter optimization a...
Adaptive Bayesian contextual hyperband: A novel hyperparameter optimization a...IAESIJAI
 
Parallel Artificial Bee Colony Algorithm
Parallel Artificial Bee Colony AlgorithmParallel Artificial Bee Colony Algorithm
Parallel Artificial Bee Colony AlgorithmSameer Raghuram
 
GRADIENT OMISSIVE DESCENT IS A MINIMIZATION ALGORITHM
GRADIENT OMISSIVE DESCENT IS A MINIMIZATION ALGORITHMGRADIENT OMISSIVE DESCENT IS A MINIMIZATION ALGORITHM
GRADIENT OMISSIVE DESCENT IS A MINIMIZATION ALGORITHMijscai
 
Notion of an algorithm
Notion of an algorithmNotion of an algorithm
Notion of an algorithmNisha Soms
 
Multi objective predictive control a solution using metaheuristics
Multi objective predictive control  a solution using metaheuristicsMulti objective predictive control  a solution using metaheuristics
Multi objective predictive control a solution using metaheuristicsijcsit
 

Semelhante a Gradient-Based Multi-Objective Optimization Technology (20)

Hybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective Optimization
Hybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective OptimizationHybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective Optimization
Hybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective Optimization
 
On the Performance of the Pareto Set Pursuing (PSP) Method for Mixed-Variable...
On the Performance of the Pareto Set Pursuing (PSP) Method for Mixed-Variable...On the Performance of the Pareto Set Pursuing (PSP) Method for Mixed-Variable...
On the Performance of the Pareto Set Pursuing (PSP) Method for Mixed-Variable...
 
Directed Optimization on Pareto Frontier
Directed Optimization on Pareto FrontierDirected Optimization on Pareto Frontier
Directed Optimization on Pareto Frontier
 
Ds33717725
Ds33717725Ds33717725
Ds33717725
 
Ds33717725
Ds33717725Ds33717725
Ds33717725
 
A robust multi criteria optimization approach
A robust multi criteria optimization approachA robust multi criteria optimization approach
A robust multi criteria optimization approach
 
Methods of Optimization in Machine Learning
Methods of Optimization in Machine LearningMethods of Optimization in Machine Learning
Methods of Optimization in Machine Learning
 
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATA
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATABINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATA
BINARY SINE COSINE ALGORITHMS FOR FEATURE SELECTION FROM MEDICAL DATA
 
Linear programming models - U2.pptx
Linear programming models - U2.pptxLinear programming models - U2.pptx
Linear programming models - U2.pptx
 
Fast optimization intevacoct6_3final
Fast optimization intevacoct6_3finalFast optimization intevacoct6_3final
Fast optimization intevacoct6_3final
 
Dj4201737746
Dj4201737746Dj4201737746
Dj4201737746
 
Efficient evaluation of flatness error from Coordinate Measurement Data using...
Efficient evaluation of flatness error from Coordinate Measurement Data using...Efficient evaluation of flatness error from Coordinate Measurement Data using...
Efficient evaluation of flatness error from Coordinate Measurement Data using...
 
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
 
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
A BI-OBJECTIVE MODEL FOR SVM WITH AN INTERACTIVE PROCEDURE TO IDENTIFY THE BE...
 
A fast non dominated sorting guided genetic algorithm for multi objective pow...
A fast non dominated sorting guided genetic algorithm for multi objective pow...A fast non dominated sorting guided genetic algorithm for multi objective pow...
A fast non dominated sorting guided genetic algorithm for multi objective pow...
 
Adaptive Bayesian contextual hyperband: A novel hyperparameter optimization a...
Adaptive Bayesian contextual hyperband: A novel hyperparameter optimization a...Adaptive Bayesian contextual hyperband: A novel hyperparameter optimization a...
Adaptive Bayesian contextual hyperband: A novel hyperparameter optimization a...
 
Parallel Artificial Bee Colony Algorithm
Parallel Artificial Bee Colony AlgorithmParallel Artificial Bee Colony Algorithm
Parallel Artificial Bee Colony Algorithm
 
GRADIENT OMISSIVE DESCENT IS A MINIMIZATION ALGORITHM
GRADIENT OMISSIVE DESCENT IS A MINIMIZATION ALGORITHMGRADIENT OMISSIVE DESCENT IS A MINIMIZATION ALGORITHM
GRADIENT OMISSIVE DESCENT IS A MINIMIZATION ALGORITHM
 
Notion of an algorithm
Notion of an algorithmNotion of an algorithm
Notion of an algorithm
 
Multi objective predictive control a solution using metaheuristics
Multi objective predictive control  a solution using metaheuristicsMulti objective predictive control  a solution using metaheuristics
Multi objective predictive control a solution using metaheuristics
 

Último

How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingEdi Saputra
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesBoston Institute of Analytics
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...Neo4j
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodJuan lago vázquez
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Principled Technologies
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherRemote DBA Services
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsRoshan Dwivedi
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUK Journal
 
Top 10 Most Downloaded Games on Play Store in 2024
Top 10 Most Downloaded Games on Play Store in 2024Top 10 Most Downloaded Games on Play Store in 2024
Top 10 Most Downloaded Games on Play Store in 2024SynarionITSolutions
 

Último (20)

How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation Strategies
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
Top 10 Most Downloaded Games on Play Store in 2024
Top 10 Most Downloaded Games on Play Store in 2024Top 10 Most Downloaded Games on Play Store in 2024
Top 10 Most Downloaded Games on Play Store in 2024
 

Gradient-Based Multi-Objective Optimization Technology

multi-objective optimization caused the invention of such artificial scalarization techniques. The existing weighted-sum approaches that are widely used for design optimization do not work well with non-convex Pareto surfaces.
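The scalarization scheme described above can be sketched as follows. This is a minimal illustration on a toy convex bi-objective problem (the functions and the grid-search solver are my own, not from the paper): each choice of weights turns the multi-objective task into one single-objective problem, and sweeping the weights yields a sequence of Pareto optimal designs.

```python
# Weighted-sum scalarization: combine two objectives into one utility and
# solve a sequence of single-objective problems, one per weight vector.
# Illustrative toy problem: minimize f1(x) = x^2 and f2(x) = (x - 2)^2
# over a single design variable x.

def f1(x):
    return x * x

def f2(x):
    return (x - 2.0) ** 2

def solve_weighted_sum(w1, w2, lo=-1.0, hi=3.0, n=4001):
    """Grid-search minimizer of the utility w1*f1 + w2*f2 (one scalar problem)."""
    best_x, best_u = lo, float("inf")
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        u = w1 * f1(x) + w2 * f2(x)
        if u < best_u:
            best_x, best_u = x, u
    return best_x

# Sweep the weights: each solve yields one (approximately) Pareto optimal design.
pareto_points = [solve_weighted_sum(w, 1.0 - w) for w in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(pareto_points)
```

On this convex problem a uniform weight sweep covers the front evenly; on a non-convex Pareto surface the same sweep skips entire regions of the front, which is the weakness noted in the text.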
A uniform distribution of Pareto optimal points cannot be guaranteed even if the weights are varied consistently and continuously. Hence, the resulting Pareto set will be incomplete and inaccurate [1].

The Genetic Algorithm (GA) is one of the major non-scalarization techniques. It combines the use of random numbers with heuristic strategies inspired by evolutionary biology. GAs are computationally intensive and resource-consuming, and do not provide adequate accuracy [1].

In order to overcome the limitations of GAs and scalarization techniques, a new gradient-based technique has been invented at eArtius, Inc. (patented). The technique uses Multi-Gradient Analysis (MGA), and enabled the development of the Multi-Gradient Explorer (MGE) multi-objective optimization algorithm.

Further research was inspired by two fundamental issues typical of traditional multi-objective optimization approaches, and by the rapidly increasing computational effort required to perform optimization: (a) the necessity to search for optimal solutions in the entire design space, while Pareto optimal points can only be found on the Pareto frontier; and (b) the necessity to cover the entire Pareto frontier with a large number of Pareto optimal designs, while the user needs just a few trade-offs in his area of interest on the Pareto frontier. These two issues caused the use of brute-force methods, such as parallelization of algorithms, in most prior art multi-objective optimization technologies. However, even brute-force methods cannot resolve the fundamental problems related to the well-known "curse of dimensionality" phenomenon. According to [2], adding extra dimensions to the design space requires an exponential increase in the number of Pareto optimal points to maintain the same quality of approximation of the Pareto frontier.

The new Multi-Gradient Pathfinder (MGP) algorithm has been invented at eArtius (patent pending).
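The exponential growth referenced above can be made concrete with a standard covering argument (this back-of-the-envelope formula is mine, not the paper's): covering a unit hypercube at a fixed resolution h requires on the order of (1/h)^d points, so each extra dimension multiplies the required point count.

```python
# Toy illustration of the "curse of dimensionality": the number of grid
# points needed to cover a unit hypercube at resolution 0.1 grows
# exponentially with the dimension d.

def points_needed(dim, resolution=0.1):
    """Grid points needed to cover a unit hypercube of the given dimension."""
    per_axis = int(round(1.0 / resolution)) + 1   # 11 points per axis at h = 0.1
    return per_axis ** dim

for d in (1, 2, 3, 5, 10):
    print(d, points_needed(d))   # 11, 121, 1331, ... grows exponentially
```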
1 Chief Executive Officer
American Institute of Aeronautics and Astronautics

MGP uses the Pareto frontier as a search space, and performs directed optimization along the Pareto frontier in the area of interest
determined by the user, which increases algorithm efficiency by orders of magnitude, and gives the user more control over the optimization process.

Another important area for improvements in optimization technology is related to response surface methods, which are commonly used in engineering design to minimize the expense of running computationally expensive analyses and simulations. All known approximation techniques, including Response Surface Methodology, Kriging models, etc., are limited to 40-60 design variables [3] because of the same "curse of dimensionality" phenomenon. According to [2], adding extra dimensions to the design space requires an exponential increase in the number of sample points necessary to build an adequate global surrogate model. A new response surface method named Dynamically Dimensioned Response Surface Method (DDRSM) has been invented at eArtius (patent pending), which successfully avoids the "curse of dimensionality" limitations, and efficiently works with up to thousands of design variables without increasing the number of sample points.

The new eArtius design optimization technology comprises the optimization algorithms MGE, MGP, HMGE, and HMGP, and the response surface method DDRSM, all implemented in the eArtius design optimization tool Pareto Explorer.

2. Multi-Gradient Analysis

Any traditional gradient-based optimization method comprises sequential steps from an initial point to an optimal point. Each step improves the current point with respect to the objective function. The most important element of such an algorithm is determining the direction of the next step. Traditional gradient-based algorithms use the fact that the gradient of the objective function indicates the direction of the steepest ascent of the objective function. But what if several objective functions need to be optimized? In this case we need to find a point improving all objective functions simultaneously.
The following diagrams (see FIG. 1) illustrate graphically how MGA determines the area of simultaneous improvement for all objective functions. It is illustrated for the simplest multi-objective optimization task with two independent variables and two objective functions that need to be maximized.

FIG. 1A illustrates how the gradient G1 and the line L1 (G1 is perpendicular to L1) help to split the sub-region into the area of increased values A1 and the area of decreased values for the first objective function; FIG. 1B similarly illustrates splitting the sub-region for the second objective function; FIG. 1C illustrates that the area of simultaneous increase (ASI) of both objective functions F1 and F2 is equal to the intersection of the areas A1 and A2: A1∩A2.

The main problem of Multi-Gradient Analysis is to find a point X′ ∈ ASI, which guarantees that the point X0 will be improved by the point X′ with respect to all objective functions. MGA is illustrated with two objective functions on FIG. 1, but it works in the same way with any reasonable number of objective functions and an unlimited number of design variables.

The MGA pseudo-code:

1 Begin
2 Input initial point X*.
3 Evaluate criteria gradients on X*.
4 Determine ASI for all criteria.
5 Determine the direction of simultaneous improvement for all objectives for the next step.
6 Determine the length of the step.
7 Perform the step, and evaluate the new point X′ belonging to ASI.
8 If X′ dominates X* then report improved point X′ and go to 10.
9 If X′ does not dominate X* then report X* as a Pareto optimal point.
10 End

MGA can be implemented in a number of different ways. Some of them are discussed in [4]. In fact, the same technique is widely used for constrained gradient-based optimization with a single objective function [5]. However, the technique was never used for multi-objective optimization. Since the MGA technique results in an improved point, it can be used as an element in any multi-objective optimization algorithm. The following two sections discuss two MGA-based multi-objective optimization algorithms.

3. Multi-Gradient Explorer Algorithm

MGE uses an approach conventional in optimization practice. It starts from an initial point, and iterates toward the Pareto frontier until a Pareto optimal point is found. Then it takes another initial point, iterates again, and so on.

The MGE pseudo-code:

1 Begin
2 Generate the required number of initial points X1,…,XN.
3 i=1.
4 Declare current point: Xc = Xi.
5 Apply MGA analysis to Xc for finding a point X′ in ASI.
6 If X′ dominates Xc then Xc = X′ and go to 5.
7 If X′ does not dominate Xc then declare Xc as a Pareto optimal point; i=i+1 and go to 4.
8 Report all the solutions found.
9 End

The MGE algorithm can be used in two modes: (a) improvement of a given initial point, and (b) approximation of the entire Pareto frontier. In mode (a) MGE usually performs about 4-7 steps, and finds several Pareto optimal points improving a given initial design (see FIG. 2). Assuming that the DDRSM response surface method is used for estimating gradients, it usually takes just about 15-30 model evaluations to approach the Pareto frontier regardless of task dimension. Thus, MGE is the best choice for computationally expensive simulation models when covering the entire Pareto frontier is prohibitively expensive. In mode (b) MGE sequentially starts from randomly distributed initial points.
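The MGA step at the core of both pseudo-codes (finding a direction inside the ASI) can be sketched numerically. The snippet below is an illustrative reconstruction, not the eArtius implementation; for minimization, any direction d with grad_i · d < 0 for every objective i lies inside the ASI:

```python
import numpy as np

def mga_direction(gradients):
    """Find a direction in the area of simultaneous improvement (ASI).

    For minimization, a direction d improves objective i when
    dot(grad_i, d) < 0.  Summing the normalized negative gradients is a
    simple heuristic; if the result fails the descent test for some
    objective, no common improvement was found and the current point is
    treated as Pareto optimal (step 9 of the MGA pseudo-code)."""
    d = -sum(g / np.linalg.norm(g) for g in gradients)
    norm = np.linalg.norm(d)
    if norm < 1e-12:
        return None          # gradients cancel out: no ASI direction
    d = d / norm
    if all(np.dot(g, d) < 0 for g in gradients):
        return d
    return None
```

For example, with gradients (1, 0) and (0, 1) the routine returns roughly the diagonal (−0.707, −0.707), which decreases both objectives; with the opposing gradients (1, 0) and (−1, 0) it returns None, signalling that no simultaneous improvement is possible.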
Since the initial points are uniformly distributed in the design space, it is expected that Pareto optimal points found in multiple iterations will cover the entire Pareto frontier (see FIG. 3).

Minimize f1 = x1^2 + (x2 − 1)^2
Minimize f2 = x1^2 + (x2 + 1)^2 + 1      (1)
Minimize f3 = (x1 − 1)^2 + x2^2 + 2
−2 ≤ x1, x2 ≤ 2

Table 1 and FIG. 2 illustrate the MGE algorithm in the mode of improvement of a given initial point.

Table 1 Improvement of a given design by MGE optimization algorithm

                       Evaluation #     f1      f2      f3
Initial Point               1          12.26   5.394   14.05
Pareto Optimal Point        9           3.65   1.38     2.84

As follows from Table 1, the initial point has been significantly improved with respect to all objective functions. The target Pareto optimal point was found after 9 model evaluations. After that, MGE spent 26 additional model evaluations estimating gradients via the DDRSM method, and tried to improve the point #9. MGE was stopped because
further improvement of the point #9 was not possible, and the point was declared Pareto optimal. Next, all evaluated points were compared against each other with respect to all objectives, and all dominated points were declared transitional points. The rest of the points were declared Pareto optimal (see FIG. 2). The majority of the evaluated points from #10 to #35 happened to be Pareto optimal in this optimization run. Thus, the user has 15 Pareto optimal points out of 35 model evaluations.

FIG. 2 shows the results of improvement of a given point by the MGE algorithm. MGE started from the initial point (orange triangle marker on the diagrams), and performed a few steps towards the Pareto frontier; MGE found 15 Pareto optimal points at the price of 35 model evaluations.

The following FIG. 3 illustrates the ability of the MGE algorithm to cover the entire Pareto frontier. In this scenario MGE sequentially starts from randomly distributed initial points, and iterates towards the Pareto frontier based on the MGA technique.

FIG. 3 shows Pareto optimal points found by the MGE algorithm for the benchmark (1). MGE sequentially started optimization from randomly distributed initial points, and covered the entire Pareto frontier evenly.

FIG. 3 shows that the MGE algorithm approximates the entire Pareto frontier, and covers it evenly. MGE is computationally efficient: it spent 2420 model evaluations, and found 1156 Pareto optimal points, i.e. 2420/1156 = 2.1 model evaluations per Pareto optimal point.

In addition to the unconstrained multi-objective optimization technique explained in this paper, and illustrated by the two previous benchmark problems, the MGE algorithm has means for constrained multi-objective optimization. The following simple benchmark (2) formulates a well-known two-bar truss constrained optimization problem,
and illustrates the constrained optimization aspect of the MGE algorithm:

Minimize Deflection = (P · d) / (2A · E · sin(t) · cos(t)^2)
Minimize Weight = (2 · d · A · g) / sin(t)
where
Stress = P / [2 · A · cos(t)] < 40      (2)
t = degree · asin(1) / 90
d = 1000; E = 2.1·10^4; g = 6·10^−6
A ∈ [20; 50]; degree ∈ [45; 65]

FIG. 4 shows the constrained optimization results found by the MGE optimization algorithm for the benchmark (2). FIG. 4 shows all points evaluated by the MGE optimization algorithm. The diagrams illustrate both the objective space (left) and the design space (right). There are three categories of points on the diagrams: Pareto optimal, feasible, and transitional. MGE sequentially started optimization from randomly distributed initial points, and covered the entire Pareto frontier evenly. MGE spent 400 model evaluations; it found 100 Pareto optimal points and 278 feasible points.

MGE uses a technique similar to the Modified Method of Feasible Directions (MMFD) [5] for constrained optimization. Since MMFD was designed for constrained single-objective optimization, it could not be used as is in the MGE algorithm, and it has been adjusted to the needs of multi-objective optimization. The current implementation of the MGE algorithm uses the previously mentioned MMFD-like constrained optimization approach for tasks with a relatively small number of constraints, and automatically shifts to the Hybrid Multi-Gradient Explorer (HMGE) optimization algorithm for tasks with a larger number of constraints. The MGE algorithm employs the hybrid HMGE code only in the infeasible area, and shifts back to the pure gradient-based MGA technique as soon as a feasible point has been found. The HMGE algorithm has proved highly efficient and reliable on the most challenging real-life constrained optimization tasks. It finds feasible areas faster and more reliably than pure gradient-based techniques.
Thus, the combination of MGE and HMGE is a powerful design optimization tool for real-life tasks with up to thousands of design variables, and up to hundreds of constraints.

It is recommended to use the MGE algorithm for multi-objective optimization of computationally expensive simulation models when covering the entire Pareto frontier is prohibitively expensive. MGE allows improving a given design with respect to several objectives (see this scenario on FIG. 2), and usually delivers several Pareto optimal points after 10-30 model evaluations.
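For reference, the two-bar truss model (2) can be written directly in code. Note that the load P does not appear with a value in the formulation as extracted here, so it is left as a parameter; the default below is an arbitrary placeholder, not the benchmark's actual load:

```python
import math

def two_bar_truss(A, degree, P=1000.0):
    """Objectives and constraint of the two-bar truss benchmark (2).

    d, E and g are taken from the benchmark; the load P is left as a
    parameter because its value is not given in the extracted text."""
    d, E, g = 1000.0, 2.1e4, 6e-6
    t = math.radians(degree)          # t = degree * asin(1) / 90
    deflection = (P * d) / (2 * A * E * math.sin(t) * math.cos(t) ** 2)
    weight = (2 * d * A * g) / math.sin(t)
    stress = P / (2 * A * math.cos(t))
    return deflection, weight, stress

def feasible(A, degree, P=1000.0):
    """The single constraint of benchmark (2): Stress < 40."""
    return two_bar_truss(A, degree, P)[2] < 40
```

Increasing the cross-section A trades weight for stiffness: deflection and stress fall while weight grows, which is exactly the kind of trade-off MGE explores on this benchmark.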
4. Multi-Gradient Pathfinder Algorithm

Multi-Gradient Pathfinder (MGP) is the first multi-objective optimization algorithm which implements the idea of directed optimization on the Pareto frontier based on the user's preferences. Directed optimization on the Pareto frontier means that a search algorithm steps along the Pareto frontier from a given initial Pareto optimal point towards a desired Pareto optimal point. The search algorithm is supposed to stay on the Pareto frontier throughout the optimization process until the desired Pareto optimal point is reached. Then all (or most) of the evaluated points will also be Pareto optimal.

Moving along the Pareto frontier improves some objectives and compromises other ones. This consideration gives a clue as to how directed optimization needs to be organized to become beneficial for users. In fact, it is enough to formulate which objective functions are preferable, and need to be improved first. This formulates a goal for the directed search on the Pareto frontier. In the case of L=2 objective functions, the Pareto frontier is a line in the objective space. Thus, the MGP algorithm has just two directions to choose from: to improve the 1st or the 2nd objective function. In the case of L>2 objective functions, the Pareto frontier is a multi-dimensional surface, and the algorithm has an infinite number of directions to move from a given point along the surface. In any case, the user needs to determine a change in direction based on his preferences.
Based on the above considerations, the task of directed optimization on the Pareto frontier can be formulated in the following way:

Minimize   F(X) = [F1(X), F2(X), …, Fm(X)]^T,  X_PF ∈ X
Minimize+  P(X) = [P1(X), P2(X), …, Pn(X)]^T,  X_PF ∈ X      (3)
subject to: q_j(X) ≤ 0;  j = 1, 2, …, k

where X_PF ∈ X is a subset of the design space X which belongs to the Pareto frontier; m is the number of non-preferable objective functions F(X), and n is the number of preferable objective functions P(X) that determine the direction of the move (directed search) on the Pareto frontier; L = m + n is the total number of objective functions. The Pareto frontier is determined by both sets of objectives, F(X) and P(X). The operator Minimize+ applied to P(X) means that it is required to find the best points on the Pareto frontier with respect to the preferable objectives P(X).

How MGP operates: First of all, the user needs to determine which objective(s) are preferable (more important) for him. In this way, the user indicates his area of interest on the Pareto frontier. MGP starts from a given Pareto optimal point and performs a required number of steps along the Pareto frontier in a direction of simultaneous improvement of the preferable objectives. On each step, MGP solves two tasks (see FIG. 5, green and blue arrows):
• Improves the preferable objectives' values;
• Maintains a short distance from the current point to the Pareto frontier.

It is important to note that there are cases when a given initial point is not Pareto optimal. In this case MGP works exactly as the MGE algorithm: it approaches the Pareto frontier first, and then starts stepping along the Pareto frontier in the direction determined by the preferable objectives.
FIG. 5 illustrates the basic idea of the MGP algorithm for the case when both objective functions F1 and F2 need to be minimized and F2 is considered a preferable objective. On the first half-step, MGP steps in a direction of improvement of the preferable objective (see the green arrows on FIG. 5). On the second half-step, MGP steps in a direction of simultaneous improvement of ALL objectives (see the blue arrows), and in this way maintains a short distance to the Pareto frontier. Then MGP starts the next step from the newly found Pareto optimal point.

The main features of the MGP algorithm are explained in the following pseudo-code:

1 Begin
2 Input initial Pareto optimal point X* and the required number of steps N.
3 i=1.
4 Declare current point: Xc = X*.
5 Evaluate gradients of all objective functions on Xc.
6 Determine ASI(1) for the preferable objectives.
7 Make a step in ASI(1) improving only the preferable objectives.
8 Determine ASI(2) for ALL objectives.
9 Make a step in ASI(2) improving ALL objectives; the resulting Pareto point is X**.
10 If i < N then declare current point Xc = X**; i=i+1; go to 5.
11 Report all the solutions found.
12 End

The abbreviations ASI(1) and ASI(2) in the above pseudo-code stand for the Area of Simultaneous Improvement (ASI) of the preferable objectives and of all objectives correspondingly (see FIG. 1A-1C).

The multi-objective task formulation (4) determines three objectives to be minimized. According to the optimization task formulation (3), two of them (f2 and f3) are preferable:

Minimize   f1 = x1^2 + (x2 − 1)^2
Minimize+  f2 = x1^2 + (x2 + 1)^2 + 1      (4)
Minimize+  f3 = (x1 − 1)^2 + x2^2 + 2
−2 ≤ x1, x2 ≤ 2

The task formulation (4) corresponds to the blue markers on FIG. 6.
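A single MGP step (lines 5-9 of the pseudo-code) can be illustrated numerically on the objectives of benchmark (4). The sketch below is an assumed reconstruction with finite-difference gradients and a simple normalized-gradient-sum direction, not the eArtius code:

```python
import numpy as np

# Objectives of benchmark (4); f2 and f3 are the preferable ones.
f1 = lambda x: x[0]**2 + (x[1] - 1)**2
f2 = lambda x: x[0]**2 + (x[1] + 1)**2 + 1
f3 = lambda x: (x[0] - 1)**2 + x[1]**2 + 2

def grad(f, x, h=1e-6):
    """Central-difference gradient estimate."""
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def common_descent(grads):
    """Direction improving all objectives at once, or None if none found."""
    d = -sum(g / np.linalg.norm(g) for g in grads)
    n = np.linalg.norm(d)
    if n < 1e-12:
        return None
    d = d / n
    return d if all(np.dot(g, d) < 0 for g in grads) else None

def mgp_step(x, preferable, objectives, s=0.05):
    """One MGP step: half-step 1 improves only the preferable
    objectives (ASI(1)); half-step 2 tries to improve ALL objectives
    (ASI(2)) to stay close to the Pareto frontier."""
    d1 = common_descent([grad(f, x) for f in preferable])
    if d1 is not None:
        x = x + s * d1
    d2 = common_descent([grad(f, x) for f in objectives])
    if d2 is not None:          # near the frontier there may be no ASI(2)
        x = x + s * d2
    return x

x0 = np.array([0.0, 0.0])
x1 = mgp_step(x0, [f2, f3], [f1, f2, f3])
```

Starting from (0, 0), the step improves both preferable objectives f2 and f3 while f1 worsens, which is the expected behaviour when moving along the Pareto frontier towards the user's area of interest.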
FIG. 6 shows Pareto optimal points found by the MGP algorithm for the benchmark task (4). MGP started optimization from the same circled point twice: (a) with one preferable objective f3 (see the green points); (b) with two preferable objectives f2 and f3 (see the blue points). Transitional points (red and magenta) were evaluated to build local response surface models, and to estimate gradients.

All evaluated points (optimal and non-optimal) are visualized on FIG. 6, and we can make a few observations confirming that MGP performs directed optimization on the Pareto frontier:
(a) The MGP algorithm performs the search solely on the Pareto frontier, and only in the area of interest; only a few of the evaluated points are non-Pareto optimal.
(b) The direction of movement along the Pareto frontier depends on the selection of preferable objectives, as expected. The green trajectory clearly indicates improvement of f3, and the blue trajectory indicates simultaneous improvement of f2 and f3.
(c) MGP is extremely efficient. The majority of the evaluated points are Pareto optimal: 191 out of 238 for f3 as the preferable objective, and 281 out of 316 for the two preferable objectives f2 and f3.

The benchmark (5) and FIG. 7 illustrate that in the case of two objective functions, MGP is able to start from one end of the Pareto frontier, and cover it completely to the other end. The benchmark problem (5) has been chosen because it has a simple classical Pareto front, and allows one to visualize MGP behavior in both the objective space and the design space.

Minimize+  f1 = x1^2 + x2
Minimize   f2 = x2^2 + x1      (5)
x1, x2 ∈ [−10; 10]

The operator Minimize+ in the task formulation (5) means that the objective f1 is preferable, and MGP needs to step along the Pareto frontier in the direction which improves the objective f1. The following FIG. 7 illustrates the solution of the directed multi-objective optimization task (5) found by the MGP algorithm.

FIG. 7 Pareto optimal and transitional points found by the MGP algorithm for the benchmark (5).
MGP starts from the initial point, and sequentially steps along the Pareto frontier until the end of the Pareto frontier is reached. MGP has found 225 Pareto optimal points out of 273 model evaluations.
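Benchmark (5) is small enough that Pareto optimality can be cross-checked by brute force, which is how claims like "225 of 273 points are Pareto optimal" can be verified. A sketch of such a check (illustrative only, over a coarse grid of the design space):

```python
import numpy as np

# Benchmark (5): both objectives are minimized, f1 is preferable.
def evaluate(x1, x2):
    return (x1**2 + x2, x2**2 + x1)

def dominates(a, b):
    """True if objective vector a dominates b (minimization)."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

# Brute-force Pareto filter over a coarse grid of [-10, 10] x [-10, 10].
grid = np.linspace(-10, 10, 21)
points = [evaluate(a, b) for a in grid for b in grid]
pareto = [p for p in points if not any(dominates(q, p) for q in points)]
```

The filter keeps, for example, the extreme point (x1, x2) = (0, −10) with objectives (−10, 100), while interior points such as (0, 0) are dominated by nearby designs like (−0.5, −0.5).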
The diagrams on FIG. 7 illustrate all the points evaluated by the MGP algorithm. All yellow markers are obscured by green markers on the diagrams. This means that transitional points are located very close to Pareto optimal points, and the majority of the points evaluated by the MGP algorithm are Pareto optimal (225 of 273). The MGP algorithm does not have to iterate towards the Pareto frontier repeatedly; instead, it literally steps along the Pareto frontier. In fact, MGP spent some model evaluations to estimate gradients by the finite difference method, and was able to stay on the Pareto frontier on each step throughout the optimization process. Straight parts of the Pareto frontier did not require evaluating transitional points at all: every new point evaluated while MGP was stepping along straight fragments of the Pareto frontier was a Pareto optimal point. This can be recognized by the absence of large yellow markers behind smaller green markers on a few parts of the Pareto front. However, stepping along the convex part of the Pareto frontier required more transitional points to be evaluated in order to maintain a short distance to the Pareto frontier (see FIG. 7).

The benchmark problem (6) and FIG. 8 illustrate the ability of the MGP algorithm to step along the Pareto frontier with a step size determined by the user, and the ability to find disjoint parts of the Pareto frontier.

Minimize   F1 = 1 + (A1 + B1)^2 + (A2 + B2)^2
Minimize+  F2 = 1 + (x1 + 3)^2 + (x2 + 1)^2
A1 = 0.5·sin(1) − 2·cos(1) + sin(2) − 1.5·cos(2)
A2 = 1.5·sin(1) − cos(1) + 2·sin(2) − 0.5·cos(2)      (6)
B1 = 0.5·sin(x1) − 2·cos(x1) + sin(x2) − 1.5·cos(x2)
B2 = 1.5·sin(x1) − cos(x1) + 2·sin(x2) − 0.5·cos(x2)
x1, x2 ∈ [−π, π]

FIG. 8 shows all evaluated points (Pareto optimal and transitional) found by the MGP algorithm for the benchmark (6) with different values of the step size S, which determines the distance between points on the Pareto frontier.
MGP starts from the initial point, and steps along the Pareto frontier in the direction improving the preferable objective F2. The results on FIG. 8A correspond to S=0.005, and the results on FIG. 8B were found with S=0.015.
The diagrams on FIG. 8A show 118 Pareto optimal points found at the price of 684 model evaluations, which corresponds to the step size S=0.005. The diagrams on FIG. 8B show that with S=0.015, MGP covers the Pareto frontier by 55 Pareto optimal points, and spends just 351 model evaluations. In both cases the Pareto frontier is covered evenly and completely. The run with the smaller step size is almost two times more computationally expensive, but finds twice as many Pareto optimal points; in other words, it is twice as accurate. Thus, the user always has a choice: to save model evaluations by increasing the step size, or to increase the accuracy of the solution by decreasing the step size.

The MGP algorithm demonstrated a relatively low efficiency for the benchmark (6) compared with the benchmark (5) because it spent a significant number of model evaluations in transitions from one disjoint part of the Pareto frontier to another (see the yellow markers on FIG. 8).

In this study most of the benchmark problems are used to illustrate the unusual capabilities of the MGE and MGP algorithms. Comparing optimization algorithms is not a key point of this paper. However, a few benchmarks will be used to compare the MGP algorithm with three state-of-the-art multi-objective optimization algorithms developed by a leading company of the Process Integration and Design Optimization (PIDO) market: Pointer, NSGA-II, and AMGA. These commercial algorithms represent the highest level of optimization technology developed by the best companies and currently available on the PIDO market. For the algorithms AMGA, NSGA-II, Pointer, and MGP only the default parameter values have been used, to make sure that all algorithms are in equal conditions.
The following benchmark ZDT3 (7) has two objective functions, and 30 design variables:

Minimize   F1 = x1
Minimize+  F2 = g · [1 − sqrt(F1/g) − (F1/g)·sin(10π·F1)]      (7)
g = 1 + (9/(n − 1)) · Σ(i=2..n) xi;  0 ≤ xi ≤ 1, i = 1,…,n;  n = 30

The benchmark (7) has dozens of local Pareto fronts, and this is a challenge for most multi-objective optimization algorithms. The following FIG. 9 shows that an optimization search in the entire design space is not productive compared to the directed optimization on the Pareto frontier performed by the MGP algorithm.

FIG. 9 Optimization results comparison graph for the algorithms MGP (eArtius), NSGA-II, AMGA, and Pointer. All optimization algorithms performed an equal number (523) of function evaluations. The graph displays the criteria space and two projections of the design space with all evaluated points for each optimization algorithm. The MGP algorithm used DDRSM to estimate gradients.
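The ZDT3 formulation (7) translates directly into code; a minimal sketch:

```python
import math

def zdt3(x):
    """ZDT3 benchmark (7): n = len(x) design variables in [0, 1]."""
    n = len(x)
    f1 = x[0]
    g = 1 + 9.0 / (n - 1) * sum(x[1:])
    h = 1 - math.sqrt(f1 / g) - (f1 / g) * math.sin(10 * math.pi * f1)
    return f1, g * h
```

On the global Pareto frontier g = 1, i.e. x2 = … = xn = 0; any point with g > 1 lies on one of the local fronts that trap the population-based algorithms in the comparison below.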
As can be seen on FIG. 9, the MGP algorithm (green and red markers) performs the search in the area of the global Pareto frontier, and it covered the Pareto frontier evenly and completely. The other algorithms perform searches in the entire design space, and have difficulties finding the global Pareto frontier. Only Pointer was able to find a few Pareto optimal points in the central part of the global Pareto frontier. AMGA and NSGA-II did not find a single Pareto optimal point after 523 model evaluations, and performed the majority of evaluations very far from the global Pareto frontier.

5. Comparison with the Weighted Sum Method

The most common approach to gradient-based multi-objective optimization is the weighted sum method [1], which employs the utility function (8):

U = Σ(i=1..k) wi·Fi(X)      (8)

where w is a vector of weights typically set by the user such that Σ(i=1..k) wi = 1 and w > 0. If all of the weights are positive, the minimum of (8) is Pareto optimal [6]. In other words, minimizing the utility function (8) is sufficient for Pareto optimality. However, the formulation does not provide a necessary condition for Pareto optimality [7]. The biggest problem with the weighted sum approach is that it is impossible to obtain points on non-convex portions of the Pareto optimal set in the criterion space. The theoretical reasons for this deficiency have been described in [8, 9, 10]. Also, varying the weights consistently and continuously may not necessarily result in an even distribution of Pareto optimal points and a complete representation of the Pareto optimal set [8].

Let us consider a sample illustrating the above deficiencies. The following benchmark model (9) has a non-convex Pareto frontier:

Minimize   f1 = x1
Minimize+  f2 = 1 + x2^2 − x1 − 0.1·sin(3π·x1)      (9)
x1 ∈ [0; 1]; x2 ∈ [−2; 2]

The Sequential Quadratic Programming (SQP) algorithm is one of the most popular gradient-based single-objective optimization algorithms.
An implementation of SQP has been used for minimizing the utility function (10) for finding Pareto optimal points:

Minimize U = w1·f1 + w2·f2      (10)
w1, w2 ∈ [0; 1]; w1 + w2 = 1

The SQP algorithm has performed a single-objective optimization of the utility function (10) 107 times, and performed 1667 model evaluations in total. Every optimization run was performed with an incremented value of w1 ∈ [0; 1], and w2 = 1 − w1. Since the w1 values covered the interval [0; 1] evenly and completely, it was expected that the diversity of the found Pareto optimal points would be sufficiently high. However, the 107 Pareto optimal points covered the relatively small left and right convex parts of the Pareto frontier, and just one of the Pareto optimal points is located on the middle part of the Pareto frontier (see the blue markers on FIG. 10A and 10B).
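The gap in the middle of the frontier can be reproduced without SQP. Since x2 = 0 minimizes the x2^2 term of (9) for any weights, a one-dimensional grid search over x1 for each weight pair already shows that the minimizers of the utility (10) cluster on the two convex ends. A hedged numerical illustration (grid search stands in for the SQP runs of the paper):

```python
import math

def f1(x1):
    return x1

def f2(x1, x2=0.0):            # x2 = 0 is optimal for any weights
    return 1 + x2**2 - x1 - 0.1 * math.sin(3 * math.pi * x1)

def weighted_sum_minimizer(w1, steps=2001):
    """Grid-search minimizer of U = w1*f1 + (1-w1)*f2 over x1 in [0, 1]."""
    best_x, best_u = 0.0, float("inf")
    for i in range(steps):
        x1 = i / (steps - 1)
        u = w1 * f1(x1) + (1 - w1) * f2(x1)
        if u < best_u:
            best_x, best_u = x1, u
    return best_x

# Sweep the weights evenly, as in the SQP experiment above.
minimizers = [weighted_sum_minimizer(w / 20) for w in range(1, 20)]
```

Every minimizer lands on the left (x1 < 0.3) or right (x1 > 0.7) convex part of the frontier; no choice of weights reaches the non-convex middle, matching the SQP result described above.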
FIG. 10A compares Pareto optimal points found by the MGP and SQP optimization algorithms for the benchmark (9). MGP has found 153 Pareto optimal points out of 271 model evaluations. SQP has found 107 Pareto optimal points out of 1667 model evaluations. FIG. 10B compares Pareto optimal points found by the MGE and SQP optimization algorithms for the benchmark (9). The MGE algorithm has found 173 Pareto optimal points out of 700 model evaluations.

As can be seen from FIG. 10A-10B, a non-convex Pareto frontier is a significant issue for the SQP algorithm, while it does not create any difficulties for the MGP and MGE algorithms. Both MGP and MGE have covered the entire Pareto frontier evenly and completely. The weighted sum method substitutes the multi-objective optimization task (9) by a single-objective optimization task (10). However, optimization task (10) is not equivalent to the task (9), and has a different set of optimal solutions, visualized on FIG. 10: the blue points on the diagrams represent a solution of the task (10), and the magenta points represent a solution of the task (9).

The Multi-Gradient Analysis (MGA) technique employed by both the MGE and MGP algorithms resolves the issues created by using different kinds of scalarization techniques. MGA allows solving multi-objective optimization tasks as they are, without substituting them by a utility function such as the function U = w1·f1 + w2·f2 used in this sample. At the same time, MGA retains the benefits of gradient-based techniques such as fast convergence and high accuracy.

Also, MGA is much simpler compared to scalarization techniques. MGA determines a direction of simultaneous improvement for all objective functions, and steps in this direction from any given point. In contrast to the weighted sum method, MGA does not require developing additional logic on top of the SQP algorithm for varying the weights in the utility function throughout an optimization process.
It just takes any given point, and makes a step improving the point with respect to all objectives. Thus, MGA can be used as an element for developing any kind of multi-objective optimization algorithm. In particular, MGA has been used for designing the two pure gradient-based optimization algorithms MGE and MGP discussed in this paper, and the two hybrid optimization algorithms HMGE and HMGP based on GA and gradient techniques.

6. Dynamically Dimensioned Response Surface Method

Dynamically Dimensioned Response Surface Method (DDRSM) is a new method to estimate gradients, which is equally efficient for low-dimensional and high-dimensional tasks. DDRSM (patent pending) requires just 5-7 model evaluations to estimate gradients regardless of task dimension.

eArtius DDRSM vs. Traditional RSM

Table 2 shows the most important aspects of Response Surface Methods (RSM), and compares traditional RSM with eArtius DDRSM.
Table 2 Comparison of traditional response surface methods with the DDRSM approach

RSM Aspect | Traditional RSM | eArtius DDRSM
Purpose | Optimize fast surrogate functions instead of computationally expensive simulation models | Quick gradient estimation for direct optimization of computationally expensive simulation models
Approximation type | Global approximation | Local approximation
Domain | Entire design space | A small sub-region
Use of surrogate functions | Optimization in the entire design space | Gradient estimation at a single point
Accuracy requirements | High | Low
Number of sample points to build approximations | Grows exponentially with the task dimension | 5-7 sample points regardless of task dimension
Time required to build an approximation | Minutes and hours | Milliseconds
Task dimension limitations | 30-50 design variables | Up to 5,000 design variables
Sensitivity analysis | Required to reduce task dimension | Not required

As follows from Table 2, the most common use of response surface methods is creating global approximations based on DOE sample points, and further optimization of such surrogate models. This approach requires maintaining high accuracy of the approximating surrogate function over the entire design space, which in turn requires a large number of sample points. In contrast, the DDRSM method builds local approximations in a small sub-region around a given point, and uses them for gradient estimation at that point. This reduces the accuracy requirements for the approximating models because DDRSM does not have to maintain high accuracy over the entire design space.

There is a common fundamental problem for all response surface methods, named the "curse of dimensionality" [2].
The curse of dimensionality is the problem caused by the exponential increase in volume associated with adding extra dimensions to a design space [2], which in turn requires an exponential increase in the number of sample points to maintain the same level of accuracy for response surface models. For instance, suppose we use 2^5 = 32 sample points to build an RSM model for 5 design variables, and then decide to increase the number of design variables from 5 to 20. Now we need 2^20 = 1,048,576 sample points to maintain the same level of accuracy for the RSM model. In real life we use just 100-300 sample points to build such RSM models, and this causes quality degradation of the optimization results found by optimizing such RSM models. This is a strong limitation for all known response surface approaches. It forces engineers to artificially reduce the optimization task dimension by assigning constant values to most of the design variables.

DDRSM has successfully resolved the curse of dimensionality issue in the following way. DDRSM is based on the realistic assumption that most real-life design problems have a few significant design variables, and the rest of the design variables are not significant. Based on this assumption, DDRSM estimates the most significant projections of gradients for all output variables on each optimization step. In order to achieve this, DDRSM generates 5-7 sample points in the current sub-region, and uses the points to recognize the most significant design variables for each objective function. Then DDRSM builds local approximations for all output variables, which are utilized to estimate the gradients. Since an approximation does not include non-significant variables, the estimated gradient only has projections that correspond to significant variables. All other projections of the gradient are equal to zero.
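The scheme just described can be sketched as follows. This is an assumed reconstruction of a DDRSM-style estimate (the eArtius method itself is patent pending and its details are not given here): sample a few points in a small sub-region, fit a local linear model, and zero out the gradient components of variables whose contribution is small:

```python
import numpy as np

def ddrsm_gradient(f, x, radius=0.01, n_samples=7, keep=0.95, seed=0):
    """DDRSM-style sketch: estimate a sparse gradient of f at x.

    Samples n_samples points in a small sub-region around x, fits a
    local linear model by least squares, and keeps only the components
    of the most significant variables; all other gradient projections
    are set to zero, as in the dynamic dimension reduction above."""
    rng = np.random.default_rng(seed)
    X = x + radius * rng.uniform(-1.0, 1.0, size=(n_samples, len(x)))
    y = np.array([f(p) for p in X])
    # least-squares fit of y ~ c0 + (X - x) @ g
    A = np.hstack([np.ones((n_samples, 1)), X - x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    g = coef[1:]
    # keep the smallest set of variables covering `keep` of the
    # total absolute sensitivity; the rest are declared non-significant
    order = np.argsort(-np.abs(g))
    total, acc, significant = np.abs(g).sum(), 0.0, []
    for i in order:
        significant.append(i)
        acc += abs(g[i])
        if acc >= keep * total:
            break
    sparse = np.zeros_like(g)
    sparse[significant] = g[significant]
    return sparse
```

For a function dominated by one variable, only that component of the estimated gradient is non-zero: the other variables are dropped from the local model, which is the dynamic dimension reduction described above.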
Ignoring non-significant variables slightly reduces the gradient's accuracy, but allows gradients to be estimated at the price of 5-7 evaluations for tasks of practically any dimension. DDRSM recognizes the most significant design variables for each output variable (objective functions and constraints) separately; thus, each output variable has its own list of significant variables to be included in its approximating function. DDRSM also re-recognizes the significant variables on each optimization step, every time gradients need to be estimated. This is important because the topology of objective functions and constraints can differ in different parts of the design space, and specific topology details can be associated with specific design variables. As follows from the above, DDRSM dynamically reduces the task dimension in each sub-region, and does so independently for each output variable by ignoring its non-significant design variables. The same variable can be critically important for one of the objective functions in the current sub-region, and not significant for the other objective functions and constraints. Later in the optimization process, in a different sub-region, the topology of an output variable can change, and DDRSM will create another list of significant design variables corresponding to the variable's topology in the current sub-region of the search space. Thus, the dynamic use of DDRSM on each optimization step makes it adaptive to changes in function topology, and increases the accuracy of gradient estimation.

American Institute of Aeronautics and Astronautics

DDRSM combines elements of RSM and sensitivity analysis, so it makes sense to compare DDRSM with the traditional sensitivity analysis approach.

DDRSM vs. Traditional Sensitivity Analysis

The most popular sensitivity analysis tools are designed to be used before starting an optimization process. Engineers are thus forced to determine a single static list of significant variables for all objective and constraint functions, based on their variation over the entire design space. After the sensitivity analysis is completed, all non-significant design variables are assigned constant values and never change over the optimization process. This approach gives satisfactory results for tasks with a small number of output variables, but has difficulties when the number of constraint and objective functions is large. Generally speaking, each output variable has its own topology, its own level of non-linearity, and its own list of significant variables. The same design variable can be significant for some of the output variables and non-significant for others. Also, the list of significant variables depends on the location of the current sub-region. Thus, it is difficult or even impossible to determine a list of design variables that are equally significant for dozens or hundreds of output variables. In addition, traditional sensitivity analysis requires too many sample points when the number of design variables is large, which reduces the usefulness of the approach for high dimensional tasks.

DDRSM eliminates the issues described above because it performs sensitivity analysis for each output variable independently, every time gradients need to be estimated. Thus, DDRSM takes into account the specific character of each output variable in general, and its local topology in particular.
Also, DDRSM is equally efficient with dozens and thousands of design variables. Implementation of DDRSM The following MGA-DDRSM pseudo code shows basic elements of the MGA optimization step when DDRSM approach is used to estimate gradients: 1 Begin 2 Input initial point X*. 3 Create a sub-region with center at X*. 4 Generate and evaluate 5-7 sample points in the sub-region. 5 Determine the most significant design variables for each objective function. 6 Create an approximation for each objective function based only on the most significant design variables. 7 Use approximations for evaluation of criteria gradients on X*; 8 Determine ASI for all criteria. 9 Determine the direction of next step. 10 Determine the length of the step. 11 Perform the step, and evaluate new point X’ belonging to ASI. 12 If X’ dominates X* then report X’ as an improved point, and go to 14. 13 If X’ does not dominate X* then declare X* as Pareto optimal point. 14 End The following benchmark problem (11) is intended to demonstrate (a) high efficiency of the DDRSM approach to estimate gradients compared with the finite difference method, and (b) the ability of DDRSM to recognize significant design variables. The benchmark ZDT1 (11) has 30 design variables, two objectives, and the Pareto frontier is convex. The global Pareto-optimal front corresponds to x1 ∈ [0;1], xi = 0, i = 2,...,10 . The optimization task formulation used is as follows: Minimize F1 = x1 ⎡ F ⎤ Minimize + F2 = g ⎢1 − 1 ⎥ (11) ⎣ g⎦ 9 n g = 1+ ∑ xi , 0 ≤ xi ≤ 1, i = 1,..., n; n − 1 i =2 n = 30 14 American Institute of Aeronautics and Astronautics
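For reference, benchmark (11) — the standard ZDT1 test problem — can be written directly in code. This is a straightforward sketch of the two objectives as formulated above:

```python
import math

def zdt1(x):
    """Benchmark (11): the ZDT1 test problem, n = 30, 0 <= x_i <= 1.

    Returns the objective pair (F1, F2).  The global Pareto frontier is
    convex and corresponds to x1 in [0, 1] with all remaining x_i = 0.
    """
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 / (n - 1) * sum(x[1:])
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

On the Pareto front g = 1 and F2 = 1 − √F1, so, for example, `zdt1([0.25] + [0.0] * 29)` returns (0.25, 0.5).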
FIG. 11 shows the Pareto optimal points found by the MGP algorithm for benchmark (11), with gradients estimated by the finite difference method. MGP started from an initial Pareto optimal point (see FIG. 11) and performed 17 steps along the Pareto frontier until it reached the end of the frontier, finding 18 ideally accurate global Pareto optimal points out of 522 model evaluations. With the finite difference method, MGP had to spend 31 model evaluations to estimate gradients on each optimization step. The distance between the green and red markers along the x10 axis in FIG. 11 (right diagram) indicates the spacing parameter value (0.0001) of the finite difference formula.

FIG. 12 shows the Pareto optimal points found by MGP for the same benchmark (11), this time with the DDRSM method used to estimate gradients. This optimization run used the same algorithm parameters; MGP again started from the initial Pareto optimal point and performed 17 steps along the Pareto frontier. This time MGP spent just 38 model evaluations, and found the same 18 Pareto optimal solutions with the same accuracy. As can be seen in FIG. 12, DDRSM generated a number of points randomly (see the red markers on the left and right diagrams); these points were used to build the local approximations for estimating gradients. Clearly, both methods of gradient estimation allowed MGP to precisely determine the direction of improvement of the preferred objective F1 on each step, as well as the direction of simultaneous improvement for both objectives.
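The 31-evaluations-per-step figure follows directly from one-sided finite differencing: a forward-difference gradient of one output over n = 30 variables needs the base point plus one perturbed point per variable. A minimal illustrative sketch (not the paper's implementation; the evaluation counter just makes the cost visible):

```python
def fd_gradient(f, x, h=1e-4):
    """Forward-difference gradient estimate.

    Costs exactly n + 1 evaluations of f for n design variables: one at
    the base point x plus one per perturbed coordinate.  h plays the role
    of the spacing parameter (0.0001 in the run shown in FIG. 11).
    """
    evals = 0

    def fc(v):
        nonlocal evals
        evals += 1
        return f(v)

    f0 = fc(x)
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grad.append((fc(xp) - f0) / h)
    return grad, evals
```

For 30 variables this returns the gradient together with an evaluation count of 31, which multiplied over 17 optimization steps accounts for most of the 522 evaluations reported above.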
As a result, the MGP algorithm was able to find, and step along, the global Pareto frontier on each optimization step. All Pareto optimal points satisfy the conditions x1 ∈ [0;1], xi = 0, i = 2,...,30, which means that the optimal solutions are exact in both cases. However, DDRSM spent 13.7 times fewer model evaluations (522/38 ≈ 13.7) to find the same 18 Pareto optimal points.

FIG. 13 allows one to see the conceptual advantage of the directed optimization on the Pareto frontier performed by the MGP algorithm, compared with the traditional multi-objective optimization approach of the NSGA-II, AMGA, and Pointer algorithms. FIG. 13 shows the Pareto optimal points found by all four algorithms for benchmark (11): MGP, NSGA-II, AMGA, and Pointer. MGP spent 38 model evaluations and found 18 Pareto optimal points. NSGA-II found 63 first-rank points out of 3,500 model evaluations; AMGA and Pointer found 19 and 195 first-rank points, respectively, out of 5,000 model evaluations. As follows from FIG. 13, NSGA-II and Pointer were able to approach the global Pareto frontier after 3,500 and 5,000 model evaluations respectively, but were not able to find precisely accurate Pareto optimal points, or to cover the entire Pareto frontier. The AMGA algorithm was not able even to approach the global Pareto frontier after 5,000 model evaluations.

The following benchmark problem (12) is a challenging task because it has dozens of local Pareto frontiers and five disjoint segments of the global Pareto frontier. The results of the MGP algorithm for this benchmark will be compared with the results of state-of-the-art commercial multi-objective optimization algorithms distributed by a leading design optimization company: Pointer, NSGA-II, and AMGA. Since benchmark (12) has just 10 design variables and 2 objectives, the entire design space and objective space can be visualized on just 6 scatter plots.
Thus, we can see the optimization search pattern for each algorithm, and compare directed optimization on Pareto frontier with the traditional optimization approach. Minimize F1 = x1; Minimize + F2 = g ⋅ h (12) where g = 1 + 10(n − 1) + ( x2 + x3 + ... + xn ) − 10 ⋅ [cos(4 ⋅ π ⋅ x2 ) + cos(4 ⋅ π ⋅ x3 ) + ... + cos(4 ⋅ π ⋅ xn )], n = 10; 2 2 2 h = 1 − F1 / g − ( F1 / g ) ⋅ sin(10 ⋅ π ⋅ F1 ); X ∈ [0;1] The following FIG.14 illustrates optimization results for the benchmark problem (12). 16 American Institute of Aeronautics and Astronautics
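Benchmark (12) can likewise be evaluated directly in code. This sketch follows the conventional ZDT-style reading of the printed formulas (the square root in h, lost in typesetting, is assumed here, consistent with the five disjoint convex segments of the front):

```python
import math

def benchmark12(x):
    """Benchmark (12): n = 10 design variables, x_i in [0, 1].

    Returns (F1, F2) with F2 = g * h.  The multimodal g term creates many
    local Pareto frontiers; the sin term in h splits the global frontier
    into five disjoint segments lying on x1 in [0, 1], x2 = ... = x10 = 0.
    """
    n = len(x)
    f1 = x[0]
    g = 1.0 + 10.0 * (n - 1) + sum(
        xi * xi - 10.0 * math.cos(4.0 * math.pi * xi) for xi in x[1:])
    h = 1.0 - math.sqrt(f1 / g) - (f1 / g) * math.sin(10.0 * math.pi * f1)
    return f1, g * h
```

At the origin the rastrigin-like terms cancel (g = 1 + 90 − 90 = 1), so `benchmark12([0.0] * 10)` returns (0.0, 1.0), a point on the global frontier.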
FIG. 14 shows all points evaluated by the MGP algorithm and by three other multi-objective optimization algorithms: Pointer, NSGA-II, and AMGA. MGP used DDRSM for gradient estimation; it spent 185 evaluations and covered all five segments of the global Pareto frontier. Each alternative algorithm spent 2,000 model evaluations with much worse results: NSGA-II was able to approach 3 of the 5 segments of the global Pareto frontier, while AMGA and Pointer did not find a single Pareto optimal solution.

The global Pareto frontier for benchmark (12) belongs to the straight line {x1 = 0…1, x2 = x3 = … = x10 = 0}. It was critical for the MGP algorithm to recognize that x1 is the most significant design variable. DDRSM did so, and x1 was included in each local approximation model used for gradient estimation. As a result, MGP stepped along the x1 axis from 0 to 1 and covered all five segments of the global Pareto frontier (see FIG. 14). DDRSM also helped recognize that all other design variables are equal to zero for Pareto optimal points; one can see the green points at the origin in the charts of FIG. 14. Thus, in contrast with the other algorithms, MGP performed all model evaluations in a small area around the Pareto frontier in the design space (see the green and red markers in FIG. 14), which improved the accuracy and efficiency of the algorithm.

7. eArtius Design Optimization Tool

eArtius has developed the commercial product Pareto Explorer: a multi-objective optimization and design environment combining a process integration platform with sophisticated, superior optimization algorithms and powerful post-processing capabilities.
Pareto Explorer 2010 implements the optimization algorithms described above, and provides a complete set of functionality necessary for a design optimization tool:
• An intuitive and easy to use Graphical User Interface; an advanced IDE paradigm similar to Microsoft Developer Studio 2010 (see FIG. 22);
• Interactive 2D/3D graphics based on OpenGL technology;
• Graphical visualization of the optimization process in real time;
• Process integration functionality;
• Statistical analysis tools embedded in the system;
• Design of Experiments techniques;
• Response Surface Modeling;
• Pre- and post-processing of design information;
• Data import and export.

All the diagrams included in this paper were generated by Pareto Explorer 2010. They give an idea of the quality of data visualization, the ability to compare different datasets, and the flexible control over the diagrams' appearance.
FIG. 22 shows a screenshot of Pareto Explorer's main window. In addition to the design optimization environment implemented in Pareto Explorer, eArtius provides all the described algorithms as plug-ins for the Noesis OPTIMUS, ESTECO modeFrontier, and Simulia Isight design optimization environments. Additional information about eArtius products and design optimization technology can be found at www.eartius.com.

8. Conclusion

The novel gradient-based algorithms MGE and MGP for multi-objective optimization have been developed at eArtius. Both algorithms utilize the ability of MGA to find a direction of simultaneous improvement for all objective functions, and provide superior efficiency with 2-5 evaluations per Pareto optimal point. Both algorithms allow the user to decrease the volume of the search space by determining an area of interest, reducing in this way the number of necessary model evaluations by orders of magnitude.

The MGE algorithm starts from a given design, and takes just 15-30 model evaluations to find an improved design with respect to all objectives. The MGP algorithm goes further: it uses the Pareto frontier as a search space, and performs directed optimization on the Pareto frontier in the user's area of interest, determined by a selection of preferred objectives. Avoiding a search in the entire design space, and searching only in the area of interest directly on the Pareto frontier, dramatically reduces the required number of model evaluations: MGP needs just 2-5 evaluations per step, and each step brings a few new Pareto optimal points. Both MGE and MGP are thus the best choice for multi-objective optimization of computationally expensive simulation models that take hours or even days of computational time to perform a single evaluation.

The new response surface method DDRSM has also been developed. DDRSM builds local approximations of the output variables on each optimization step, and estimates gradients consuming just 5-7 evaluations.
DDRSM dynamically recognizes the most significant design variables for each objective and constraint, and filters out the non-significant variables. This overcomes the famous "curse of dimensionality" problem: the efficiency of the MGE and MGP algorithms does not depend on the number of design variables, and eArtius optimization algorithms are equally efficient for low dimensional and high dimensional (up to 5,000 design variables) optimization tasks. DDRSM also eliminates the need for traditional response surface and sensitivity analysis methods, which simplifies the design optimization process and saves engineers' time.
References

1. Marler, R. T., and Arora, J. S. (2004), "Survey of Multi-Objective Optimization Methods for Engineering," Structural and Multidisciplinary Optimization, 26(6), 369-395.
2. Bellman, R. E. (1957), Dynamic Programming, Princeton University Press, Princeton, NJ.
3. Simpson, T. W., Booker, A. J., Ghosh, D., Giunta, A. A., Koch, P. N., and Yang, R.-J. (2004), "Approximation Methods in Multidisciplinary Analysis and Optimization: A Panel Discussion," Structural and Multidisciplinary Optimization, 27(5), 302-313.
4. Sevastyanov, V., and Shaposhnikov, O., "Gradient-Based Methods for Multi-Objective Optimization," Patent Application Serial No. 11/116,503, filed April 28, 2005.
5. Vanderplaats, G. N. (2005), Numerical Optimization Techniques for Engineering Design: With Applications, Fourth Edition, Vanderplaats Research & Development, Inc.
6. Zadeh, L. A. (1963), "Optimality and Non-Scalar-Valued Performance Criteria," IEEE Transactions on Automatic Control, AC-8, 59-60.
7. Zionts, S. (1988), "Multiple Criteria Mathematical Programming: An Updated Overview and Several Approaches," in Mitra, G. (ed.), Mathematical Models for Decision Support, 135-167, Springer-Verlag, Berlin.
8. Das, I., and Dennis, J. E. (1997), "A Closer Look at Drawbacks of Minimizing Weighted Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems," Structural Optimization, 14, 63-69.
9. Messac, A., Sukam, C. P., and Melachrinoudis, E. (2000), "Aggregate Objective Functions and Pareto Frontiers: Required Relationships and Practical Implications," Optimization and Engineering, 1, 171-188.
10. Messac, A., Sundararaj, G. J., Tappeta, R. V., and Renaud, J. E. (2000), "Ability of Objective Functions to Generate Points on Nonconvex Pareto Frontiers," AIAA Journal, 38, 1084-1091.