International Journal of Mathematics and Statistics Invention (IJMSI)
E-ISSN: 2321 – 4767 P-ISSN: 2321 - 4759
www.ijmsi.org Volume 2 Issue 5 || May. 2014 || PP-20-29
On Path-Following Dynamical Systems Trajectories for Linear
Optimization Problems in Standard Form
Ukwu Chukwunenye
Department of Mathematics, University of Jos, P.M.B 2084, Jos, Nigeria
ABSTRACT: This article examined path-following methods for solving linear programming problems. It then collected key relevant results in Ukwu [1], formulated analogous continuous dynamical systems, and established a sequence of propositions which were invoked to prove that the system's trajectories converge to the optimal solution of a linear programming problem in standard form, under appropriate conditions. These results were made possible by the exploitation of norm properties, their derivatives and the theory of ordinary differential equations, paving the way for the pursuit of satisficing solutions of some linear optimization problems that may not require exact optimal solutions but particular solutions at specified tolerance levels, within feasible guidelines or constraints.
KEYWORDS: Convergent, Dynamics, Karmarkar, Optimal, Trajectories.
I. INTRODUCTION AND MOTIVATION
The application of linear programming to business, management, engineering and structured decision
processes has been quite remarkable. Since the development of the simplex method in 1947 by G.B. Dantzig,
there has been a flurry of research activity in the design of solution methods for linear programming,
mostly aimed at realizing more effective and efficient algorithmic computer implementations and reductions in
computational complexity. In the Fall of 1984, Karmarkar [2] of AT&T Bell Laboratories proposed a new
polynomial-time algorithm for solving linear optimization problems. The new algorithm not only possesses
better complexity than the Simplex method in the worst-case scenario, but also shows the potential to rival the
Simplex algorithm for large-scale, real-world applications. This development quickly captured the attention of
the Operations Research community. Radically different from the Simplex method, Karmarkar's original algorithm
considers a linear programming problem over a simplex structure and moves through the interior of the polytope
of the feasible domain by transforming the space at each step to place the current solution at the center of the
polytope in the transformed space. Then the solution is moved in the direction of projected steepest descent far
enough to avoid the boundary of the feasible region in order to remain interior. Next, the inverse transformation
of the improved solution is taken to map it back to the original space to obtain a new interior solution. The
process is repeated until an optimum is obtained with a desired level of accuracy.
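For illustration, the following minimal sketch (not the author's implementation; a standard textbook-style rendering with hypothetical 3-variable data) performs the iteration just described: rescale so the current iterate sits at the simplex centre, project the objective onto the feasible directions, take a projected steepest-descent step that stays interior, and map back by the inverse transformation.

```python
import numpy as np

def karmarkar_step(A, c, x, alpha=0.25):
    """One projective-scaling iteration for min c^T x s.t. Ax = 0, e^T x = 1, x >= 0.
    Illustrative sketch only; assumes the optimal objective value is zero."""
    n = len(x)
    D = np.diag(x)                      # scaling that maps x to the simplex centre e/n
    B = np.vstack([A @ D, np.ones(n)])  # constraints of the transformed problem
    c_hat = D @ c
    # Project the scaled objective onto the null space of B (feasible directions).
    cp = c_hat - B.T @ np.linalg.solve(B @ B.T, B @ c_hat)
    # Step towards projected steepest descent, staying inside the simplex.
    y = np.ones(n) / n - alpha * cp / (np.linalg.norm(cp) * np.sqrt(n * (n - 1)))
    x_new = D @ y                       # inverse (projective) transformation
    return x_new / x_new.sum()

if __name__ == "__main__":
    # Tiny example in Karmarkar standard form: A e = 0, optimum at x = (0, 2/3, 1/3), value 0.
    A = np.array([[1.0, 1.0, -2.0]])
    c = np.array([1.0, 0.0, 0.0])
    x = np.ones(3) / 3                  # the centre e/n is feasible because A e = 0
    for _ in range(20):
        x = karmarkar_step(A, c, x)
    print(x, c @ x)                     # c @ x decreases towards the optimal value 0
```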
Karmarkar’s standard form for linear programming can be described as follows:
(LP_1):   min c^T x   s.t.   Ax = 0,   e^T x = 1,   x ≥ 0,    (1)

where A is an m × n matrix of full row rank, e = (1, 1, ..., 1)^T is the column vector of n ones, c is an n-dimensional column vector and T denotes transpose.
The basic assumptions of Karmarkar’s algorithm include:
Ae = 0,    (2)
the optimal objective value of (1) is zero. (3)
Notice that if we define x^0 = e/n, then assumption (2) implies that x^0 is a feasible solution of (1) and each component of x^0 has the positive value 1/n. Any feasible solution x of (1) is called an interior feasible solution if each component of x is positive. This implies that x is not on the boundary of the feasible region. Needless to say,
the constraint e^T x = 1 in (1), together with x ≥ 0, implies that ∑_{j=1}^{n} x_j = 1 and hence 0 ≤ x_j ≤ 1 for each j, leading to the conclusion that a consistent problem in Karmarkar's standard form has a finite optimum. In Fang and Puthenpura [3], it is shown that any linear programming problem in standard form can be expressed in Karmarkar's standard form. Karmarkar's algorithm and its specifics are well exposed in [3]. Karmarkar's algorithm is an interior-point iterative scheme for solving linear programming problems.
Interior-point methods approach the optimal solution of the linear program from the interior of the
feasible region by generating a sequence of parameterized interior solutions. The specifics of these methods
have already been discussed in [1]. The primary focus of this article will be on path-following methods. The
basic idea of path-following is to incorporate a barrier function into the linear objective. By parameterizing the
barrier function, corresponding minimizers form a path that leads to an optimal solution of the linear program.
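As a concrete illustration of this idea, the sketch below uses the classical logarithmic barrier φ(x) = −∑_j ln x_j (one admissible choice, not necessarily the generalized barrier of [4]) and minimizes c^T x + μφ(x) over {Ax = b, x > 0} for a decreasing sequence of μ; the minimizers trace a path that approaches the optimum. The problem data are hypothetical.

```python
import numpy as np

def barrier_minimizer(A, b, c, mu, x0, iters=50):
    """Approximately minimize c^T x - mu * sum(log x) subject to A x = b,
    using a damped Newton method on the KKT system (illustrative sketch only)."""
    x = x0.copy()
    m, n = A.shape
    for _ in range(iters):
        grad = c - mu / x                      # gradient of the barrier objective
        H = np.diag(mu / x**2)                 # Hessian (diagonal for the log barrier)
        # KKT system for the Newton step dx with A dx = 0 (feasibility preserved).
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([-grad, np.zeros(m)])
        dx = np.linalg.solve(K, rhs)[:n]
        t = 1.0
        while np.any(x + t * dx <= 0):         # damp the step to stay strictly positive
            t *= 0.5
        x = x + 0.9 * t * dx
    return x

if __name__ == "__main__":
    # min x1 + 2 x2  s.t.  x1 + x2 + x3 = 1, x >= 0  (optimum at x = (0, 0, 1), value 0).
    A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0, 0.0])
    x = np.array([1/3, 1/3, 1/3])              # interior feasible starting point
    for mu in [1.0, 0.1, 0.01, 0.001]:
        x = barrier_minimizer(A, b, c, mu, x)
        print(f"mu={mu:7.3f}  x={np.round(x, 4)}  c^T x={c @ x:.4f}")
```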
The main motivation for this work comes from the work of Shen and Fang [4], in which the
“generalized barrier functions” for linear programming were defined to create an ideal interior trajectory for
path-following. The key components such as the moving direction and the criterion of closeness required for a
path-following algorithm were introduced for designing a generic path-following algorithm with convergence
and polynomiality proofs under certain conditions.
This work is aimed at extending the convergence results in [4] to a parameterized continuous
dynamical system. This would lead to the construction of appropriate energy and Lyapunov functions which
would be utilized to show that the trajectories of the dynamical system converge to the optimal solution of the
linear program under appropriate assumptions.
We are not aware of any interior-point dynamic solver reported in the literature. Most dynamic solvers
have been used in the neural network approach; such investigations can be found in Bertsekas [5], Cohen
and Grossberg [6], Hopfield and Tank [7], Wang [8, 9, 10, 11], and Zah [12]. In Section 4, we formulate our
supporting propositions and main results. Section 5 presents our conclusions and direction of a follow-up
research.
II. INTERIOR-POINT METHODS
Interior-point methods such as Affine Scaling Methods, Potential Reduction Methods and Path-
Following Methods are well exposed in [1]. Our main result hinges on Path-following methods.
2.1 Path-following Methods
Consider a linear program:
(P):   min c^T x   s.t.   Ax = b,   x ≥ 0.    (4)

Let W := { x ∈ R^n : Ax = b, x ≥ 0 } and W^0 := { x ∈ R^n : Ax = b, x > 0 }, the latter being the interior feasible domain. Let the following assumptions hold:

A has full row rank,    (5)
W^0 ≠ ∅ and W = cl(W^0), where cl(W^0) is the closure of W^0 and ∅ is the empty set,    (6)
W is compact.    (7)
2.2 Definition
A function φ : W → R̄ is called a generalized barrier function for linear programming (GBLP) if
(P1) φ : W → R̄ is proper, strictly convex and differentiable, where R̄ is the extended real line.
Remark
The properness property of φ is equivalent to the requirement that φ(x) be strictly bounded below by −∞ for all x ∈ W and be strictly bounded above by +∞ for some x ∈ W.
(P2) if the sequence {x^k} ⊂ W^0 converges to x̄ whose i-th component x̄_i = 0, then lim_{k→∞} [∇φ(x^k)]_i = −∞;
(P3) the effective domain of φ contains W^0; equivalently, φ(x) is finite for every x ∈ W^0.
Let μ > 0 and define an augmented primal problem (P_μ) associated with a GBLP function φ as follows:
(P_μ):   min c^T x + μφ(x)   s.t.   Ax = b,   x ≥ 0.    (8)

Then we have the following results from [4]:
[1] (P_μ) has a unique optimal solution, denoted by x(μ), in W^0;
[2] c^T x(μ) is a monotone decreasing function in μ;
[3] the set { x(μ) : μ > 0 } characterizes an interior, continuous curve in W^0;
[4] given a decreasing positive sequence {μ_k} such that lim_{k→∞} μ_k = 0, if x* = lim_{k→∞} x(μ_k), then x* is the optimum of (P).
[5] Suppose that x̄ is a given interior feasible solution to (P) and that Δx solves the problem

(P̄):   min (c + μ∇φ(x̄))^T Δx   s.t.   AΔx = 0,   ‖X^{-1/2}Δx‖_2 ≤ β,    (9)

where X is any positive definite symmetric matrix and 0 < β < 1. Then Δx defines a moving direction at x̄ of the form

Δx = −X^{1/2}[ I − X^{1/2}A^T(AXA^T)^{-1}AX^{1/2} ] X^{1/2}( c + μ∇_xφ(x̄) ).
Under appropriate conditions on X, c and μ, it is proved in [4] that any convergent feasible sequence of solutions to (P_μ) must converge to the optimal solution to (P) as μ → 0⁺.
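The following sketch computes a scaled projected-gradient direction in the spirit of the formula above. The choices X = diag(x̄)^2 (an affine-scaling-style scaling) and the logarithmic barrier are hypothetical stand-ins, not the exact construction of [4].

```python
import numpy as np

def moving_direction(A, c, x_bar, mu):
    """Scaled projected direction in the spirit of (9); illustrative choices only:
    X = diag(x_bar)^2 and phi(x) = -sum(log x)."""
    grad_phi = -1.0 / x_bar                       # gradient of the log barrier at x_bar
    g = c + mu * grad_phi
    Xh = np.diag(x_bar)                           # X^{1/2} for X = diag(x_bar)^2
    AX = A @ Xh                                   # A X^{1/2}
    # Orthogonal projector onto the null space of A X^{1/2}.
    P = np.eye(len(x_bar)) - AX.T @ np.linalg.solve(AX @ AX.T, AX)
    dx = -Xh @ P @ Xh @ g                         # -X^{1/2} [I - ...] X^{1/2} (c + mu grad_phi)
    return dx

if __name__ == "__main__":
    A = np.array([[1.0, 1.0, 1.0]])               # feasible set: x1 + x2 + x3 = 1, x > 0
    c = np.array([1.0, 2.0, 0.0])
    x_bar = np.array([1/3, 1/3, 1/3])
    dx = moving_direction(A, c, x_bar, mu=0.1)
    print(dx, A @ dx)                             # A dx ≈ 0: the step preserves feasibility
```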
The next section collects a sequence of lemmas in [1] needed in the proof of the asymptotic behavior of the
system’s trajectories.
III. CONTINUOUS DYNAMICAL SYSTEMS RESULTS
In this section, we formulate an analogous continuous dynamical system and prove that the system’s
trajectories converge to the optimal solution of (P) under some appropriate conditions.
Let A be an m × n matrix of full row rank and let R^n_+ denote the set { x ∈ R^n : x ≥ 0 }. For x ∈ W, let z = x − r for some r ∈ R^n. Then Ax = b implies Az + Ar = b. Taking r = A^T(AA^T)^{-1}b, we see that Az = 0. Therefore the system Az = 0 is consistent if and only if the system Ax = b is consistent. In the sequel we let r ∈ R^n be such that Ar = b and z = x − r.
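A minimal numerical check of this change of variables (illustrative data only):

```python
import numpy as np

# Particular solution r = A^T (A A^T)^{-1} b, so that z = x - r satisfies A z = 0
# whenever A x = b.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])            # full row rank
b = np.array([3.0, 2.0])
r = A.T @ np.linalg.solve(A @ A.T, b)
print(np.allclose(A @ r, b))               # True: A r = b

x = np.array([1.0, 1.0, 1.0])              # one solution of A x = b
z = x - r
print(np.allclose(A @ z, 0.0))             # True: z lies in the null space of A
```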
Let β be a constant such that 0 < β < 1. Let X be any positive definite symmetric matrix of order n, and let p and q be norm conjugates of each other such that 1 < p, q < ∞ and 1/p + 1/q = 1. Let a_j denote the j-th row of A for j ∈ {1, 2, ..., m}. For fixed A, define the map ‖A(·)‖_p : R^n → R^1_+ by

‖Az‖_p = ( ∑_{j=1}^{m} |a_j z|^p )^{1/p}.
Remark
‖Az‖_p is called the p-norm of the function Az. Unless explicitly stated, we use the 2-norm in this article.
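A small sketch of the map just defined, with illustrative data and numpy's vector norm as a cross-check:

```python
import numpy as np

def p_norm_Az(A, z, p):
    """Compute ||A z||_p = (sum_j |a_j z|^p)^(1/p) row by row, as defined above."""
    return np.sum(np.abs(A @ z) ** p) ** (1.0 / p)

A = np.array([[1.0, -1.0, 0.0],
              [2.0,  0.0, 1.0]])
z = np.array([1.0, 2.0, -1.0])
for p in (1, 2, 3):
    print(p, p_norm_Az(A, z, p), np.linalg.norm(A @ z, p))   # the two values agree
```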
Define ‖A‖_2 := max_{z ≠ 0} ( ‖Az‖_2 / ‖z‖_2 ). For all x ∈ W and for any given r ∈ R^n such that z = x − r, consider the function:
E(z) = E_{μ,β,p,ρ,r}(z) = c^T(z + r) + ρ‖Az‖_p^p + μφ(z + r) + ρ ∑_{j=1}^{n} [(z + r)_j]^− + (β/2)[ ‖X^{-1/2}z‖_2^2 − 1 ]^+,    (10)

where

w^+ = max{w, 0} = { w if w ≥ 0, 0 if w < 0 }   and   w^− = max{−w, 0} = { −w if w ≤ 0, 0 if w > 0 },

for w ∈ R; μ > 0, ρ > 0, β > 0, p > 0.
Note the following:
[1] Property (P1) of Definition 2.2 implies that φ(z + r) > −∞ for all z in the null-space of A and φ(z + r) < +∞ for at least one z in the null-space of A.
[2] Properties (P1) and (P2) of Definition 2.2 imply that ‖∇_zφ(z + r)‖_2 → ∞ as any component of z + r tends to zero.
[3] Property (P2) of Definition 2.2 implies that if { z^k : k = 1, 2, ... } is any positive convergent sequence in the null-space of A such that lim_{k→∞} (z^k + r) = ẑ + r, with (ẑ + r)_j = 0 for some j ∈ {1, 2, ..., n}, then lim_{k→∞} [∇_zφ(z^k + r)]_j = −∞. The latter ensures that the minimization of E(z) is never achieved at the boundary of the set { z : z + r ≥ 0 }, using a gradient projection method in the minimization program.
[4] ρ‖Az‖_p^p is a penalty for the violation of z ∈ N(A).
[5] ρ ∑_{j=1}^{n} [(z + r)_j]^− penalizes violations of z + r ≥ 0.
[6] (β/2)[ ‖X^{-1/2}z‖_2^2 − 1 ]^+ is the Lagrangian term associated with the constraint ‖X^{-1/2}z‖_2^2 − 1 ≤ 0.
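The following sketch assembles E(z) from the pieces itemized above. The logarithmic barrier stands in for the (unspecified) GBLP φ, and the parameter values and data are hypothetical.

```python
import numpy as np

def energy(z, A, c, r, X, mu=0.1, beta=0.5, rho=10.0, p=2):
    """Energy E(z) of (10), with phi(x) = -sum(log x) as an illustrative GBLP choice."""
    x = z + r
    barrier = -np.sum(np.log(x)) if np.all(x > 0) else np.inf
    penalty_null = rho * np.sum(np.abs(A @ z) ** p)        # rho * ||A z||_p^p
    penalty_neg = rho * np.sum(np.maximum(-x, 0.0))        # rho * sum_j [(z + r)_j]^-
    L = np.linalg.cholesky(X)                              # X = L L^T, X symmetric PD
    M = np.linalg.inv(L)                                   # ||M z|| = ||X^{-1/2} z|| since M^T M = X^{-1}
    lagr = (beta / 2.0) * max(np.linalg.norm(M @ z) ** 2 - 1.0, 0.0)
    return c @ x + penalty_null + mu * barrier + penalty_neg + lagr

if __name__ == "__main__":
    A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0, 0.0])
    r = A.T @ np.linalg.solve(A @ A.T, b)      # particular solution, A r = b
    X = np.eye(3)
    z = np.zeros(3)                            # z = 0 corresponds to x = r > 0 here
    print(energy(z, A, c, r, X))
```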
The stage is now set for our dynamical system formulation and the proof of our main result with the aid of the
results in [1].
Let 0 < β < 1 and

S_1 := { z ∈ R^n : Az = 0 } = N(A),   S_2 := { z ∈ R^n : [ ‖X^{-1/2}z‖_2^2 − 1 ]^+ = 0 },   and   T := { z ∈ R^n : z + r ≥ 0 },

where r is given and defined as above. Let t_0 and t be any pair of time variables such that t ≥ t_0 ≥ 0, and let z^0 be an n-dimensional column vector.
For a differentiable function D : R^n → R^1, let ∇_z D(z) denote the gradient of D(z) with respect to z. Observe that ∇_z D(z) ∈ R^n for each z ∈ R^n.
Consider the following dynamical system:

ż(t) = −∇_z E_{μ,β,p,ρ,r}(z(t)),   t ≥ t_0 ≥ 0,    (11)
z(t_0) = z^0,    (12)
z^0 ∈ S_2 ∩ int(T).    (13)

Then ∑_{j=1}^{n} [(z^0 + r)_j]^− = 0, using the definition w^− = max{−w, 0} and the fact that z^0 + r > 0, by virtue of z^0 being in int(T).
System (11) can be treated as a control system of the form

ż = −[ c + ρA^T ψ^{(1)}(z) + ψ^{(2)}(z) + ψ^{(3)}(z) ],    (14)

where ψ^{(1)}(z) = ∇_{Az}‖Az‖_p^p, ψ^{(2)}(z) = μ∇_zφ(z + r) + ρ∇_z ∑_{j=1}^{n} [(z + r)_j]^−, and ψ^{(3)}(z) = (β/2)∇_z[ ‖X^{-1/2}z‖_2^2 − 1 ]^+.
ψ^{(1)}, ψ^{(2)} and ψ^{(3)} can be regarded as controls. These controls will be implemented such that the trajectories of (11), (12) and (13) will be forced into the feasible region S_1 ∩ int(S_2 ∩ T) and maintained there while moving in a direction that decreases c^T(z + r). The following sequence of lemmas will be found useful in the sequel.
Let I_n be the identity matrix of order n and let P = I_n − A^T(AA^T)^{-1}A be the projection matrix onto the null-space of A.
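A quick numerical check of this projection matrix (illustrative data only):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])                     # full row rank
n = A.shape[1]
P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)   # P = I_n - A^T (A A^T)^{-1} A
print(np.allclose(A @ P, 0.0))                      # A P = 0: P maps into N(A)
print(np.allclose(P @ P, P), np.allclose(P, P.T))   # P is an orthogonal projector
```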
The following results were established in [1]:
3.1 Lemma
The dynamics of system (11), when restricted to S_1 ∩ int(S_2 ∩ T), are described by:

ż = −P∇_z E(z),    (15)
Az = 0.    (16)
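A forward-Euler discretization of (15)-(16), using the simplified illustrative energy of the earlier sketches (log barrier, hypothetical parameter values and data), shows the projected flow staying in the null space of A while decreasing c^T(z + r):

```python
import numpy as np

def grad_E(z, A, c, r, mu=0.05, rho=10.0):
    """Gradient of a simplified illustrative E(z): c^T(z+r) + rho*||Az||_2^2 + mu*phi(z+r),
    with the log barrier phi; the remaining penalty terms of (10) are inactive on int(T)."""
    x = z + r
    return c + 2.0 * rho * (A.T @ (A @ z)) - mu / x

A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0, 0.0])
r = A.T @ np.linalg.solve(A @ A.T, b)
P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)
z = np.zeros(3)                                   # start at x = r, interior feasible
h = 0.01                                          # Euler step size
for k in range(2000):
    z = z - h * P @ grad_E(z, A, c, r)            # zdot = -P grad E(z)
print(np.round(z + r, 4), c @ (z + r))            # x = z + r approaches the LP optimum
print(np.allclose(A @ z, 0.0))                    # the trajectory stays in N(A)
```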
3.2 Lemma
If 0 ≠ z ∈ S_1 ∩ S_2, then P∇_z[ (β/2)( ‖X^{-1/2}z‖_2^2 − 1 )^+ ] = 0.
3.3 Lemma
For any p > 1 and z ∉ S_1,

∇_z‖Az‖_p = ( A^T / ‖Az‖_p^{p-1} ) ( |a_1z|^{p-1} sgn(a_1z), |a_2z|^{p-1} sgn(a_2z), ..., |a_mz|^{p-1} sgn(a_mz) )^T.    (17)
3.4 Lemma (Noble [13], p. 429)
For any square matrix M,

‖M‖_2 = ( maximum eigenvalue of M^T M )^{1/2}.
3.5 Definition
The core of the target S_1 for the dynamical system

ż = −∇_z E(z)    (18)

is the set of all initial points z^0 ∈ R^n that can be driven to the target S_1 in finite time and maintained there thereafter by an appropriate implementation of some feasible control procedure. Denote this set by core(S_1). The following lemma demonstrates that under certain conditions core(S_1) is nonempty.
3.6 Lemma
If

z^T A^T(AA^T)^{-1}A ∇_z E(z) ≥ k‖Az‖_p    (19)

for some p > 1 and z ∉ S_1, with some k > 0, then the trajectories of

ż = −∇_z E(z),    (20)
z(t_0) = z^0,    (21)

hit S_1 in finite time and remain there thereafter.
Our task is to construct appropriate energy and Lyapunov functions and impose appropriate conditions under
which the trajectories of:
On Path-Following Dynamical Systems Trajectories…
www.ijmsi.org 25 | P a g e
ż = −∇_z E(z),    (22)
z(t_0) = z^0,    (23)

converge to z_μ, where z_μ + r is an optimal solution of the augmented problem (8) for each μ > 0. Then we can appeal to Theorem 2.2 of [4] to assert the convergence to the optimal solution of (P), noting that a positive decreasing sequence of parameters {μ_k} with limit 0 may be used in place of μ.
The above approach holds a lot of promise for an extension of our result to neural networks where dynamical
systems, energy and Lyapunov functions are used extensively.
IV. MAIN RESULT
The following tool is needed for the proof of the main result:
Let d_1 be a nonnegative constant such that

‖w^T A^T(AA^T)^{-1}‖_q ≤ d_1 ‖w^T A^T(AA^T)^{-1}‖_2    (24)

for all w ∈ R^n and q > 1. See Stoer and Bulirsch [14], p. 185.
Let d_2 = max{ u_1^{1/2}, u_2^{1/2}, ..., u_n^{1/2} }, where u_i is an eigenvalue of X^{-2} for i = 1, 2, ..., n. See [13], p. 429.
Remark
The positive definiteness of X guarantees the existence and the positivity of d_2. Also, the existence of d_1 is assured by the equivalence property of p-norms. Therefore d_1 and d_2 are well-defined. Furthermore,

‖X^{-1}‖_2 ≤ d_2   and   ‖X^{-1/2}‖_2^2 ≤ d_2,

using ([13], p. 429) and the fact that λ is an eigenvalue of M if and only if λ^j is an eigenvalue of M^j for any positive integer j.
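A numerical illustration of d_2 and of the two bounds in the remark (X below is an arbitrary symmetric positive definite example):

```python
import numpy as np

X = np.array([[2.0, 0.5],
              [0.5, 1.0]])                                # symmetric positive definite
Xinv = np.linalg.inv(X)
u = np.linalg.eigvalsh(Xinv @ Xinv)                       # eigenvalues of X^{-2}
d2 = np.sqrt(np.max(u))                                   # d_2 = max_i u_i^{1/2}
print(d2, np.linalg.norm(Xinv, 2))                        # ||X^{-1}||_2 <= d_2 (equality here)

w, V = np.linalg.eigh(Xinv)
Xinv_half = V @ np.diag(np.sqrt(w)) @ V.T                 # symmetric X^{-1/2}
print(np.linalg.norm(Xinv_half, 2) ** 2)                  # ||X^{-1/2}||_2^2 <= d_2
```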
4.1 Theorem
Let 1 < p < ∞ and let the following condition hold:

ρp − k ≥ d_1( ‖c + μ∇_zφ(z + r)‖_2 + βd_2 ) ‖A^T(AA^T)^{-1}‖_2,    (25)

where μ, k and β are sufficiently small, p is sufficiently large and z ∈ S_2 ∩ int T. If z ∈ S_1^c ∩ S_2 ∩ int T, then condition (19) of Lemma 3.6 holds and the equilibrium of the system (22)-(23) is asymptotically stable. Furthermore, if μ in (22) is replaced by a positive decreasing sequence of parameters μ_k, k = 1, 2, ..., with limit zero, then the solution of (22)-(23) in x-space converges to the optimal solution of the linear programming problem (P).
Proof
The strategy for the proof is as follows:
[1] We show that condition (19) of Lemma 3.6 holds, which assures the feasibility of the trajectories of (22)-(23) at some time t and thereafter.
[2] We show that the function E_{μ,β,p,ρ,r}(z) is a nonincreasing function of time along the trajectories of (26) below and that its time derivative vanishes only at an equilibrium solution of (26).
[3] We show that the equilibria of (26) satisfy the Karush-Kuhn-Tucker optimality condition. This guarantees the uniqueness of the equilibrium.
[4] Finally, we construct a Lyapunov function whose time derivative is negative definite along the trajectories of the system (26) and then invoke Lyapunov theory to conclude that the trajectories converge to the unique equilibrium of (26). Consequently, the results of Section II and, in particular, Theorem 2.2 of [4] can be used to conclude that the trajectories converge to the optimal solution of problem (P).
Suppose the trajectories of (22)-(23) are restricted to S_1. By Lemma 3.1, the dynamics are given by

ż = −P∇_z E(z).    (26)

If z ∈ S_1^c ∩ int T, then Lemma 3.6 implies that the trajectories of (22)-(23) reach S_1 in finite time and can be forced to remain there thereafter, provided (19) holds. Now:
∇_z E(z) = c + ρ∇_z‖Az‖_p^p + μ∇_zφ(z + r) + ρ∇_z ∑_{j=1}^{n} [(z + r)_j]^− + (β/2)∇_z[ ‖X^{-1/2}z‖_2^2 − 1 ]^+.    (27)
Next, we expand z^T A^T(AA^T)^{-1}A ∇_z E(z) as follows:

z^T A^T(AA^T)^{-1}A ∇_z E(z)
= z^T A^T(AA^T)^{-1}Ac + ρ z^T A^T(AA^T)^{-1}A ∇_z‖Az‖_p^p + μ z^T A^T(AA^T)^{-1}A ∇_zφ(z + r) + β z^T A^T(AA^T)^{-1}AX^{-1}z
=: T_1 + T_2 + T_3 + T_4,    (28)

the term arising from ρ∑_{j=1}^{n}[(z + r)_j]^− being absent because z ∈ int(T).
From Lemma 3.3, we get:

T_2 = ρ z^T A^T(AA^T)^{-1}A · p‖Az‖_p^{p-1} ∇_z‖Az‖_p
    = ( ρp / ‖Az‖_p^{p-1} ) z^T A^T(AA^T)^{-1}AA^T ( |a_1z|^{p-1}sgn(a_1z), |a_2z|^{p-1}sgn(a_2z), ..., |a_mz|^{p-1}sgn(a_mz) )^T
    = ( ρp / ‖Az‖_p^{p-1} ) (Az)^T ( |a_1z|^{p-1}sgn(a_1z), |a_2z|^{p-1}sgn(a_2z), ..., |a_mz|^{p-1}sgn(a_mz) )^T.    (29)
Hence:

T_2 = ( ρp / ‖Az‖_p^{p-1} ) ∑_{i=1}^{m} |a_iz|^{p-1} sgn(a_iz) a_iz = ( ρp / ‖Az‖_p^{p-1} ) ∑_{i=1}^{m} |a_iz|^{p}   (using sgn(a_iz) a_iz = |a_iz|)
    = ρp ‖Az‖_p^p / ‖Az‖_p^{p-1} = ρp ‖Az‖_p.    (30)

Also,

T_1 = z^T A^T(AA^T)^{-1}Ac = c^T A^T(AA^T)^{-1}Az, so that
|T_1| ≤ ‖c^T A^T(AA^T)^{-1}‖_q ‖Az‖_p ≤ d_1 ‖c^T A^T(AA^T)^{-1}‖_2 ‖Az‖_p    (31)

(by Hölder's inequality and [14], p. 482), where p is the conjugate exponent of q; that is, 1/p + 1/q = 1, p, q > 1.
Similarly,

T_4 = β z^T A^T(AA^T)^{-1}AX^{-1}z = β z^T X^{-1}A^T(AA^T)^{-1}Az,

so that

|T_4| ≤ β ‖z^T X^{-1}A^T(AA^T)^{-1}‖_q ‖Az‖_p ≤ βd_1 ‖z^T X^{-1}A^T(AA^T)^{-1}‖_2 ‖Az‖_p ≤ βd_1d_2 ‖A^T(AA^T)^{-1}‖_2 ‖Az‖_p,

since z ∈ S_2, 0 < β < 1 and in view of the definition of d_2 following (24). By a similar reasoning,

|T_3| ≤ μd_1 ‖∇_zφ(z + r)^T A^T(AA^T)^{-1}‖_2 ‖Az‖_p ≤ μd_1 ‖∇_zφ(z + r)‖_2 ‖A^T(AA^T)^{-1}‖_2 ‖Az‖_p.
Let

S := T_1 + T_2 + T_3 + T_4.    (32)

Then

S ≥ k‖Az‖_p    (33)

for all z ∉ S_1, with some k > 0, if

T_2 − |T_1| − |T_3| − |T_4| ≥ k‖Az‖_p,    (34)
since z^T A^T(AA^T)^{-1}A ∇_z E(z) = T_1 + T_2 + T_3 + T_4. Consequently, (19) is satisfied if (34) is satisfied. Moreover, (34) is satisfied if

ρp − k − d_1( ‖c + μ∇_zφ(z + r)‖_2 + βd_2 ) ‖A^T(AA^T)^{-1}‖_2 ≥ 0,    (35)

so that

ρp − k ≥ d_1( ‖c + μ∇_zφ(z + r)‖_2 + βd_2 ) ‖A^T(AA^T)^{-1}‖_2.    (36)

(36) is satisfied for all z ∈ S_1^c ∩ S_2 ∩ int T if μ, k and β are sufficiently small and p is sufficiently large, and for any sequence {z^k} such that [(z^k + r)_j]^− → 0 as k → ∞, in view of (P2) of Definition 2.2. Now, (36) ⟹ (35) ⟹ (34) ⟹ (19). Hence the null-space feasibility condition z(t) ∈ S_1 can be restored, only in the interior of T, if (36) holds.
To complete the proof, we need the following sequence of results, which shows that, starting from an interior feasible point, the trajectories of (26) transformed to the x-space will converge to the optimal solution of the corresponding linear program (P).
4.2 Proposition
The function E_{μ,β,p,ρ,r}(z) defined in (10) is a nonincreasing function of time on the trajectories of (26).
Proof

(d/dt)E(z) = ∇_z E(z)^T ż = −∇_z E(z)^T P∇_z E(z) = −∇_z E(z)^T P^T P∇_z E(z) = −‖P∇_z E(z)‖_2^2 ≤ 0    (37)

(using P = P^T = P^2), as desired.
4.3 Proposition
The time derivative of E(z) vanishes only at an equilibrium of (26).
Proof
An equilibrium of (26) satisfies the relation P∇_z E(z) = 0. Now

(d/dt)E(z) = 0    (38)

if and only if ‖P∇_z E(z)‖_2^2 = 0, by (37). Therefore

P∇_z E(z) = 0,    (39)

showing that any solution to (38) satisfies (39) and consequently is an equilibrium point of (26).
4.4 Proposition
An equilibrium of (26) satisfies the Karush-Kuhn-Tucker optimality condition.
Proof
By the definition of equilibria,

P∇_z E(z) = 0.    (40)

(40) implies that ∇_z E(z) is in the null-space of P. Since AP = 0, we also have

P^T A^T = PA^T = 0.    (41)

(40) and (41) show that ∇_z E(z) is in the range space of A^T; thus there exists a w ∈ R^m such that

∇_z E(z) = A^T w.    (42)

We deduce from (42) that

c + μ∇_zφ(z + r) + βX^{-1}z = A^T w.    (43)

Let s := μ∇_zφ(z + r) + βX^{-1}z. Then:
A^T w − s − c = 0,
Az = 0,
μ∇_zφ(z + r) + βX^{-1}z − s = 0,
z + r ≥ 0.    (44)

The condition (44) coincides with the Karush-Kuhn-Tucker optimality condition (46) of min_z E(z) in [4] if and only if

βX^{-1}z = 0,    (45)

in which case the positive definiteness of X assures that

z = 0.    (46)

Note also that the convexity of the set S_1 ∩ int(S_2 ∩ T) assures the uniqueness of the solution to the Karush-Kuhn-Tucker optimality condition and, consequently, the uniqueness of the equilibrium of (26). From (37) we deduce that (d/dt)E(z) < 0 for z ≠ z̄, where z̄ is the unique equilibrium of (26). This establishes that the equilibrium of the system (26) is asymptotically stable. Now let μ → 0⁺ to deduce that z̄ + r → x*, the optimal solution of (P). (See [4], Theorem 2.2.)
The condition in (45)-(46) is a complementary-slackness-like condition: if z_j ≠ 0 for some j ∈ {1, 2, ..., n}, then β = 0; and β ≥ 0.
For potential Neural Network applications we also use the Lyapunov theory to show that the trajectories of (26)
converge to the unique equilibrium of (26). To this end, we need to construct an appropriate Lyapunov function.
4.5 Definition
A function V : D ⊆ R^n → R is called a Lyapunov function if V is continuously differentiable and positive definite on D. If the time derivative of V is negative definite along the trajectories of the dynamical system (d/dt)x(t) = f(x(t)), where f(0) = 0 and x ∈ D, then the equilibrium x = 0 is asymptotically stable. (See [13], pp. 203-205.)
It is not clear that z = 0 is the unique equilibrium of (26); therefore we perform a linear translation of the equilibrium to the origin. Let z̄ be the equilibrium solution of (26) and let y = z − z̄. Then

ẏ = ż = −P∇_z E(z) = −P∇_z E(z̄ + y) = −P[ ∇_z E(z̄ + y) − ∇_z E(z̄) ],    (47)

since P∇_z E(z̄) = 0.
Clearly y = 0 is the equilibrium of the transformed system and it is unique. Let

V(y) = ‖P∇_z E(z̄ + y)‖_2^2.    (48)

Then V(0) = 0 and V(y) > 0 if y ≠ 0. Also, V(y) is continuous in y; therefore V is a Lyapunov function.
Note that

(d/dt)V(y(t)) = 2[ P∇_z E(z̄ + y) ]^T (d/dt)[ P∇_z E(z̄ + y) ],    (49)

which vanishes if and only if y = 0. Therefore the time derivative of V is negative definite along the solutions (on the trajectories) of (47). We now appeal to Lyapunov theory to assert that the origin is an asymptotically stable solution of (47). Consequently, the equilibrium z̄ of the system (26) is asymptotically stable. Now let μ → 0⁺; then z̄ + r → x*, the optimal solution of (P). This follows from Theorem 2.2 of [4].
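As a numerical companion to the Lyapunov argument, the sketch below reuses the same simplified illustrative energy and projector as the earlier sketches (log barrier, hypothetical parameter values and data) and monitors V along the discretized trajectory; the values decrease towards zero.

```python
import numpy as np

def grad_E(z, A, c, r, mu=0.05, rho=10.0):
    # Gradient of the same simplified illustrative energy used in the earlier sketch.
    return c + 2.0 * rho * (A.T @ (A @ z)) - mu / (z + r)

A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0, 0.0])
r = A.T @ np.linalg.solve(A @ A.T, b)
P = np.eye(3) - A.T @ np.linalg.solve(A @ A.T, A)

def V(z):
    """Lyapunov candidate V = ||P grad E||_2^2, cf. (48) after translating the equilibrium."""
    return np.linalg.norm(P @ grad_E(z, A, c, r)) ** 2

z, h = np.zeros(3), 0.01
values = []
for k in range(2000):
    values.append(V(z))
    z = z - h * P @ grad_E(z, A, c, r)        # Euler step of zdot = -P grad E(z)
print(values[0], values[500], values[-1])     # V decreases towards zero along the trajectory
```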
V. CONCLUSION
This work was motivated partly by interior-point concepts and largely by the path-following methods in [4] for solving linear programming problems. Many real-life problems which can be formulated as linear programming problems are dynamic in nature; for example, the inventory level of some item at a given time, and changes in demand levels of some consumer goods due to price fluctuations and seasonal variation. Also, on-line optimization may be required in many application areas, such as satellite guidance, robotics, and oil outputs from oil wells and refinery operations. Some of these problems may not require exact optimal solutions but particular solutions at specified tolerance levels, within feasible guidelines or constraints. In particular, solutions at positive levels may be desired for all decision variables, implying that interior solutions are desired. These and many other problems of the continuous variety could be more realistically modelled by continuous dynamical systems. Unfortunately, research in this direction has been based mainly on the neural network approach, none of which is interior-point oriented. In Section 3, we formulated an interior-point based dynamical system for solving linear programming problems in standard form. The key ideas for this formulation came from the examination of [4]. Then we stated that, under certain conditions, the solutions of our dynamical system would converge to the solution of a corresponding linear programming problem in standard form. We proceeded to cite some relevant results in [1] and establish a sequence of propositions which would be needed to prove that our solutions would have the right convergence property: the convergence of the trajectories of the dynamical system to the optimal solution of the linear program under appropriate assumptions. This approach holds a lot of promise for an extension of our result to neural networks, where dynamical systems, energy and Lyapunov functions are used extensively.
REFERENCES
[1] Ukwu, C. (2014o). "On interior-point methods, related dynamical systems results and cores of targets for linear programming," International Journal of Mathematics and Statistics Invention (IJMSI), Vol. 2, Issue 5, April 2014.
[2] Karmarkar, N.K. (1984). "A new polynomial-time algorithm for linear programming," Combinatorica, Vol. 4, pp. 373-395.
[3] Fang, S.C., and Puthenpura, S. (1993). "Linear optimization and extensions," Prentice Hall, Englewood Cliffs, N.J.
[4] Shen, R.L., and Fang, S.C. (1994). "On the generalized path-following methods for linear programming," Optimization, Vol. 30, pp. 235-249.
[5] Bertsekas, D.P. (1975). "Necessary and sufficient conditions for a penalty method to be exact," Mathematical Programming, Vol. 9, pp. 87-99.
[6] Cohen, A., and Grossberg, S. (1983). "Absolute stability of global pattern formation and parallel memory storage by competitive neural networks," IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-13, No. 5, pp. 815-825.
[7] Hopfield, J., and Tank, D. (1986). "Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit," IEEE Transactions on Circuits and Systems, Vol. CAS-33, No. 5, pp. 533-541.
[8] Wang, J. (1991). "A time-varying recurrent neural system for convex programming," Proc. Int. Joint Conf. on Neural Networks, Seattle, Vol. 1, pp. 147-152.
[9] Wang, J. (1992). "Analogue neural network for solving the assignment problem," Electronics Letters, Vol. 28, No. 11, pp. 1047-1050.
[10] Wang, J. (1993). "Analysis and design of a recurrent neural network for linear programming," IEEE Transactions on Circuits and Systems-I, Vol. 40, No. 9, pp. 613-618.
[11] Wang, J. (1991). "On the asymptotic properties of recurrent neural networks for optimization," Int. J. Pattern Recognition and Artificial Intelligence, Vol. 5, No. 4, pp. 581-601.
[12] Zah, H. (1995). "Solving linear programming problems with neural networks: A comparative study," IEEE Transactions on Neural Networks, Vol. 6, No. 1, pp. 94-105.
[13] Noble, B. (1969). "Applied linear algebra," Prentice-Hall, Inc., N.J.
[14] Stoer, J., and Bulirsch, R. (1991). "Introduction to numerical analysis," 2nd Ed., Springer-Verlag, N.Y.

Mais conteúdo relacionado

Mais procurados

B02110105012
B02110105012B02110105012
B02110105012
theijes
 
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...
Project KRIT
 

Mais procurados (18)

Unger
UngerUnger
Unger
 
MHT Multi Hypothesis Tracking - Part2
MHT Multi Hypothesis Tracking - Part2MHT Multi Hypothesis Tracking - Part2
MHT Multi Hypothesis Tracking - Part2
 
Design and analysis of ra sort
Design and analysis of ra sortDesign and analysis of ra sort
Design and analysis of ra sort
 
The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)
 
A method for solving quadratic programming problems having linearly factoriz...
A method for solving quadratic programming problems having  linearly factoriz...A method for solving quadratic programming problems having  linearly factoriz...
A method for solving quadratic programming problems having linearly factoriz...
 
Quadratic Programming : KKT conditions with inequality constraints
Quadratic Programming : KKT conditions with inequality constraintsQuadratic Programming : KKT conditions with inequality constraints
Quadratic Programming : KKT conditions with inequality constraints
 
Analytical and Exact solutions of a certain class of coupled nonlinear PDEs u...
Analytical and Exact solutions of a certain class of coupled nonlinear PDEs u...Analytical and Exact solutions of a certain class of coupled nonlinear PDEs u...
Analytical and Exact solutions of a certain class of coupled nonlinear PDEs u...
 
International Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentInternational Journal of Engineering Research and Development
International Journal of Engineering Research and Development
 
Markov Chain Basic
Markov Chain BasicMarkov Chain Basic
Markov Chain Basic
 
Composition of Clans for Solving Linear Systems on Parallel Architectures
Composition of Clans for Solving Linear Systems on Parallel ArchitecturesComposition of Clans for Solving Linear Systems on Parallel Architectures
Composition of Clans for Solving Linear Systems on Parallel Architectures
 
B02110105012
B02110105012B02110105012
B02110105012
 
Coordinate Descent method
Coordinate Descent methodCoordinate Descent method
Coordinate Descent method
 
Quantum Meta-Heuristic Algorithm Based on Harmony Search
Quantum Meta-Heuristic Algorithm Based on Harmony SearchQuantum Meta-Heuristic Algorithm Based on Harmony Search
Quantum Meta-Heuristic Algorithm Based on Harmony Search
 
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...
M.G.Goman, A.V.Khramtsovsky (1997) - Global Stability Analysis of Nonlinear A...
 
Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...
Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...
Using Mathematical Foundations To Study The Equivalence Between Mass And Ener...
 
A Mathematical Model to Solve Nonlinear Initial and Boundary Value Problems b...
A Mathematical Model to Solve Nonlinear Initial and Boundary Value Problems b...A Mathematical Model to Solve Nonlinear Initial and Boundary Value Problems b...
A Mathematical Model to Solve Nonlinear Initial and Boundary Value Problems b...
 
Greedy algorithm
Greedy algorithmGreedy algorithm
Greedy algorithm
 
An algorithm for solving integer linear programming problems
An algorithm for solving integer linear programming problemsAn algorithm for solving integer linear programming problems
An algorithm for solving integer linear programming problems
 

Destaque

Materi makanan dan penyembelihan
Materi makanan dan penyembelihanMateri makanan dan penyembelihan
Materi makanan dan penyembelihan
M. ALI AMIRUDDIN
 
Angles And Measures.pdf
Angles And Measures.pdfAngles And Measures.pdf
Angles And Measures.pdf
bwlomas
 
dBest - Um Framework para Avaliação de Desempenho de Banco de Dados - SBBD 2011
dBest - Um Framework para Avaliação de Desempenho de Banco de Dados - SBBD 2011dBest - Um Framework para Avaliação de Desempenho de Banco de Dados - SBBD 2011
dBest - Um Framework para Avaliação de Desempenho de Banco de Dados - SBBD 2011
Lucas Lemos
 
Generation Y Volunteerism and Civic Action: Engaging Future Leaders and Entre...
Generation Y Volunteerism and Civic Action: Engaging Future Leaders and Entre...Generation Y Volunteerism and Civic Action: Engaging Future Leaders and Entre...
Generation Y Volunteerism and Civic Action: Engaging Future Leaders and Entre...
Sean P. Bender
 
Goal planner 2012
Goal planner 2012Goal planner 2012
Goal planner 2012
Muthu Kumar
 
Question 2 Solution
Question 2 SolutionQuestion 2 Solution
Question 2 Solution
Shinobi
 
4.besp –
4.besp –4.besp –
4.besp –
libfsb
 
Binomial Distributions
Binomial DistributionsBinomial Distributions
Binomial Distributions
JRisi
 
Victorino Table Of Contents
Victorino Table Of ContentsVictorino Table Of Contents
Victorino Table Of Contents
Joyce Victorino
 
Rima Nasser Fine Arts Portfolio
Rima Nasser Fine Arts PortfolioRima Nasser Fine Arts Portfolio
Rima Nasser Fine Arts Portfolio
rima97
 

Destaque (19)

O Carnaval Das Marcas
O Carnaval Das MarcasO Carnaval Das Marcas
O Carnaval Das Marcas
 
03 java programming
03 java programming03 java programming
03 java programming
 
Materi makanan dan penyembelihan
Materi makanan dan penyembelihanMateri makanan dan penyembelihan
Materi makanan dan penyembelihan
 
Angles And Measures.pdf
Angles And Measures.pdfAngles And Measures.pdf
Angles And Measures.pdf
 
dBest - Um Framework para Avaliação de Desempenho de Banco de Dados - SBBD 2011
dBest - Um Framework para Avaliação de Desempenho de Banco de Dados - SBBD 2011dBest - Um Framework para Avaliação de Desempenho de Banco de Dados - SBBD 2011
dBest - Um Framework para Avaliação de Desempenho de Banco de Dados - SBBD 2011
 
L25052056
L25052056L25052056
L25052056
 
Generation Y Volunteerism and Civic Action: Engaging Future Leaders and Entre...
Generation Y Volunteerism and Civic Action: Engaging Future Leaders and Entre...Generation Y Volunteerism and Civic Action: Engaging Future Leaders and Entre...
Generation Y Volunteerism and Civic Action: Engaging Future Leaders and Entre...
 
Goal planner 2012
Goal planner 2012Goal planner 2012
Goal planner 2012
 
ENFERMEDADES RESPIRATORIAS
ENFERMEDADES RESPIRATORIASENFERMEDADES RESPIRATORIAS
ENFERMEDADES RESPIRATORIAS
 
Training a
Training aTraining a
Training a
 
Theoretical study of the effect of hydroxy subgroup on the electronic and spe...
Theoretical study of the effect of hydroxy subgroup on the electronic and spe...Theoretical study of the effect of hydroxy subgroup on the electronic and spe...
Theoretical study of the effect of hydroxy subgroup on the electronic and spe...
 
Cibernetica
CiberneticaCibernetica
Cibernetica
 
Hf2412861289
Hf2412861289Hf2412861289
Hf2412861289
 
Ruggles Place
Ruggles PlaceRuggles Place
Ruggles Place
 
Question 2 Solution
Question 2 SolutionQuestion 2 Solution
Question 2 Solution
 
4.besp –
4.besp –4.besp –
4.besp –
 
Binomial Distributions
Binomial DistributionsBinomial Distributions
Binomial Distributions
 
Victorino Table Of Contents
Victorino Table Of ContentsVictorino Table Of Contents
Victorino Table Of Contents
 
Rima Nasser Fine Arts Portfolio
Rima Nasser Fine Arts PortfolioRima Nasser Fine Arts Portfolio
Rima Nasser Fine Arts Portfolio
 

Semelhante a C025020029

Numerical Solutions of Burgers' Equation Project Report
Numerical Solutions of Burgers' Equation Project ReportNumerical Solutions of Burgers' Equation Project Report
Numerical Solutions of Burgers' Equation Project Report
Shikhar Agarwal
 
Beyond function approximators for batch mode reinforcement learning: rebuildi...
Beyond function approximators for batch mode reinforcement learning: rebuildi...Beyond function approximators for batch mode reinforcement learning: rebuildi...
Beyond function approximators for batch mode reinforcement learning: rebuildi...
Université de Liège (ULg)
 
Batch mode reinforcement learning based on the synthesis of artificial trajec...
Batch mode reinforcement learning based on the synthesis of artificial trajec...Batch mode reinforcement learning based on the synthesis of artificial trajec...
Batch mode reinforcement learning based on the synthesis of artificial trajec...
Université de Liège (ULg)
 
Numerical
NumericalNumerical
Numerical
1821986
 

Semelhante a C025020029 (20)

Numerical Solutions of Burgers' Equation Project Report
Numerical Solutions of Burgers' Equation Project ReportNumerical Solutions of Burgers' Equation Project Report
Numerical Solutions of Burgers' Equation Project Report
 
Averaging Schemes For Solving Fixed Point And Variational Inequality Problems
Averaging Schemes For Solving Fixed Point And Variational Inequality ProblemsAveraging Schemes For Solving Fixed Point And Variational Inequality Problems
Averaging Schemes For Solving Fixed Point And Variational Inequality Problems
 
Semi-global Leaderless Consensus with Input Saturation Constraints via Adapti...
Semi-global Leaderless Consensus with Input Saturation Constraints via Adapti...Semi-global Leaderless Consensus with Input Saturation Constraints via Adapti...
Semi-global Leaderless Consensus with Input Saturation Constraints via Adapti...
 
R180304110115
R180304110115R180304110115
R180304110115
 
A New Neural Network For Solving Linear Programming Problems
A New Neural Network For Solving Linear Programming ProblemsA New Neural Network For Solving Linear Programming Problems
A New Neural Network For Solving Linear Programming Problems
 
9.pdf
9.pdf9.pdf
9.pdf
 
Quantum algorithm for solving linear systems of equations
 Quantum algorithm for solving linear systems of equations Quantum algorithm for solving linear systems of equations
Quantum algorithm for solving linear systems of equations
 
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi...
 
A time study in numerical methods programming
A time study in numerical methods programmingA time study in numerical methods programming
A time study in numerical methods programming
 
A time study in numerical methods programming
A time study in numerical methods programmingA time study in numerical methods programming
A time study in numerical methods programming
 
The International Journal of Engineering and Science (The IJES)
 The International Journal of Engineering and Science (The IJES) The International Journal of Engineering and Science (The IJES)
The International Journal of Engineering and Science (The IJES)
 
A New SR1 Formula for Solving Nonlinear Optimization.pptx
A New SR1 Formula for Solving Nonlinear Optimization.pptxA New SR1 Formula for Solving Nonlinear Optimization.pptx
A New SR1 Formula for Solving Nonlinear Optimization.pptx
 
Parellelism in spectral methods
Parellelism in spectral methodsParellelism in spectral methods
Parellelism in spectral methods
 
Beyond function approximators for batch mode reinforcement learning: rebuildi...
Beyond function approximators for batch mode reinforcement learning: rebuildi...Beyond function approximators for batch mode reinforcement learning: rebuildi...
Beyond function approximators for batch mode reinforcement learning: rebuildi...
 
Linear algebra havard university
Linear algebra havard universityLinear algebra havard university
Linear algebra havard university
 
Ch06_1-2_Simplex_Method.pdf
Ch06_1-2_Simplex_Method.pdfCh06_1-2_Simplex_Method.pdf
Ch06_1-2_Simplex_Method.pdf
 
15.sp.dictionary_draft.pdf
15.sp.dictionary_draft.pdf15.sp.dictionary_draft.pdf
15.sp.dictionary_draft.pdf
 
Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
Fractional Newton-Raphson Method and Some Variants for the Solution of Nonlin...
 
Batch mode reinforcement learning based on the synthesis of artificial trajec...
Batch mode reinforcement learning based on the synthesis of artificial trajec...Batch mode reinforcement learning based on the synthesis of artificial trajec...
Batch mode reinforcement learning based on the synthesis of artificial trajec...
 
Numerical
NumericalNumerical
Numerical
 

Último

Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Victor Rentea
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
panagenda
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 

Último (20)

ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
Modular Monolith - a Practical Alternative to Microservices @ Devoxx UK 2024
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptx
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
Exploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with MilvusExploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with Milvus
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdf
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectors
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Ransomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfRansomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdf
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 

C025020029

  • 1. International Journal of Mathematics and Statistics Invention (IJMSI) E-ISSN: 2321 – 4767 P-ISSN: 2321 - 4759 www.ijmsi.org Volume 2 Issue 5 || May. 2014 || PP-20-29 www.ijmsi.org 20 | P a g e On Path-Following Dynamical Systems Trajectories for Linear Optimization Problems in Standard Form Ukwu Chukwunenye Department of Mathematics University of Jos P.M.B 2084 Jos, Nigeria ABSTRACT : This article examined path-following methods for solving linear programming problems. Then it collected key relevant results in Ukwu [1], formulated analogous continuous dynamical systems, and established a sequence of propositions which were invoked to prove that the system’s trajectories converge to the optimal solution of a linear programming problem in standard form, under appropriate conditions. These results were made possible by the exploitation of norm properties, their derivatives and the theory of ordinary differential equations, paving the way for the pursuits of satisficing solutions of some linear optimization problems that may not require exact optimal solutions but particular solutions at specified tolerance levels, within feasible guidelines or constraints. KEYWORDS: Convergent, Dynamics, Karmarkar, Optimal, Trajectories. I. INTRODUCTION AND MOTIVATION The application of linear programming to business, management, engineering and structured decision processes has been quite remarkable. Since the development of the simplex method in 1947 by G.B. Dantzing, there has been a flurry of research activities in the designing of solution methods for linear programming, mostly aimed at realizing more effective and efficient algorithmic computer implementations and computing complexity reductions. In the Fall of 1984, Karmarkar [2] of AT & Bell Laboratories proposed a new polynomial-time algorithm for solving linear optimization problems. The new algorithm not only possesses better complexity than the Simplex method in the worst-case scenario, but also shows the potential to rival the Simplex algorithm for large-scale, real-world applications. This development quickly captured the attention of Operations Research community. Radically different from the Simplex method, Karmarkar’s original algorithm considers a linear programming problem over a simplex structure and moves through the interior of the polytope of feasible domain by transforming the space at each step to place the current solution at the center of the polytope in the transformed space. Then the solution is moved in the direction of projected steepest descent far enough to avoid the boundary of the feasible region in order to remain interior. Next, the inverse transformation of the improved solution is taken to map it back to the original space to obtain a new interior solution. The process is repeated until an optimum is obtained with a desired level of accuracy. Karmarkar’s standard form for linear programming can be described as follows:  1 min . . 0 1 0 (1) T T c x s t Ax LP e x x          where A is an m n matrix of full row rank,  1,1,...,1 T e  is the column vector of n ones, c is an n- dimensional column vector and T denotes transpose. The basic assumptions of Karmarkar’s algorithm include: 0Ae  (2) the optimal objective value of (1) is zero. (3) Notice that if we define 0 e x n  , then assumption (2) implies that x0 is a feasible solution of (1) and each component of x0 has the positive value 1 n .Any feasible solution x of (1) is called an interior feasible solution if each component of x is positive. 
This implies that x is not on the boundary of the feasible region; needless to say
  • 2. On Path-Following Dynamical Systems Trajectories… www.ijmsi.org 21 | P a g e that the constraint 1 T e x  in (1) implies that 1 .Therefore1j n j x    1,1,...,1 , T e  leading to the conclusion that a consistent problem in Karmarkar’s standard form has a finite optimum. In Fang and Puthenpura [3], it is shown that any linear programming problem in standard form can be expressed in Karmarkar’s standard form. Karmarkar’s algorithm and its specifics are well-exposed in [3]. Karmarkar’s algorithm is an interior-point iterative scheme for solving linear programming problems. Interior-point methods approach the optimal solution of the linear program from the interior of the feasible region by generating a sequence of parameterized interior solutions. The specifics of these methods have already been discussed in [1]. The primary focus of this article will be on path-following methods. The basic idea of path-following is to incorporate a barrier function into the linear objective. By parameterizing the barrier function, corresponding minimizers form a path that leads to an optimal solution of the linear program. The main motivation for this work comes from the work of Shen and Fang [4], in which the “generalized barrier functions” for linear programming were defined to create an ideal interior trajectory for path-following. The key components such as the moving direction and the criterion of closeness required for a path-following algorithm were introduced for designing a generic path-following algorithm with convergence and polynomiality proofs under certain conditions. This work is aimed at exploiting the convergence results in [4] to a parameterized continuous dynamical system. This would lead to the construction of appropriate energy and Lyapunov functions which would be utilized to show that the trajectories of the dynamical system converge to the optimal solution of the linear program under appropriate assumptions. One is not aware of any interior-point dynamic solver reported in the literature. Most dynamic solvers have been used for the neural network approach. Such investigations can be referred to in Bertsekas [5], Cohen and Grossberg [6], Hopfield and Tanks [7], Wang [8, 9, 10, 11],and Zah [12]. In section 4, we formulate our supporting propositions and main results. Section 5 presents our conclusions and direction of a follow-up research. II. INTERIOR-POINT METHODS Interior-point methods such as Affine Scaling Methods, Potential Reduction Methods and Path- Following Methods are well exposed in [1]. Our main result hinges on Path-following methods. 2.1 Path-following Methods Consider a linear program:   min . . (4) 0 T c x P s t Ax b x        Let    0 : , 0 and : , 0 n n W x Ax b x W x Ax b x       R R be the interior feasible domain. Let the following assumptions hold: A has full row rank, (5) 0 W W   , where 0W is the closure of 0 W , and  is the empty set (6) W is compact. (7) 2.2 Definition A function :W  R is called a generalized barrier function for linear programming (GBLP), if  1 :P W  R is proper, strictly convex and differentiable, where R is the extended real line. Remark The properness property of  is equivalent to the requirement that  x be strictly bounded below by  for all x W and be strictly bounded above by  for some x W . (P2) if the sequence   0 k x W converges to x with the th i component, 0i x  then   lim k k i x    (P3) the effective domain of  contains 0 W . Equivalently,   0 . 
is finiteW x W s t x  . Let 0  and define an augmented primal problem  u P associated with a GBLP function follows:
  • 3. On Path-Following Dynamical Systems Trajectories… www.ijmsi.org 22 | P a g e    min . . (8) 0 T u c x x P s t Ax b x         Then we have the following results from [4]: [1]  u P has a unique optimal solution, denoted by  x  , in 0 W , [2]   T c z  is a monotone decreasing function in  , [3] The set   : 0x    characterizes an interior, continuous curve in 0 W . [4] Given a decreasing positive sequence k  such that  lim 0, if * limk k k k x x     then x* is the optimum of (P). [5] Suppose that x is a given interior feasible solution to (P) and x solves the problem:     1 21 2 min . . 0 (9) 1, T c x x P s t A x X x                       where X is any positive definite symmetric matrix and 0 1  . Then, x defines a moving direction at x below:      12 . T T x X I XA AX A AX X c x                 Under appropriate condition on X , c and  , it is proved in [4], that any convergent feasible sequence of solutions to u P must converge to the optimal solution to (P) as 0 .   The next section collects a sequence of lemmas in [1] needed in the proof of the asymptotic behavior of the system’s trajectories. III. CONTINUOUS DYNAMICAL SYSTEMS RESULTS In this section, we formulate an analogous continuous dynamical system and prove that the system’s trajectories converge to the optimal solution of (P) under some appropriate conditions. Let A be an m x n matrix of full row rank. Let n  R denote the set : 0 n x x R . For x W , let z x r  for some n r R . Then .Ax b Az Ar b    Taking   1 , T T r A AA b    we see that 0Az  . Therefore, the system 0Az  is consistent if and only if the system Ax b is consistent. In the sequel we let n r  R be such that Ar b  and z x r  . Let  be a constant such that 0 1  . Let X be any positive definite symmetric matrix of order n and let p and q be norm conjugates of each other such that 1 ,p q   and 1 1 1p q  . Let j a denote the th j row of A for  1,2,...,j m . For fixed A, define the map   1 : n A   R R by 1 1 . m p jp j p Az a z          Remark p Az is called the p-norm of the function Az . Unless explicitly stated we use the 2-norm in this article. Define 2 02 2 maxz Az A z           . For all x W and for any given n rR such that z x r  consider the function:
  • 4. On Path-Following Dynamical Systems Trajectories… www.ijmsi.org 23 | P a g e             , , , , 21 2 2 1 (10) p p r n T p j j E z E z c z r p Az r z r z r X z                    where     if 0 if 0 max ,0 and max ,0 0 if 0 0 if 0 w w w w w w w w w w                   , for ; 0, 0, 0, 0.w r p     R Note the following: [1] Property (P1) of definition 2.2 implies that,  z r    for all z in the null-space of A and  z r    for at least one z in the null-space of A. [2] Properties (P1) and (P2) of definition 2.2 imply that   2 z r    , for. [3] Property (P2) of definition 2.2 implies that if  : 1,2,.... k z k  is any positive convergent sequence in the null-space of A such that   ˆlim k k z r z r    , with ˆ 0j z r  , for some  1, 2,...,j n , then   lim k k z j z r     . The latter ensures that the minimization of  E z is never achieved at the boundary of the set : 0z z r  , using a gradient projection method in the minimization program. [4] p p Az is penalty for the violation of  z N A . [5]   1 j n j r z r    penalizes violations of z r  0. [6]  21 2 2 X z    is the Lagrangian term associated with the constraint 1 2 2 2 0X z     . The stage is now set for our dynamical system formulation and the proof of our main result with the aid of the results in [1]. Let 0 1  and         21 2 1 2 2 : 0 , : 0 , and : 0 , n n n S z Az N A S z X z T z z r             R R R where r is given and defined as on the previous page. Let 0 andt t be any pair of time variables such that 0 0t t  , and let z0 be an n-dimensional column vector. For a differentiable function 1 : n D R R , let  z D z denote the gradient of  D z with respect to z. Observe that   n z D z  R for each n zR . Consider the following dynamical system:        0 0 0 0 2 , , , , ; 0 (11) ( ) (12) int . (13) z p p r z t E z t t t z t z z S T           Then  0 0 0 n j j z r     , using the definition,  max , 0w w    and the fact that 0 0z r  , by virtue of z0 being in int(T). System (11) can be treated as a control system of the form:  1 (3 (14) T z c A      where              1 1 2 2 ,Az p z p Az z z r              and        23 3 1 2 2z z X z          .
The following sequence of lemmas will be found useful in the sequel. Let I_n be the identity matrix of order n and let P = I_n - A^T(AA^T)^{-1}A be the projection matrix onto the null-space of A. The following results were established in [1].

3.1 Lemma
When restricted to S_1 \cap S_2 \cap int(T), the dynamics of system (11) are described by

    \dot{z} = -P\,\nabla_z E(z),        (15)
    Az = 0.                             (16)

3.2 Lemma
If z \in S_1 \cap S_2 and z \neq 0, then

    z^T P\,\nabla_z\frac{\mu}{2}\big(\alpha^2 - \|\tilde{X}^{-1}z\|_2^2\big) < 0.

3.3 Lemma
For any p > 1 and z \notin S_1,

    \nabla_z\|Az\|_p = \|Az\|_p^{1-p}\,A^T\big(|a_1 z|^{p-1}\mathrm{sgn}(a_1 z), |a_2 z|^{p-1}\mathrm{sgn}(a_2 z), ..., |a_m z|^{p-1}\mathrm{sgn}(a_m z)\big)^T.        (17)

3.4 Lemma (Noble [13], p. 429)
For any square matrix M, \|M\|_2^2 equals the maximum eigenvalue of M^T M.

3.5 Definition
The core of the target S_1 for the dynamical system

    \dot{z} = -\nabla_z E(z)            (18)

is the set of all initial points z^0 \in \mathbb{R}^n that can be driven to the target S_1 in finite time and maintained there thereafter by an appropriate implementation of some feasible control procedure. Denote this set by core(S_1). The following lemma demonstrates that, under certain conditions, core(S_1) is nonempty.

3.6 Lemma
If

    \nabla_z E(z)^T A^T(AA^T)^{-1}Az \ \ge\ k\,\|Az\|_p        (19)

for some p > 1 and some k > 0 and for all z \notin S_1, then the trajectories of

    \dot{z} = -\nabla_z E(z),           (20)
    z(0) = z^0,                         (21)

hit S_1 in finite time and remain there thereafter.
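A crude way to visualise the restricted dynamics of Lemma 3.1 numerically is to discretise the projected flow (15) with a forward Euler scheme. The sketch below is an assumption-laden illustration (fixed step size, a user-supplied gradient callable such as grad_energy from the previous sketch), not a statement about the continuous-time results.

    import numpy as np

    def nullspace_projector(A):
        """P = I_n - A^T (A A^T)^{-1} A, the projector onto N(A) used in Lemma 3.1."""
        n = A.shape[1]
        return np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)

    def projected_flow(z0, A, grad_E, h=1e-3, steps=20000):
        """Forward-Euler discretisation of z' = -P grad E(z); grad_E is a callable of z alone."""
        P = nullspace_projector(A)
        z = z0.copy()
        for _ in range(steps):
            z = z - h * (P @ grad_E(z))
        return z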
Our task is to construct appropriate energy and Lyapunov functions and to impose appropriate conditions under which the trajectories of

    \dot{z} = -\nabla_z E(z),           (22)
    z(0) = z^0,                         (23)

converge to z(\mu) = x(\mu) - r, where z(\mu) + r is an optimal solution of (5) for each \mu > 0. Then we can appeal to Theorem 2.2 of [4] to assert convergence to the optimal solution of (P), noting that a positive decreasing sequence of parameters \mu_k with limit 0 may be used in place of \mu. This approach holds a lot of promise for an extension of our results to neural networks, where dynamical systems, energy functions and Lyapunov functions are used extensively.

IV. MAIN RESULT

The following tools are needed for the proof of the main result. Let d_1 be a nonnegative constant such that

    \|w^T A^T(AA^T)^{-1}\|_q \ \le\ d_1\,\|w^T A^T(AA^T)^{-1}\|_2        (24)

for all w \in \mathbb{R}^n and q > 1; see Stoer and Bulirsch [14], p. 185. Let

    d_2 = \big(\max\{u_1, u_2, ..., u_n\}\big)^{1/2},

where u_i is an eigenvalue of \tilde{X}^{-2} for i = 1, 2, ..., n; see [13], p. 429.

Remark
The positive definiteness of \tilde{X} guarantees the existence and positivity of d_2, and the existence of d_1 is assured by the equivalence property of p-norms; therefore d_1 and d_2 are well defined. Furthermore, \|\tilde{X}^{-1}\|_2 = d_2 and \|\tilde{X}^{-2}\|_2 = d_2^2, using ([13], p. 429) and the fact that \lambda is an eigenvalue of M if and only if \lambda^j is an eigenvalue of M^j for any positive integer j.
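The constants d_1 and d_2 are easy to compute in practice. The sketch below evaluates d_2 = (max eigenvalue of \tilde{X}^{-2})^{1/2} = \|\tilde{X}^{-1}\|_2 and one admissible choice of d_1 obtained from the finite-dimensional norm equivalence \|v\|_q \le m^{\max(0,\,1/q - 1/2)}\|v\|_2 on \mathbb{R}^m; the example matrix \tilde{X} and this particular choice of d_1 are assumptions for illustration only.

    import numpy as np

    def d2_constant(X_tilde):
        """d2 = (largest eigenvalue of X_tilde^{-2})^{1/2} = ||X_tilde^{-1}||_2."""
        X_inv = np.linalg.inv(X_tilde)
        return float(np.sqrt(np.max(np.linalg.eigvalsh(X_inv @ X_inv))))

    def d1_constant(m, q):
        """One admissible d1, from ||v||_q <= m^{max(0, 1/q - 1/2)} ||v||_2 on R^m."""
        return m ** max(0.0, 1.0 / q - 0.5)

    X_tilde = np.diag([1.0, 2.0, 4.0])       # positive definite and symmetric, illustrative
    print(d2_constant(X_tilde))              # 1.0: the largest eigenvalue of X_tilde^{-2} is 1
    print(d1_constant(m=2, q=2.0))           # 1.0 when q >= 2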
4.1 Theorem
Let 1 < p < \infty and let the following condition hold:

    \rho\,\|Az\|_p^{p-1} \ \ge\ k + d_1\Big(\|c\|_2 + \beta\,\big\|\nabla_z\sum_{j=1}^{n}(z_j + r_j)^-\big\|_2 + \mu\,d_2\Big)\|A^T(AA^T)^{-1}\|_2,        (25)

where k, \beta and \mu are sufficiently small, p is sufficiently large and z \in S_2 \cap int(T). If z \in S_1^c \cap S_2 \cap int(T), then condition (19) of Lemma 3.6 holds and the equilibrium of the system (22)-(23) is asymptotically stable. Furthermore, if \mu in (22) is replaced by a positive decreasing sequence of parameters \mu_k, k = 1, 2, ..., with limit zero, then the solution of (22)-(23) in x-space converges to the optimal solution of the linear programming problem (P).

Proof
The strategy of the proof is as follows:

[1] We show that condition (19) of Lemma 3.6 holds; this assures the feasibility of the trajectories of (22)-(23) at some time t and thereafter.
[2] We show that the function E_{p,\rho,\beta,\mu,r}(z) is a nonincreasing function of time along the trajectories of (26) and that its time derivative vanishes only at an equilibrium solution of (26).
[3] We show that the equilibria of (26) satisfy the Karush-Kuhn-Tucker optimality conditions; this guarantees the uniqueness of the equilibrium.
[4] Finally, we construct a Lyapunov function whose derivative is negative definite along the trajectories of (26) and invoke Lyapunov theory to conclude that the trajectories converge to the unique equilibrium of (26). Consequently, the results of Section II and, in particular, Theorem 2.2 of [4] can be used to conclude that the trajectories converge to the optimal solution of problem (P).

Suppose the trajectories of (22)-(23) are restricted to S_1. By Lemma 3.1, the dynamics of (22)-(23) are given by

    \dot{z} = -P\,\nabla_z E(z).        (26)

If z \in S_1^c \cap int(T), then Lemma 3.6 implies that the trajectories of (22)-(23) reach S_1 in finite time and can be forced to remain there thereafter, provided (19) holds. Now

    \nabla_z E(z) = c + \frac{\rho}{p}\nabla_z\|Az\|_p^p + \beta\,\nabla_z\sum_{j=1}^{n}(z_j + r_j)^- - \mu\,\tilde{X}^{-2}z.        (27)

Next we expand \nabla_z E(z)^T A^T(AA^T)^{-1}Az as follows:

    \nabla_z E(z)^T A^T(AA^T)^{-1}Az
      = c^T A^T(AA^T)^{-1}Az
        + \frac{\rho}{p}\big(\nabla_z\|Az\|_p^p\big)^T A^T(AA^T)^{-1}Az
        + \beta\Big(\nabla_z\sum_{j=1}^{n}(z_j + r_j)^-\Big)^T A^T(AA^T)^{-1}Az
        - \mu\,z^T\tilde{X}^{-2}A^T(AA^T)^{-1}Az
      = T_1 + T_2 + T_3 + T_4.        (28)

From Lemma 3.3,

    \nabla_z\|Az\|_p^p = p\,\|Az\|_p^{p-1}\,\nabla_z\|Az\|_p
                       = p\,A^T\big(|a_1 z|^{p-1}\mathrm{sgn}(a_1 z), |a_2 z|^{p-1}\mathrm{sgn}(a_2 z), ..., |a_m z|^{p-1}\mathrm{sgn}(a_m z)\big)^T.        (29)

Hence, using A A^T(AA^T)^{-1} = I_m and a_i z\,\mathrm{sgn}(a_i z) = |a_i z|,

    T_2 = \frac{\rho}{p}\cdot p\sum_{i=1}^{m}|a_i z|^{p-1}\mathrm{sgn}(a_i z)\,a_i z = \rho\sum_{i=1}^{m}|a_i z|^{p} = \rho\,\|Az\|_p^p,        (30)

and

    |T_1| = |c^T A^T(AA^T)^{-1}Az|
          \le \|c^T A^T(AA^T)^{-1}\|_q\,\|Az\|_p
          \le d_1\,\|c^T A^T(AA^T)^{-1}\|_2\,\|Az\|_p
          \le d_1\,\|c\|_2\,\|A^T(AA^T)^{-1}\|_2\,\|Az\|_p        (31)

(by Hölder's inequality and [12], p. 482), where q is the conjugate exponent of p, that is, 1 < p, q < \infty and 1/p + 1/q = 1. Similarly,

    |T_4| = \mu\,|z^T\tilde{X}^{-2}A^T(AA^T)^{-1}Az|
          \le \mu\,\|z^T\tilde{X}^{-2}A^T(AA^T)^{-1}\|_q\,\|Az\|_p
          \le \mu\,d_1\,\|z^T\tilde{X}^{-2}A^T(AA^T)^{-1}\|_2\,\|Az\|_p
          \le \mu\,d_1\,\|\tilde{X}^{-1}z\|_2\,\|\tilde{X}^{-1}\|_2\,\|A^T(AA^T)^{-1}\|_2\,\|Az\|_p
          \le \mu\,d_1\,d_2\,\|A^T(AA^T)^{-1}\|_2\,\|Az\|_p,

since z \in S_2, 0 < \alpha < 1, and in view of the definition of d_2 following (24). By a similar reasoning,

    |T_3| \le \beta\,d_1\,\big\|\nabla_z\sum_{j=1}^{n}(z_j + r_j)^-\big\|_2\,\|A^T(AA^T)^{-1}\|_2\,\|Az\|_p.
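The Hölder step used to bound T_1 (and, analogously, T_3 and T_4) can be checked numerically. The fragment below draws random data, purely as an illustrative assumption, and verifies that |c^T A^T(AA^T)^{-1}Az| \le \|c^T A^T(AA^T)^{-1}\|_q \|Az\|_p with 1/p + 1/q = 1.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, p = 2, 5, 3.0
    q = p / (p - 1.0)                               # conjugate exponent: 1/p + 1/q = 1
    A = rng.standard_normal((m, n))
    c = rng.standard_normal(n)
    z = rng.standard_normal(n)

    u = np.linalg.solve(A @ A.T, A @ c)             # (A A^T)^{-1} A c, i.e. the row vector c^T A^T (A A^T)^{-1}
    lhs = abs(c @ (A.T @ np.linalg.solve(A @ A.T, A @ z)))
    rhs = np.linalg.norm(u, q) * np.linalg.norm(A @ z, p)
    print(lhs <= rhs + 1e-12)                       # True: Hoelder's inequality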
Let

    S \equiv T_1 + T_2 + T_3 + T_4.        (32)

Then

    S \ge k\,\|Az\|_p        (33)

for all z \notin S_1 with some k > 0, if

    T_1 + T_2 + T_3 + T_4 \ge k\,\|Az\|_p,        (34)

since \nabla_z E(z)^T A^T(AA^T)^{-1}Az = T_1 + T_2 + T_3 + T_4. Consequently (19) is satisfied if (34) is satisfied. Moreover, in view of (30), (31) and the bounds on |T_3| and |T_4|, (34) is satisfied if

    \Big[\rho\,\|Az\|_p^{p-1} - k - d_1\Big(\|c\|_2 + \beta\,\big\|\nabla_z\sum_{j=1}^{n}(z_j + r_j)^-\big\|_2 + \mu\,d_2\Big)\|A^T(AA^T)^{-1}\|_2\Big]\,\|Az\|_p \ \ge\ 0,        (35)

so that

    \rho\,\|Az\|_p^{p-1} \ \ge\ k + d_1\Big(\|c\|_2 + \beta\,\big\|\nabla_z\sum_{j=1}^{n}(z_j + r_j)^-\big\|_2 + \mu\,d_2\Big)\|A^T(AA^T)^{-1}\|_2.        (36)

Condition (36) is satisfied for all z \in S_1^c \cap S_2 \cap int(T) if k, \beta and \mu are sufficiently small and p is sufficiently large, and for any sequence {z^k}, k \ge 1, such that z_j^k + r_j \to 0 as k \to \infty, in view of (P2) of Definition 2.2. Now (36) implies (35), (35) implies (34), and (34) implies (19). Hence the null-space feasibility condition z(t) \in S_1 can be restored, but only in the interior of T, if (36) holds.

To complete the proof we need the following sequence of results, which show that, starting from an interior feasible point, the trajectories of (26), transformed to x-space, converge to the optimal solution of the corresponding linear program (P).

4.2 Proposition
The function E_{p,\rho,\beta,\mu,r}(z) defined in (10) is a nonincreasing function of time along the trajectories of (26).

Proof
Using P = P^T = P^2,

    \frac{d}{dt}E(z) = \nabla_z E(z)^T\dot{z} = -\nabla_z E(z)^T P\,\nabla_z E(z) = -\nabla_z E(z)^T P^T P\,\nabla_z E(z) = -\|P\,\nabla_z E(z)\|_2^2 \le 0,        (37)

as desired.

4.3 Proposition
The time derivative of E(z) vanishes only at an equilibrium of (26).

Proof
An equilibrium of (26) satisfies the relation P\,\nabla_z E(z) = 0. By (37),

    \frac{d}{dt}E(z) = 0        (38)

if and only if \|P\,\nabla_z E(z)\|_2^2 = 0, that is,

    P\,\nabla_z E(z) = 0,        (39)

showing that any solution of (38) satisfies (39) and consequently is an equilibrium point of (26).
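Propositions 4.2 and 4.3 can be probed numerically: along an Euler discretisation of (26) with a sufficiently small step, E(z(t)) should be nonincreasing, since dE/dt = -\|P\nabla_z E(z)\|_2^2. The sketch below does exactly that for user-supplied callables E and grad_E (for instance those of the earlier sketch, with their remaining arguments fixed); the step size, iteration count and tolerance are illustrative assumptions.

    import numpy as np

    def energy_is_nonincreasing(z0, A, E, grad_E, h=1e-4, steps=5000, tol=1e-9):
        """Track E(z) along an Euler discretisation of z' = -P grad E(z); E, grad_E are callables of z."""
        n = A.shape[1]
        P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)   # projector onto N(A), as in Lemma 3.1
        z, values = z0.copy(), []
        for _ in range(steps):
            values.append(E(z))
            z = z - h * (P @ grad_E(z))
        return all(later <= earlier + tol for earlier, later in zip(values, values[1:]))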
4.4 Proposition
An equilibrium of (26) satisfies the Karush-Kuhn-Tucker optimality conditions.

Proof
By the definition of an equilibrium,

    P\,\nabla_z E(z) = 0.        (40)

(40) implies that \nabla_z E(z) lies in the null-space of P. Since AP = 0,

    P^T A^T = P A^T = 0,        (41)

and (40) and (41) show that \nabla_z E(z) lies in the range space of A^T; thus there exists w \in \mathbb{R}^m such that

    \nabla_z E(z) = A^T w.        (42)

We deduce from (42) that

    c + \beta\,\nabla_z\sum_{j=1}^{n}(z_j + r_j)^- - \mu\,\tilde{X}^{-2}z = A^T w.        (43)

Let s = \mu\,\tilde{X}^{-2}z - \beta\,\nabla_z\sum_{j=1}^{n}(z_j + r_j)^-. Then

    A^T w + s = c,   Az = 0,   z + r \ge 0,   s^T(z + r) = 0.        (44)

Condition (44) coincides with the Karush-Kuhn-Tucker optimality condition of [4] if and only if

    \mu\,\tilde{X}^{-2}\bar{z} = 0,        (45)

in which case the positive definiteness of \tilde{X} assures that

    \mu\,\bar{z} = 0.        (46)

Note also that the convexity of the set S_1 \cap S_2 \cap int(T) assures the uniqueness of the solution to the Karush-Kuhn-Tucker optimality conditions and consequently the uniqueness of the equilibrium of (26). From (37) we deduce that \frac{d}{dt}E(z) < 0 for z \neq \bar{z}, where \bar{z} is the unique equilibrium of (26). This establishes that the equilibrium of the system (26) is asymptotically stable. Now let \mu \to 0 to deduce that \bar{z} + r \to x^*, the optimal solution of (P); see [4], Theorem 2.2. The condition \mu\bar{z} = 0 in (46) is a complementary-slackness-like condition: if \bar{z}_j \neq 0 for some j \in {1, 2, ..., n}, then \mu = 0; and if \mu > 0, then \bar{z}_j = 0.

For potential neural-network applications we also use Lyapunov theory to show that the trajectories of (26) converge to the unique equilibrium of (26). To this end we need to construct an appropriate Lyapunov function.

4.5 Definition
A function V : D \subseteq \mathbb{R}^n \to \mathbb{R} is called a Lyapunov function if V is continuously differentiable and positive definite on D. If \dot{V} is negative definite along the trajectories of the dynamical system \frac{d}{dt}x(t) = f(x(t)), where f(0) = 0 and x \in D, then the equilibrium x = 0 is asymptotically stable (see [13], pp. 203-205).

It is not clear that z = 0 is the unique equilibrium of (26); we therefore perform a linear translation of the equilibrium to the origin. Let \bar{z} be the equilibrium solution of (26) and let y = z - \bar{z}. Then

    \dot{y} = \dot{z} = -P\,\nabla_z E(z) = -P\,\nabla_z E(y + \bar{z}),        (47)

since P\,\nabla_z E(\bar{z}) = 0. Clearly y = 0 is an equilibrium of the transformed system and it is unique. Let

    V(y) = \|P\,\nabla_z E(y + \bar{z})\|_2^2.        (48)

Then V(0) = 0 and V(y) > 0 if y \neq 0. Also V(y) is continuous in y; therefore V is a Lyapunov function. Note that

    \frac{d}{dt}V(y(t)) = 2\,\big(P\,\nabla_z E(y + \bar{z})\big)^T\,\frac{d}{dt}\,P\,\nabla_z E(y + \bar{z}),        (49)

which vanishes if and only if y = 0. Therefore \dot{V}, the derivative of V, is negative definite along the solutions (on the trajectories) of (47). We now appeal to Lyapunov theory to assert that the origin is an asymptotically stable solution of (47); consequently the equilibrium \bar{z} of the system (26) is asymptotically stable. Now let \mu \to 0. Then \bar{z} + r \to x^*, the optimal solution of (P); this follows from Theorem 2.2 of [4].
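The Lyapunov construction (47)-(49) translates the equilibrium \bar{z} to the origin and monitors V(y) = \|P\nabla_z E(y + \bar{z})\|_2^2. A minimal sketch of this evaluation follows; obtaining \bar{z} numerically (for example by running the projected flow of the earlier sketch to convergence) is an assumption of the illustration, not part of the paper.

    import numpy as np

    def lyapunov_V(y, z_bar, A, grad_E):
        """V(y) = ||P grad E(y + z_bar)||_2^2 of (48); V(0) = 0 at the equilibrium z_bar."""
        n = A.shape[1]
        P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)
        g = P @ grad_E(y + z_bar)
        return float(g @ g)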
V. CONCLUSION

This work was motivated partly by interior-point concepts and largely by the path-following methods in [4] for solving linear programming problems. Many real-life problems that can be formulated as linear programs are dynamic in nature; examples include the inventory level of an item at a given time and changes in the demand for consumer goods due to price fluctuations and seasonal variation. On-line optimization may also be required in many application areas, such as satellite guidance, robotics, oil output from wells, and refinery operations. Some of these problems may not require exact optimal solutions, but rather particular solutions at specified tolerance levels, within feasible guidelines or constraints. In particular, solutions at positive levels may be desired for all decision variables, implying that interior solutions are desired. These and many other problems of the continuous variety can be modelled more realistically by continuous dynamical systems. Unfortunately, research in this direction has been based mainly on neural-network approaches, none of which is interior-point oriented.

In Section III we formulated an interior-point-based dynamical system for solving linear programming problems in standard form; the key ideas for this formulation came from an examination of [4]. We then stated that, under certain conditions, the solutions of our dynamical system converge to the solution of a corresponding linear programming problem in standard form. We proceeded to cite relevant results in [1] and to establish a sequence of propositions needed to prove that our solutions have the desired convergence property, namely the convergence of the trajectories of the dynamical system to the optimal solution of the linear program under appropriate assumptions. This approach holds a lot of promise for an extension of our results to neural networks, where dynamical systems, energy functions and Lyapunov functions are used extensively.

REFERENCES
[1] Ukwu, C. (2014). "On interior-point methods, related dynamical systems results and cores of targets for linear programming," International Journal of Mathematics and Statistics Invention (IJMSI), Vol. 2, Issue 5, April 2014.
[2] Karmarkar, N. K. (1984). "A new polynomial-time algorithm for linear programming," Combinatorica, Vol. 4, pp. 373-395.
[3] Fang, S. C., and Puthenpura, S. (1993). "Linear optimization and extensions," Prentice Hall, Englewood Cliffs, N.J.
[4] Shen, R. L., and Fang, S. C. (1994). "On the generalized path-following methods for linear programming," Optimization, Vol. 30, pp. 235-249.
[5] Bertsekas, D. P. (1975). "Necessary and sufficient conditions for a penalty method to be exact," Mathematical Programming, Vol. 9, pp. 87-99.
[6] Cohen, M. A., and Grossberg, S. (1983). "Absolute stability of global pattern formation and parallel memory storage by competitive neural networks," IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-13, No. 5, pp. 815-825.
[7] Hopfield, J., and Tank, D. (1986). "Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit," IEEE Transactions on Circuits and Systems, Vol. CAS-33, No. 5, pp. 533-541.
[8] Wang, J. (1991). "A time-varying recurrent neural system for convex programming," Proc. Int. Joint Conf. on Neural Networks, Seattle, Vol. 1, pp. 147-152.
[9] Wang, J. (1992). "Analogue neural network for solving the assignment problem," Electronics Letters, Vol. 28, No. 11, pp. 1047-1050.
[10] Wang, J. (1993). "Analysis and design of a recurrent neural network for linear programming," IEEE Transactions on Circuits and Systems I, Vol. 40, No. 9, pp. 613-618.
[11] Wang, J. (1991). "On the asymptotic properties of recurrent neural networks for optimization," International Journal of Pattern Recognition and Artificial Intelligence, Vol. 5, No. 4, pp. 581-601.
[12] Zak, S. H. (1995). "Solving linear programming problems with neural networks: A comparative study," IEEE Transactions on Neural Networks, Vol. 6, No. 1, pp. 94-105.
[13] Noble, B. (1969). "Applied linear algebra," Prentice-Hall, Englewood Cliffs, N.J.
[14] Stoer, J., and Bulirsch, R. (1991). "Introduction to numerical analysis," 2nd Ed., Springer-Verlag, New York.