2.2. ALGEBRA 101
✞
ans =
1
✌
✆
Again, changing the function to Q2 = 2 − 3s, we have

f(s) = 1/(s³(2 − 3s))

Expanding the relation,

f(s) = 1/(2s³) + 3/(4s²) + 9/(8s) + 27/16 + (81/32)s + . . .

The coefficient b1, i.e. the numeric coefficient of 1/s, is called the residue of the function.
Here, it is 9/8 = 1.125.
✞
--> s=poly(0,'s');
--> P=1;
--> Q1=s^3;
--> Q2=(2-3*s);
--> residu(P,Q1,Q2)
✌
✆
✞
ans =
1.125
✌
✆
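The residue at a pole of order 3 can also be checked numerically: for f(s) = 1/(s³ Q2), the residue at s = 0 is (1/2!) d²/ds² [s³ f(s)] evaluated at 0. A minimal Python sketch of this check (illustrative only, not Scilab's residu):

```python
# Residue of f(s) = 1/(s^3 * (2 - 3s)) at the order-3 pole s = 0:
# res = (1/2!) * d^2/ds^2 [ s^3 f(s) ] at s = 0, where s^3 f(s) = 1/(2 - 3s).

def g(s):
    return 1.0 / (2.0 - 3.0 * s)

h = 1e-4                                   # central-difference step
d2 = (g(h) - 2 * g(0) + g(-h)) / h**2      # second derivative at 0
residue = d2 / 2                           # divide by 2! for an order-3 pole
print(residue)                             # close to 1.125
```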
2.2.17 Roots of Polynomial (roots)
Roots of a polynomial are those values which, when substituted in place of the variable, make the polynomial zero. A polynomial of degree 'n' has n roots. Root values may be real numbers (either integers or fractions) or complex numbers.
Roots of Quadratic Equation
An equation in a variable 'x' having degree '2' is known as a quadratic equation. Roots of the quadratic equation (e.g. ax² + bx + c = 0) can be obtained by using Sridharacharya's formula

x = (−b ± √(b² − 4ac)) / 2a

A Scilab example is given below
✞
--> x=poly(0,'x')
--> roots(x^2-3*x+2)
✌
✆
✞
ans =
2.
1.
✌
✆
All the roots are real integers. If the equation is modified, then the roots become complex.
102 Linear Algebra
✞
--> x=poly(0,'x')
--> roots(x^2-3*x+3)
✌
✆
✞
ans =
1.5 + 0.8660254 i
1.5 - 0.8660254 i
✌
✆
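The same complex pair follows directly from the quadratic formula; a short Python check using the standard library (illustrative, not Scilab):

```python
import cmath

# Quadratic formula x = (-b +/- sqrt(b^2 - 4ac)) / 2a applied to
# x^2 - 3x + 3 = 0; cmath keeps the complex square root.
a, b, c = 1, -3, 3
disc = cmath.sqrt(b * b - 4 * a * c)
r1 = (-b + disc) / (2 * a)
r2 = (-b - disc) / (2 * a)
print(r1, r2)   # 1.5 +/- 0.8660254i
```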
Roots of Polynomials
Roots of higher degree polynomials can be obtained as
✞
--> x=poly(0,'x')
--> roots(x^3 - 6*x^2 + 11*x - 6)
✌
✆
✞
ans =
3.
2.
1.
✌
✆
All the roots are real. The roots function also calculates complex roots.
✞
--> x=poly(0,'x')
--> roots(x^3 - 6*x^2 + 11*x + 1)
✌
✆
✞
ans =
3.04 + 1.50 i
3.04 - 1.50 i
- 0.08
✌
✆
roots returns all the roots of a polynomial function. Roots of a polynomial function are those values which reduce the given polynomial to zero. For example, x² − 2x + 1 = 0 is a polynomial of degree '2' and is known as a quadratic equation. The roots of this polynomial are x = +1 and x = +1 (a double root). When 1 is put in place of 'x', the polynomial value is zero:

p(1) = 1² − 2 × 1 + 1 = 0

Roots of a polynomial are obtained either by the fraction method or by the substitution and reduction method.
✞
--> p=poly([1,2,3],'s')
--> roots(p)
✌
✆
✞
ans =
3.
2.
1.
✌
✆
2.2.18 Simplification (simp)
simp simplifies a rational function and returns the numerator and denominator as result. The function takes two arguments: the first is the numerator and the second is the denominator of the rational function. For example, assume a polynomial rational fraction
f =
(s + 1) ∗ (s + 2)
(s + 1) ∗ (s − 2)
On simplification of it, the result is
f =
s + 2
s − 2
Scilab code for this function is given below:
✞
--> s=poly(0,'s');
--> [n,d]=simp((s+1)*(s+2),(s+1)*(s-2))
✌
✆
✞
d =
- 2 + s
n =
2 + s
✌
✆
2.2.19 Flip Matrix Dimension (flipdim)
The flipdim function flips a matrix with respect to the dimension specified in the function. It is similar to interchanging the rows or columns of a matrix while solving a problem. Consider a matrix A as given below:

A =
1 2 3 4
5 6 7 8

See the example as given below:
✞
--> x=[1 2 3 4; 5 6 7 8]
--> dim=1;
--> y=flipdim(x,dim)
✌
✆
✞
x =
1. 2. 3. 4.
5. 6. 7. 8.
y =
5. 6. 7. 8.
1. 2. 3. 4.
✌
✆
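The dim = 1 case is simply a reversal of the row order; a pure-Python sketch of that behaviour (illustrative, not Scilab's flipdim):

```python
# flipdim(x, 1) reverses the row order; flipdim(x, 2) would instead
# reverse each row.  Sketch of the dim = 1 case.
x = [[1, 2, 3, 4],
     [5, 6, 7, 8]]
y = x[::-1]          # flip along dimension 1 (rows)
print(y)             # [[5, 6, 7, 8], [1, 2, 3, 4]]
```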
2.2.20 Permutation (permute)
In mathematics, the notion of permutation relates to the act of arranging all the mem-
bers of a set into some sequence or order, or if the set is already ordered, rearranging
(reordering) its elements, a process called permuting. permute permutes matrix elements
or array elements into new dimensions of an array.
✞
--> x=[1 2 3;4 5 6];
--> y=permute(x,[2 1])
✌
✆
✞
y =
1. 4.
2. 5.
3. 6.
✌
✆
2.2.21 Matrix Replication (repmat)
repmat replicates the given matrix to make a new matrix. This function has three arguments. The first is the matrix to replicate, which may be a list, array or vector; the second and third are the numbers of times it is repeated along the rows and columns respectively.
✞
--> repmat (1:3 ,2 ,2)
✌
✆
✞
ans =
1. 2. 3. 1. 2. 3.
1. 2. 3. 1. 2. 3.
✌
✆
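The tiling rule can be sketched in a few lines of Python (illustrative, not Scilab's repmat):

```python
# repmat(1:3, 2, 2): tile the row vector [1, 2, 3] in a 2 x 2 block grid.
v = [1, 2, 3]
rows, cols = 2, 2
result = [v * cols for _ in range(rows)]
print(result)   # [[1, 2, 3, 1, 2, 3], [1, 2, 3, 1, 2, 3]]
```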
2.2.22 Cumulative Product (cumprod)
It returns the cumulative product of the elements of a matrix. It takes two arguments: first, a matrix reference and second, the cumulative product orientation. The orientation argument has three values.

'*' For the direction of cumulative product along-the-column-then-row, by default.
'1' For the direction of cumulative product along-the-column.
'2' For the direction of cumulative product along-the-row.

Assume the matrix given below. If the orientation argument of the cumprod function is omitted, the cumulative product is performed along-the-column-then-row.

A =
1 2
3 4

For a given matrix of size m × n, the cumulative product for an element aij is given by
✞
/* Cumulative product, column-then-row order, for element a[i][j]
   of an m x n matrix */
r = 1;
for (l = 0; l < n; l++) { /* columns */
    if (l < j) {
        /* all row elements of previous columns */
        for (k = 0; k < m; k++) {
            r *= a[k][l];
        }
    } else if (l == j) {
        /* row elements of the current column at and before this row */
        for (k = 0; k <= i; k++) {
            r *= a[k][l];
        }
    }
}
✌
✆
The example for cumulative product is given below:
✞
--> A=[1,2;3,4];
--> a=cumprod(A)
✌
✆
✞
a =
1. 6.
3. 24.
✌
✆
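Without an orientation argument, Scilab walks the matrix in column-major order (down the first column, then the next). A pure-Python sketch of that rule:

```python
# Column-major cumulative product of a small matrix.
A = [[1, 2],
     [3, 4]]
m, n = len(A), len(A[0])
flat = [A[i][j] for j in range(n) for i in range(m)]   # column-major: 1,3,2,4
r, out = 1, []
for v in flat:
    r *= v
    out.append(r)                                      # 1, 3, 6, 24
a = [[out[j * m + i] for j in range(n)] for i in range(m)]
print(a)   # [[1, 6], [3, 24]]
```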
The same matrix is supplied to the cumprod function along with the second argument '1'; then the cumulative product is performed along-the-column.

A =
1 2
3 4

The example is given below:
✞
--> A=[1,2;3,4];
--> b=cumprod(A,1)
✌
✆
✞
b =
1. 2.
3. 8.
✌
✆
The second argument of cumprod is the orientation of the cumulative product: '1' for the column-wise direction and '2' for the row-wise direction. Similarly, if the same matrix is supplied to the cumprod function along with the second argument '2', the cumulative product is performed along-the-rows.

2.2.23 Cumulative Summation (cumsum)
It returns the cumulative sum of the elements of a matrix. It takes two arguments: first, a matrix reference and second, the cumulative summation orientation. The orientation argument has three values.

'*' For the direction of cumulative sum along-the-column-then-row, by default.
'1' For the direction of cumulative sum along-the-column.
'2' For the direction of cumulative sum along-the-row.

Assume the matrix given below. If the orientation argument of the cumsum function is omitted, the cumulative sum is performed along-the-column-then-row.

A =
1 2
3 4

For a given matrix of size m × n, the cumulative sum for an element aij is given by
✞
/* Cumulative sum, column-then-row order, for element a[i][j]
   of an m x n matrix */
r = 0;
for (l = 0; l < n; l++) { /* columns */
    if (l < j) {
        /* all row elements of previous columns */
        for (k = 0; k < m; k++) {
            r += a[k][l];
        }
    } else if (l == j) {
        /* row elements of the current column at and before this row */
        for (k = 0; k <= i; k++) {
            r += a[k][l];
        }
    }
}
✌
✆
The example for cumulative sum is given below:
✞
--> A=[1,2;3,4];
--> a=cumsum(A)
✌
✆
✞
a =
1. 6.
4. 10.
✌
✆
The same matrix is supplied to the cumsum function along with the second argument '1'; then the cumulative sum is performed along-the-column.

A =
1 2
3 4

The example is given below:
✞
--> A=[1,2;3,4];
--> b=cumsum(A,1)
✌
✆
✞
b =
1. 2.
4. 6.
✌
✆
The second argument of cumsum is the orientation of the cumulative sum: '1' for the column-wise direction and '2' for the row-wise direction. Similarly, if the same matrix is supplied to the cumsum function along with the second argument '2', the cumulative sum is performed along-the-rows.

2.2.25 Product (prod)
prod returns the product of the elements of a matrix. The second argument of this function is the orientation of the product. If the orientation argument is not provided, prod returns the product of all elements. If the orientation argument is '1', the product is performed along the column, and if it is '2', the product is performed along the row. The product of the matrix

A =
1 2
3 4

with and without the orientation argument is shown in the Scilab example below:
✞
--> A=[1,2;3,4];
--> a=prod(A)
--> b=prod(A,1)
--> c=prod(A,2)
✌
✆
✞
a =
24.
b =
3. 8.
c =
2.
12.
✌
✆
2.2.26 Summation (sum)
sum returns the sum of the elements of a given matrix. The second argument of this function is the orientation of the summation. If the orientation argument is '1', the summation is performed along the column, and if it is '2', the summation is performed along the row. The summation of the matrix

A =
1 2
3 4

with and without the orientation argument is shown in the Scilab example below:
✞
--> A=[1,2;3,4];
--> a=sum(A)
--> b=sum(A,1)
--> c=sum(A,2)
✌
✆
✞
a =
10.
b =
4. 6.
c =
3.
7.
✌
✆
2.3 Matrices
A matrix is an arrangement of the coefficients of a set of algebraic equations in rows and columns. Each row represents a distinct algebraic equation, while each column holds the coefficients of the same variable in each equation. For example, take a set of algebraic equations

ax + by = p; cx + dy = q

The coefficients of this set of algebraic equations can be arranged in matrix form as

a b   x   p
c d × y = q

The word "matrix" refers only to the arrangement of the coefficients of the variables x and y. So, the matrix is

a b
c d

The first row has the coefficients of equation ax + by = p and the second row has the coefficients of equation cx + dy = q. The first column has the coefficients of variable x and the second column has the coefficients of variable y. The number of rows is equal to the number of algebraic equations in the given set and the number of columns is equal to the number of distinct variables. Numerical computational software has deep-rooted application of matrices and the matrix structure. Thus we must be familiar with matrices, their operations and the reading/writing of matrix elements. Before explaining the matrix operations, we will discuss the reading/writing of matrix elements. For this purpose we take the experimental matrix
A =
1 2 3
4 5 6
7 8 9
✞
--> A=[1 2 3; 4 5 6; 7 8 9]
✌
✆
✞
A =
1. 2. 3.
4. 5. 6.
7. 8. 9.
✌
✆
Elements of a matrix can be accessed by supplying indices in two ways: (i) with only one argument, i.e. element-wise, and (ii) with two arguments, one each for the row and column indices, separated by a comma. The first argument is for rows and the second argument is for columns. In Scilab, indices start from '1', not from '0'. For example, if only one argument is supplied as
✞
--> A(1:4)
✌
✆
Then it will return elements column-wise, i.e. from top to bottom in the first column, then the second column, and so on.
✞
ans =
1.
4.
7.
2.
✌
✆
If two arguments are supplied as
✞
--> A(1:1,2:2)
✌
✆
Then it will return elements from row one to row one and from column two to column
two. Thus it will only return element ‘2’.
✞
ans =
2.
✌
✆
If column range is changed from column two to column three, then output will be
changed.
✞
--> A(1:1,2:3) // First row and 2nd and 3rd columns
✌
✆
✞
ans =
2. 3.
✌
✆
For row or column access, respective row or column index is provided and other argument
is supplied as range operator ‘:’.
✞
--> A(1,:) // First row and all columns
✌
✆
✞
ans =
1. 2. 3.
✌
✆
✞
--> A(:,2) // All rows and second column
✌
✆
✞
ans =
2.
5.
8.
✌
✆
There are several types of matrices. The most common type is the square matrix, in which the numbers of rows and columns are equal.
✞
--> x=1:1:9;
--> matrix(x,3,3)
✌
✆
✞
ans =
1. 4. 7.
2. 5. 8.
3. 6. 9.
✌
✆
Two or more matrices can be added, subtracted, multiplied and divided if they follow the respective rules. If A, B and C are three matrices of order n × n, then the sum of all the matrices is

Sij = Aij + Bij + Cij
✞
--> A=[1,2,3;4,5,6;7,8,9]
--> B=[5,6,7;8,9,10;11,12,13]
--> C=[10,11,12;13,14,15;16,17,18]
--> S=A+B+C
✌
✆
✞
A =
1. 2. 3.
4. 5. 6.
7. 8. 9.
B =
5. 6. 7.
8. 9. 10.
11. 12. 13.
C =
10. 11. 12.
13. 14. 15.
16. 17. 18.
S =
16. 19. 22.
25. 28. 31.
34. 37. 40.
✌
✆
Similarly, the subtraction of two matrices can be obtained.
Dij = Aij − Bij
✞
--> A=[1,2,3;4,5,6;7,8,9]
--> B=[5,6,7;8,9,10;11,12,13]
--> D=A-B
✌
✆
✞
A =
1. 2. 3.
4. 5. 6.
7. 8. 9.
B =
5. 6. 7.
8. 9. 10.
11. 12. 13.
D =
- 4. - 4. - 4.
- 4. - 4. - 4.
- 4. - 4. - 4.
✌
✆
Sum and difference of matrices are element-wise operations, hence matrices undergoing addition and subtraction must be of the same order. The product of matrices is of two types: (i) dot (scalar) product and (ii) cross (vector) product. The dot product or scalar product is an element-wise operation. It gives the resultant matrix by multiplying element by element. Being element-wise operations, matrix addition and the dot product are commutative. Subtraction of matrices is not commutative, as 2 − 3 ≠ 3 − 2. The dot product symbol is the group of a dot and an asterisk (.*).

Pij = Aij .* Bij
Example of dot product is given below:
✞
--> A=[1,2,3;4,5,6;7,8,9]
--> B=[5,6,7;8,9,10;11,12,13]
--> P=A.*B
✌
✆
✞
A =
1. 2. 3.
4. 5. 6.
7. 8. 9.
B =
5. 6. 7.
8. 9. 10.
11. 12. 13.
P =
5. 12. 21.
32. 45. 60.
77. 96. 117.
✌
✆
The cross product is a special matrix product operation. For the cross product, the number of columns of the left matrix must be equal to the number of rows of the right matrix. Let

A = [1, 2]; B =
3
4

The product A × B is possible. Using the cross product rules, the result is

C = [1, 2] ×
3
4
= [11]

✞
--> A=[1,2]
--> B=[3;4]
--> C=A*B
✌
✆
✞
A =
1. 2.
B =
3.
4.
C =
11.
✌
✆
If this rule is not followed, then Scilab shows an error of inconsistent multiplication. Suppose we have two matrices A and B as

A = [1, 2]; B = [3, 4]

✞
--> A=[1,2]
--> B=[3,4]
--> C=A*B
✌
✆
✞
A =
1. 2.
B =
3. 4.
C=A*B
!--error 10
Inconsistent multiplication.
✌
✆
The cross product may or may not be commutative; for most matrices A × B ≠ B × A. Let A and B be two matrices of order n × n. Then the division of A by B is

A/B = (A × B⁻¹)/(B × B⁻¹) = (A × B⁻¹)/I = A × B⁻¹

Here, the cross product of a matrix and its inverse is the identity matrix. Note that the dot product of a matrix and its inverse is not equal to the identity matrix. See the following Scilab codes for the division of matrices.
✞
--> A=[1,2;3,4] // a 2x2 matrix
--> B=[5,6;7,8] // a 2x2 matrix
--> r=A/B // division of matrix A by matrix B
--> s=inv(B) // inverse of matrix B
--> t=A*s // cross product of A and inverse of B
✌
✆
✞
A =
1. 2.
3. 4.
B =
5. 6.
7. 8.
r =
3. - 2.
2. - 1.
s =
- 4. 3.
3.5 - 2.5
t =
3. - 2.
2. - 1.
✌
✆
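The identity A/B = A × B⁻¹ is easy to verify by hand for the 2 × 2 case, using the closed-form inverse; a minimal Python sketch (illustrative, not Scilab's operators):

```python
# A/B equals A * inv(B); here inv(B) = adj(B)/det(B) for a 2x2 matrix.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

det = B[0][0] * B[1][1] - B[0][1] * B[1][0]            # -2
Binv = [[ B[1][1] / det, -B[0][1] / det],
        [-B[1][0] / det,  B[0][0] / det]]              # [[-4, 3], [3.5, -2.5]]

r = [[sum(A[i][k] * Binv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
print(r)   # [[3.0, -2.0], [2.0, -1.0]]
```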
2.3.1 Determinant
The determinant det(A) or |A| of a square matrix A is a number encoding certain prop-
erties of the matrix. A matrix is invertible if and only if its determinant is nonzero.
The determinant of the matrix

A =
a b c
d e f
g h i

is given by

|A| = a(e × i − h × f) − b(d × i − g × f) + c(d × h − g × e)
It is a numerical value and crucial for the solution of the given algebraic equations or
matrix. Scilab example is
✞
--> det([2,3,5;8,1,5;9,7,2])
✌
✆
✞
ans =
256.
✌
✆
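The cofactor-expansion formula translates directly into code; a minimal Python sketch of the same rule (illustrative, not Scilab's det):

```python
# Cofactor expansion along the first row for a 3 x 3 determinant.
def det3(M):
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - h * f) - b * (d * i - g * f) + c * (d * h - g * e)

print(det3([[2, 3, 5], [8, 1, 5], [9, 7, 2]]))   # 256
```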
2.3.2 Transpose Matrix
The transpose of a matrix is obtained by changing its rows into columns and vice versa.
✞
--> x=1:1:9;
--> y=matrix(x,3,3),y'
✌
✆
✞
y =
1. 4. 7.
2. 5. 8.
3. 6. 9.
ans =
1. 2. 3.
4. 5. 6.
7. 8. 9.
✌
✆
2.3.3 Diagonal Matrix
A matrix whose only non-zero elements lie on the diagonal, all other elements being zero, is called a diagonal matrix. Mathematically

A = aij when i = j; 0 when i ≠ j

In Scilab a diagonal matrix is
✞
--> diag([1,2,3]) // elements in [] are diagonal elements.
✌
✆
✞
ans =
1. 0. 0.
0. 2. 0.
0. 0. 3.
✌
✆
2.3.4 Identity Matrix
An identity matrix has only unity diagonal elements; the other elements are zeros. Mathematically

I = aij = 1 when i = j; 0 when i ≠ j

In Scilab an identity matrix is
✞
--> eye(2,3) // eye(rows, columns)
✌
✆
✞
ans =
1. 0. 0.
0. 1. 0.
✌
✆
The product of a matrix and identity matrix is that matrix, i.e.
A × I = I × A = A
2.3.5 Inverse of Matrix
From the definition of an inverse, if x is a number and y is its inverse then x × y = 1. Similarly, if P is an invertible square matrix and P⁻¹ is its inverse then

PP⁻¹ = I

where I is the identity matrix having the same order as the matrix P. A matrix should be square to be invertible, but not all square matrices are invertible. A square matrix which has an inverse is called an invertible or non-singular matrix. A square matrix is invertible when its determinant is not zero, i.e. det(A) ≠ 0. The inverse of a square matrix is given by

A⁻¹ = Adj(A)/Det(A)
Adj(A) of the matrix A is the transpose of the co-factor matrix of A. The co-factor of an m × n matrix in respect of the ith row and jth column is given by Aij, which is equal to the product of (−1)^(i+j) and the determinant of the matrix remaining after eliminating the ith row and jth column. Let a matrix be given like

A =
1 2
3 4

The co-factors of the matrix are

a11 = (−1)^(1+1) × 4    a12 = (−1)^(1+2) × 3
a21 = (−1)^(2+1) × 2    a22 = (−1)^(2+2) × 1

Now the co-factor matrix of matrix A is

Acf =
4 −3
−2 1

Now Adj(A) of matrix A is

Adj(A) =
4 −2
−3 1

Now the determinant of the matrix A is

Det(A) = −2

Finally, the inverse of matrix A is

A⁻¹ = Adj(A)/Det(A) =
−2 1
1.5 −0.5
The inverse of a 3 × 3 square matrix

P =
1 3 5
4 8 9
2 1 6

is found by using Scilab as
✞
--> P=[1,3,5;4,8,9;2,1,6]
--> inv(P)
✌
✆
✞
ans =
- 1. 0.3333333 0.3333333
0.1538462 0.1025641 - 0.2820513
0.3076923 - 0.1282051 0.1025641
✌
✆
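The adjugate recipe can be sketched for the 2 × 2 case in a few lines of Python (illustrative, not Scilab's inv):

```python
# inv(A) = adj(A)/det(A) for a 2 x 2 matrix, following the cofactor recipe.
A = [[1, 2], [3, 4]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # -2
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]
print(Ainv)   # [[-2.0, 1.0], [1.5, -0.5]]
```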
2.3.6 Normalization of Matrix
A matrix T is said to be normalized if T² = I, where I is the identity matrix. Take a matrix A as

A =
3 2
−2 −3

Let k be the factor that must multiply matrix A such that T = kA and kA × kA = I. Suppose A² can be written as A² = KI for a scalar K. The condition then becomes k²A² = I, which gives

k²KI = I

or

k = 1/√K

Now the normalized matrix is T = kA, which gives T² = I. For the matrix above, A² = 5I, so K = 5 and k = 1/√5. See the Scilab example as given below:
✞
--> A=[3,2;-2,-3]
--> k=1/5^0.5
--> N=k*A
--> N*N
✌
✆
✞
A =
3. 2.
- 2. - 3.
k =
0.4472136
N =
1.3416408 0.8944272
- 0.8944272 - 1.3416408
ans =
1. 0.
0. 1.
✌
✆
A matrix may or may not be normalized.
2.3.7 Normalization Factor (norm)
norm returns the norm of a matrix, which can serve as the normalization factor. A matrix T is said to be normalized if T² = I, where I is the identity matrix. Take a matrix A as

A =
3 2
−2 −3

Let k be the factor that must multiply matrix A such that T = kA and kA × kA = I. With A² = KI, the condition k²A² = I gives

k²KI = I

The norm function returns the value of K associated with this matrix, i.e.
✞
--> K=norm(A)
✌
✆
Proceeding with the successive steps for finding the normalized matrix,

k = 1/√K

Now the normalized matrix is T = kA, which gives T² = I. See the Scilab example as given below:
✞
--> A=[3,2;-2,-3];
--> k=1/norm(A)^0.5
--> N=k*A
--> N*N
✌
✆
✞
A =
3. 2.
- 2. - 3.
k =
0.4472136
N =
1.3416408 0.8944272
- 0.8944272 - 1.3416408
ans =
1. 0.
0. 1.
✌
✆
There are the following types of norms of a matrix.

norm(x,2): the largest singular value of x. It is computed as max(svd(x)).
norm(x,1): the l1 norm of x, i.e. the largest column sum. Its value is max(sum(abs(x),'r')).
norm(x,'inf'): the infinity norm of x, i.e. the largest row sum. It is computed as max(sum(abs(x),'c')).
norm(x,'fro'): the Frobenius norm, computed as sqrt(sum(diag(x'*x))).
norm(v,p): the lp norm, computed from the relation (sum(v(i)^p))^(1/p).
2.3.8 Permutation Transposition (pertrans)
pertrans performs permutation and transposition of the elements of a matrix simultaneously. Take a matrix

A =
1 2 3
3 4 4
7 6 8
0 9 8

Under permutation and transposition, the rows are transformed into columns and the columns are then arranged from right to left. For example, the permutation and transposition of matrix A is B (say)

B =
8 8 4 3
9 6 4 2
0 7 3 1

✞
--> A=[1,2,3;3,4,4;7,6,8;0,9,8];
--> pertrans(A)
✌
✆
✞
ans =
8. 8. 4. 3.
9. 6. 4. 2.
0. 7. 3. 1.
✌
✆
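The rule suggested by this example (transpose, then reverse both index directions) can be sketched in Python; the index formula here is inferred from the example above, not taken from Scilab documentation:

```python
# Permutation-transposition sketch: B[i][j] = A[m-1-j][n-1-i]
# for an m x n input A (0-based indices).
A = [[1, 2, 3],
     [3, 4, 4],
     [7, 6, 8],
     [0, 9, 8]]
m, n = len(A), len(A[0])
B = [[A[m - 1 - j][n - 1 - i] for j in range(m)] for i in range(n)]
print(B)   # [[8, 8, 4, 3], [9, 6, 4, 2], [0, 7, 3, 1]]
```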
2.3.9 Orthogonal Matrix
An orthogonal matrix is a square matrix with real entries whose columns and rows are
orthogonal unit vectors. Equivalently, a matrix A is orthogonal if its transpose is equal
to its inverse:
Aᵀ = A⁻¹
We can find the orthogonal matrix of a given matrix by decomposing it as explained in
function svd. It is equal to the Matrix U of the svd decomposition.
✞
--> orth([1,2;3,4])
✌
✆
✞
ans =
- 0.4045536 - 0.9145143
- 0.9145143 0.4045536
✌
✆
2.3.10 Complex Matrix
A complex matrix is a matrix in which at least one element is a complex number. Complex numbers are written in the form z = a + ib. A real number is also a complex number with zero imaginary part, i.e. z = 2 + i0. The function complex accepts two matrices of equal shape and size as its arguments. The first matrix represents the real parts of the complex numbers, while the second matrix represents the imaginary parts.
✞
--> // First part is real, second part is imaginary
--> complex([1 2;3 4],[3 4;5 8])
✌
✆
✞
ans =
1. + 3.i 2. + 4.i
3. + 5.i 4. + 8.i
✌
✆
2.3.11 Matrix Product
The vector product of two matrices (different from the element-wise product) is performed by a special mathematical operation. The matrix product is possible if the number of columns in the left matrix is equal to the number of rows in the right matrix. Suppose we have two matrices of equal size 3 × 3 as in the example below. The vector product of the matrices is given by C = A × B, where

Cij = Ai1 B1j + Ai2 B2j + . . . + Ain Bnj

Here the elements of each row of the left matrix are multiplied by the corresponding elements of each column of the right matrix, and the respective products are added together. The vector product of matrices A and B is
✞
--> a=[1,3,5;4,8,9;2,1,6];
--> b=[1,3,5;4,8,9;2,1,6];
--> a*b
✌
✆
✞
ans =
23. 32. 62.
54. 85. 146.
18. 20. 55.
✌
✆
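The row-by-column rule can be written out directly; a minimal Python sketch (illustrative, not Scilab's * operator):

```python
# Row-by-column matrix product: C[i][j] = sum_k A[i][k] * B[k][j].
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

a = [[1, 3, 5], [4, 8, 9], [2, 1, 6]]
print(matmul(a, a))   # [[23, 32, 62], [54, 85, 146], [18, 20, 55]]
```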
The scalar matrix product (element-wise matrix product) is performed by multiplying the corresponding elements of the two matrices. The scalar product is possible only if both matrices are of equal size. The scalar matrix product of two matrices is given by C = A · B, where

Cij = Aij · Bij

Assume the same two matrices a and b as above:
✞
--> a=[1,3,5;4,8,9;2,1,6];
--> b=[1,3,5;4,8,9;2,1,6];
--> a.*b
✌
✆
✞
ans =
1. 9. 25.
16. 64. 81.
4. 1. 36.
✌
✆
2.3.12 Eigenvalues of Matrix
An eigenvector of a square matrix A is a non-zero vector v that, when the matrix is
multiplied by v, yields a constant multiple of v, the multiplier being commonly denoted
by λ. That is:
Av = λv
The number λ is called the eigenvalue of A corresponding to v. To find the eigenvalues, we solve the above relation as

Av − λv = 0 ⇒ (A − λI)v = 0

Since v cannot be zero, a nontrivial solution exists only when the determinant of A − λI is zero:

|A − λI| = 0

Expand the determinant and find all the values of λ. The values of λ are the eigenvalues of the given matrix. Assume a matrix A as

A =
1 2
3 4

For its eigenvalues, we have

1 − λ 2
3 4 − λ
= 0

It gives

λ² − 5λ − 2 = 0

On solving it, we have λ = −0.37 and λ = 5.37. The Scilab codes are
✞
--> spec([1,2;3,4]) // eigenvalues
✌
✆
✞
ans =
- 0.3722813
5.3722813
✌
✆
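For the 2 × 2 case the characteristic polynomial λ² − (trace)λ + det = 0 can be solved directly; a short Python check (illustrative, not Scilab's spec):

```python
import math

# |A - lambda*I| = 0 for A = [[1, 2], [3, 4]] gives
# lambda^2 - 5*lambda - 2 = 0; solve with the quadratic formula.
tr = 1 + 4                            # trace = 5
det = 1 * 4 - 2 * 3                   # det = -2
disc = math.sqrt(tr * tr - 4 * det)   # sqrt(33)
lam1 = (tr - disc) / 2                # -0.3722813...
lam2 = (tr + disc) / 2                #  5.3722813...
print(lam1, lam2)
```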
2.3.13 Triangular Lower Matrix (tril)
tril returns the lower triangular matrix. If a matrix is given like
A =
a f g
f b h
g h c
Now the lower triangular matrix of matrix A is
A =
a 0 0
f b 0
g h c
✞
--> s=poly(0,'s');
--> tril([s,s;s,1])
✌
✆
✞
ans =
s 0
s 1
✌
✆
2.3.14 Triangular Upper Matrix (triu)
triu returns the upper triangular matrix. If a matrix is given like
A =
a f g
f b h
g h c
Now the upper triangular matrix of matrix A is
A =
a f g
0 b h
0 0 c
✞
--> s=poly(0,'s');
--> triu([s,s;s,1])
✌
✆
✞
ans =
s s
0 1
✌
✆
2.3.15 Lower Upper Matrix (lu)
This function factorizes a given matrix into lower and upper triangular matrices. It is used as
✞
--> [L,U,E]=lu(matrix_name);
✌
✆
where the input matrix is an m × n real or complex matrix. L is a lower triangular real or complex matrix of size m × min(m, n) and U is an upper triangular real or complex matrix of size min(m, n) × n. E is an m × m permutation matrix such that L*U = E*A.
✞
--> A=[1,3;2,1]
--> [L,U,E]=lu(A)
✌
✆
✞
E =
0. 1.
1. 0.
U =
2. 1.
0. 2.5
L =
1. 0.
0.5 1
✌
✆
2.3.16 Diagonal Matrix (diag)
For a matrix

A =
a b c
d e f
g h i

the diagonal elements are given as d = [a, e, i]. This function returns the diagonal of a matrix, or builds a diagonal matrix. Diagonal elements are taken along the main diagonal of the matrix. Its syntax is
✞
--> diag(matrix_name)
✌
✆
Take a matrix A of size 3 × 3

A =
1 2 3
4 1 5
8 8 9
If the matrix supplied to this function is of size m × 1, then it is converted into a matrix of size m × m with all the input elements on the diagonal of the output matrix. Elements other than the diagonal are set to zero. A Scilab example is
✞
--> A=[1, 2, 3; 4, 1, 5; 8, 8, 9]
--> d=diag(A)
✌
✆
✞
A =
1. 2. 3.
4. 1. 5.
8. 8. 9.
d =
1.
1.
9.
✌
✆
The diagonal matrix is constructed from the diagonals of the given matrix ‘A’ by using
the diag() function.
✞
--> D=diag(d)
✌
✆
✞
D =
1. 0. 0.
0. 1. 0.
0. 0. 9.
✌
✆
If a matrix is not a square matrix, then rows and columns from the top-left position are chosen, which ultimately form a perfect square matrix; the elements are then taken along the main diagonal of that square.
2.3.17 Jordan Canonical Form (bdiag)
It is also called diagonalization of a matrix. A matrix is said to be diagonalizable if it can be transformed into the Jordan canonical (normal) form as given below:

A = P · D · P⁻¹

where D is a diagonal matrix. A matrix function can be applied to a diagonalized matrix as

f(A) = P · f(D) · P⁻¹

Here, the matrix function operates only on the diagonal matrix D, not on P or P⁻¹. For example, suppose a matrix A as

A =
1 3
2 1

It can be transformed into P · D · P⁻¹ form as

A = P ×
3.449 0
0 −1.449
× P⁻¹

where

P =
0.774 −0.790
0.632 0.645

Now a matrix function operation, for example the inverse of matrix A, is given as

A⁻¹ = P · D⁻¹ · P⁻¹

Solving the above relation by substituting the values of P and D, we get A⁻¹, and it is

A⁻¹ =
−0.2 0.6
0.4 −0.2
✞
--> A=[1,3;2,1];
--> [D,P,d]=bdiag(A)
✌
✆
✞
d =
1.
1.
P =
0.7745967 - 0.7905694
0.6324555 0.6454972
D =
3.4494897 0.
0. - 1.4494897
✌
✆
Here, d gives the sizes of the diagonal blocks. On computation of P · D · P⁻¹, we get back the matrix A.
✞
--> P*D*inv(P)
✌
✆
✞
ans =
1. 3.
2. 1.
✌
✆
Applying the inverse matrix operation, we get the same result from P · D⁻¹ · P⁻¹ and A⁻¹.
✞
--> I=P*inv(D)*inv(P)
--> i=inv(A)
✌
✆
✞
I =
- 0.2 0.6
0.4 - 0.2
i =
- 0.2 0.6
0.4 - 0.2
✌
✆
2.3.18 Cholesky Factorization (chol)
The Cholesky decomposition of a Hermitian positive-definite matrix A is a decomposition of the form

A = L L′

where L is a lower triangular matrix with real and positive diagonal entries, and L′ denotes the conjugate transpose of L. A positive-definite matrix has a positive determinant.
The Scilab function chol is used for the Cholesky factorization of a matrix. Suppose we have the matrix

A =
3 2
3 4

Its determinant is |A| = 12 − 6 = 6 > 0, so it is a positive-definite matrix. The A = L L′ decomposition has

L =
1.7320508 0.
1.1547005 1.6329932

Note that Scilab's chol returns the upper triangular factor R = L′, with A = R′R.
✞
--> A=[3,2;3,4]
--> chol(A)
✌
✆
✞
A =
3. 2.
3. 4.
ans =
1.7320508 1.1547005
0. 1.6329932
✌
✆
Every Hermitian positive-definite matrix has a unique Cholesky decomposition. Suppose we have a Hermitian matrix

A =
4      2 + 2i
2 − 2i 3

Its determinant |A| = 4 > 0, hence it has a Cholesky decomposition. The Scilab codes for this matrix are given below:
✞
--> A=[4,2+2*%i;2-2*%i,3]
--> R=chol(A)
✌
✆
✞
A =
4. 2. + 2.i
2. - 2.i 3.
R =
2. 1. + i
0. 1.
✌
✆
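For a 2 × 2 symmetric positive-definite matrix the factor can be computed by hand with the Cholesky-Crout formulas. The sketch below (illustrative Python, not Scilab's chol) assumes the symmetric counterpart [3 2; 2 4] of the example, taking the off-diagonal entry from the upper triangle:

```python
import math

# Cholesky-Crout for a 2 x 2 symmetric positive-definite matrix S:
# S = L * L' with L lower triangular.
S = [[3.0, 2.0], [2.0, 4.0]]
l11 = math.sqrt(S[0][0])              # 1.7320508...
l21 = S[1][0] / l11                   # 1.1547005...
l22 = math.sqrt(S[1][1] - l21 ** 2)   # 1.6329932...
L = [[l11, 0.0], [l21, l22]]
print(L)
```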
2.3.19 Determinant of Matrix (det)
det returns the determinant of a square matrix. If a matrix is given like

A =
1 2 3
4 5 6
7 8 9

then the determinant of this matrix A is

|A| = 1(45 − 48) − 2(36 − 42) + 3(32 − 35) = 0

Hence the determinant of the matrix A is zero. Scilab returns a value of the order of 10⁻¹⁶, which is zero up to floating-point rounding. The Scilab call for the determinant of the matrix is
✞
--> det([1,2,3;4,5,6;7,8,9])
✌
✆
✞
ans =
6.661D-16
✌
✆
2.3.20 Inverse Matrix (inv)
inv returns the inverse of a square matrix. The inverse of a matrix can be obtained only if the matrix is invertible; not all square matrices are invertible. A square matrix which has an inverse is called an invertible or non-singular matrix. A square matrix is invertible when its determinant is not zero, i.e. det(A) ≠ 0. If A is a matrix then

A × A⁻¹ = I

i.e. the vector product of a matrix and its inverse is the identity matrix. The dot product of a matrix and its inverse is not the identity matrix.
✞
--> A=[1,2;3,4] // Matrix of 2x2 order
--> B=A*inv(A) // Cross product of matrix and its inverse
--> C=A.*inv(A) // Dot product of matrix and its inverse
✌
✆
✞
A =
1. 2.
3. 4.
B =
1. 0.
8.882D-16 1.
C =
- 2. 2.
4.5 - 2.
✌
✆
The inverse of a square matrix is given by

A⁻¹ = Adj(A)/Det(A)

Adj(A) of the matrix A is the transpose of the co-factor matrix of A. The co-factor of an m × n matrix in respect of the ith row and jth column is given by Aij, which is equal to the product of (−1)^(i+j) and the determinant of the matrix remaining after eliminating the ith row and jth column. Let a matrix be given like

A =
1 2
3 4

The co-factors of the matrix are

a11 = (−1)^(1+1) × 4;  a12 = (−1)^(1+2) × 3;
a21 = (−1)^(2+1) × 2;  a22 = (−1)^(2+2) × 1;

Now the co-factor matrix of matrix A is

Acf =
4 −3
−2 1

Now Adj(A) of matrix A is

Adj(A) =
4 −2
−3 1

Now the determinant of the matrix A is

Det(A) = −2

Finally, the inverse of matrix A is

A⁻¹ = Adj(A)/Det(A) =
−2 1
1.5 −0.5
This result can be obtained by calling Scilab function inv like
✞
--> inv([1,2;3,4])
✌
✆
✞
ans =
- 2. 1.
1.5 - 0.5
✌
✆
2.3.21 Orthogonal Matrix (orth)
In linear algebra, an orthogonal matrix is a square matrix whose columns and rows are orthogonal unit vectors (orthonormal vectors). An orthogonal matrix satisfies

AᵀA = AAᵀ = I

where Aᵀ is the transpose of the matrix A and I is the identity matrix. From the relation

A⁻¹A = AA⁻¹ = I

we can say that a matrix A is orthogonal if its transpose is equal to its inverse, i.e. Aᵀ = A⁻¹. For example,

1 0
0 1

and

0.96 −0.28
0.28 0.96
An orthogonal basis for an inner product space V is a basis for V whose vectors are mutually orthogonal. If the vectors of an orthogonal basis are normalized, the resulting basis is an orthonormal basis. In Scilab, orth returns an orthogonal basis of a given matrix. To illustrate the method of the orth function, consider a matrix

A =
a b
c d

with columns u = [a; c] and v = [b; d]. The orthogonal basis vector for the first column is found by dividing all column elements by the column normalization value:

c1 = [a/√(a² + c²); c/√(a² + c²)]

The second column is calculated by using the relation

c2 = v − ((u · v)/(u · u)) × u

or

c2 = [b; d] − ((ab + cd)/(a² + c²)) × [a; c]

Now divide all elements of the second column by its normalization value. The orthogonal basis is the matrix constructed with columns c1 and c2.
✞
--> B=orth(A)
✌
✆
This returns an orthonormal basis for the range of A. The columns of B span the same space as the columns of A, and the columns of B are orthogonal, so that Bᵀ * B = eye(rank(A)). The number of columns of B is the rank of A. Take the matrix
A = [3 2; 1 2]
where u = [3; 1] and v = [2; 2]. The orthogonal basis vector for the first column is
found by dividing all column elements by the column normalisation value:
c1 = [3/√(3² + 1²); 1/√(3² + 1²)] = [0.948683298; 0.316227766]
The second column is calculated by using the relation
c2 = [2; 2] − ((3 × 2 + 1 × 2)/(3² + 1²)) × [3; 1]
This gives
c2 = [−0.4; 1.2]
Dividing it by the normalised column value, we get
c2 = [−0.4/√((−0.4)² + 1.2²); 1.2/√((−0.4)² + 1.2²)] = [−0.316227766; 0.948683298]
The orthogonal basis shall be the matrix constructed with columns c1 and c2:
Q = [0.948683298 −0.316227766; 0.316227766 0.948683298]
✞
--> A=[3 2;1 2]
--> orth(A)
✌
✆
✞
ans =
- 0.8649101 - 0.5019268
- 0.5019268 0.8649101
✌
✆
This result is different from what we have calculated. This is because orth uses a different
orthonormalisation procedure than the Gram-Schmidt method illustrated above. A Scilab script
implementing the Gram-Schmidt method is shown below:
✞
--> function A = myOrth(A)
-->   k = size(A,2);
-->   function w = proj(u,v)
-->     w = (sum(v.*u)/sum(u.*u)) * u;
-->   endfunction
-->   for r = 1:1:k
-->     A(:,r) = A(:,r) / norm(A(:,r))
-->     for c = r+1:1:k
-->       A(:,c) = A(:,c) - proj(A(:,r),A(:,c))
-->     end
-->   end
--> endfunction
--> A = [3,2;1,2]
--> myOrth(A)
✌
✆
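The same Gram-Schmidt procedure can be sketched in Python with NumPy (NumPy is an assumption here, used only as a cross-check of the hand calculation):

```python
import numpy as np

def my_orth(A):
    """Modified Gram-Schmidt orthonormalisation, column by column."""
    A = A.astype(float).copy()
    k = A.shape[1]
    for r in range(k):
        A[:, r] /= np.linalg.norm(A[:, r])            # normalise column r
        for c in range(r + 1, k):
            A[:, c] -= (A[:, c] @ A[:, r]) * A[:, r]  # remove projection on column r
    return A

Q = my_orth(np.array([[3.0, 2.0], [1.0, 2.0]]))
print(Q)  # columns ~ [0.9487, 0.3162] and [-0.3162, 0.9487]
```

The columns agree with the hand-computed c1 and c2 above, up to round-off.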
2.3.22 Rank of Matrix (rank)
rank determines the number of non-zero rows of a matrix when it is reduced to echelon form.
Suppose a matrix A is in echelon form as
A = [1 0; 0 1]
It has two non-zero rows, so the rank of the matrix is 2. Similarly, the rank of the following
matrix
B = [1 2 0; 0 0 1; 0 0 0]
is also two. The rank of a matrix can be obtained in Scilab by using the rank function
like
✞
--> rank([1, 0; 0, 1])
✌
✆
✞
ans =
2.
✌
✆
Example with another matrix of order 3 × 3:
✞
--> rank([1,3,5;4,8,9;2,1,6])
✌
✆
✞
ans =
3.
✌
✆
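As a cross-check, the same ranks can be computed in Python with NumPy (an assumption outside the Scilab text), which estimates the rank from the singular values:

```python
import numpy as np

# Rank equals the number of non-zero rows in the row-echelon form
r1 = np.linalg.matrix_rank(np.array([[1, 0], [0, 1]]))
r2 = np.linalg.matrix_rank(np.array([[1, 3, 5], [4, 8, 9], [2, 1, 6]]))
r3 = np.linalg.matrix_rank(np.array([[1, 2, 0], [0, 0, 1], [0, 0, 0]]))
print(r1, r2, r3)  # 2 3 2
```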
2.3.23 Eigenvalues and Eigenvectors (spec)
spec evaluates eigenvalues and eigenvectors of a square matrix. The eigenvalues of a
matrix are given by
|A − λI| = 0
Assume a matrix of order 2 × 2
A = [1 2; 3 4]
Now its eigenvalues are given by |A − λI| = 0, i.e.
|1 − λ 2; 3 4 − λ| = 0
Or
(1 − λ) × (4 − λ) − 6 = 0
On solving it,
λ = −0.372281; 5.372281
Or, eigenvalues in matrix form, arranged in descending order:
d = [5.372281 0.000000; 0.000000 −0.372281]
For λ = 5.372281, the eigenvector v1 satisfies (A − λI)v1 = 0. So,
[1 − 5.372281 2; 3 4 − 5.372281] [x; y] = 0
[−4.372281 2; 3 −1.372281] [x; y] = 0
Or
−4.372281x + 2y = 0; 3x − 1.372281y = 0
To get a solution, put x = 1 in −4.372281x + 2y = 0; this gives y = 2.186140.
To get the eigenvector, we normalize these two values as
x = 1/√(1² + 2.186140²); y = 2.186140/√(1² + 2.186140²)
This gives x = 0.415973 and y = 0.909376. As the coefficients of the above two eigenvector
equations are of opposite signs, the values of x and y shall be either both negative or both
positive. Now, we substitute the x and y values in the equation g = 3x − 1.372281y to find the
minimum positive value.
g = 3 × 0.415973 − 1.372281 × 0.909376 = −0.000000407
Taking the sign convention, the other possible solution set is x = −0.415973 and y =
−0.909376.
g = 3 × (−0.415973) − 1.372281 × (−0.909376) = 0.000000407
When x = −0.415973 and y = −0.909376, we have a positive g value. This gives the first
eigenvector corresponding to λ = 5.372281:
v1 = [−0.415973; −0.909376]
We can also find the normalized x and y solution from 3x − 1.372281y = 0, and the minimum
positive value of f = −4.372281x + 2y can be found by substituting the x and y values, taking
sign conventions. For λ = −0.372281, the eigenvector v2 satisfies (A − λI)v2 = 0. So,
[1 − (−0.372281) 2; 3 4 − (−0.372281)] [x; y] = 0
[1.372281 2; 3 4.372281] [x; y] = 0
Or
1.372281x + 2y = 0; 3x + 4.372281y = 0
As the coefficients of the above two eigenvector equations are of the same signs, the values of x
and y shall have opposite signs. On solving these two algebraic equations, as explained
for λ = 5.372281, we have x = −0.824564 and y = 0.565767 or x = 0.824564 and
y = −0.565767. The acceptable values are x = −0.824564 and y = 0.565767.
This gives the second eigenvector corresponding to λ = −0.372281:
v2 = [−0.824564; 0.565767]
The corresponding eigenvector matrix from the above two eigenvectors (v1 and v2) is
v = [−0.415973 −0.824564; −0.909376 0.565767]
From Scilab, eigenvalues and eigenvectors are obtained by using function spec like
✞
--> [vec, val] = spec([1,2; 3,4])
✌
✆
✞
val =
- 0.3722813 0
0 5.3722813
vec =
- 0.8245648 - 0.4159736
0.5657675 - 0.9093767
✌
✆
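A quick cross-check of the eigenvalues in Python with NumPy (an assumption; not part of the Scilab text):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
vals, vecs = np.linalg.eig(A)  # roots of det(A - lambda*I) = 0

# Hand computation gave lambda = (5 +/- sqrt(33)) / 2
expected = np.array([(5 - np.sqrt(33)) / 2, (5 + np.sqrt(33)) / 2])
print(np.sort(vals))  # approx [-0.3722813  5.3722813]
```

Each column of `vecs` satisfies A v = λ v, just like the columns of `vec` returned by spec.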
2.3.24 Square Root of Matrix
Assume a matrix of order 2 × 2
A = [1 2; 3 4]
Its eigenvalues are given by |A − λI| = 0, i.e.
(1 − λ) × (4 − λ) − 6 = 0
On solving it,
λ = −0.372281; 5.372281
Or, eigenvalues in matrix form, arranged in descending order:
d = [5.372281 0.000000; 0.000000 −0.372281]
The corresponding eigenvectors are found exactly as explained in the previous section (spec). For
λ = 5.372281, the eigenvector is
v1 = [−0.415973; −0.909376]
and for λ = −0.372281, the eigenvector is
v2 = [−0.824564; 0.565767]
The corresponding eigenvector matrix from the two eigenvectors (v1 and v2) is
v = [−0.415973 −0.824564; −0.909376 0.565767]
Note that each column of the eigenvector matrix corresponds to one eigenvalue. Now,
the matrix A can be written as A = v × d × v^−1. The square root of the matrix is given
by
A^(1/2) = v × d^(1/2) × v^−1
✞
--> A=[1,2; 3,4]
--> [vec, val] = spec(A)
--> B = vec*sqrt(val)*inv(vec)
--> M = B*B
✌
✆
✞
A =
1. 2.
3. 4.
val =
- 0.372281 0
0 5.372281
vec =
- 0.824564 - 0.415973
0.565767 - 0.909376
B =
0.553688 + 0.464394 i 0.806960 - 0.212426 i
1.210441 - 0.318639 i 1.764129 + 0.145754 i
M =
1. + 1.110D-16i 2. + 4.163D-17i
3. + 8.327D-17i 4.
✌
✆
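The eigen-decomposition route to the matrix square root can be sketched in Python with NumPy (an assumption, used only for cross-checking); since one eigenvalue is negative, the square root is complex:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
vals, vecs = np.linalg.eig(A)

# A^(1/2) = v * d^(1/2) * v^-1; cast to complex for the negative eigenvalue
d_sqrt = np.diag(np.sqrt(vals.astype(complex)))
B = vecs @ d_sqrt @ np.linalg.inv(vecs)
M = B @ B
print(M.real)  # recovers [[1. 2.], [3. 4.]] up to round-off
```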
Here, matrix M is the same as matrix A, up to a negligible imaginary round-off.
2.3.25 Hermitian Factorisation (sqroot)
A Hermitian matrix A (or self-adjoint matrix) is a complex square matrix that is equal
to its own conjugate transpose (A^H). In other words, the ij-th element of a Hermitian matrix
is equal to the complex conjugate of the ji-th element for all indices
i and j. The elements of a Hermitian matrix satisfy a_ij = conj(a_ji). If A is a Hermitian matrix,
then sqroot of matrix A returns a matrix B such that A = BB'. In Scilab, the sqroot function
is used to find this square-root factor of a Hermitian matrix.
✞
--> A=[1,2;2,1]
--> B=sqroot(A)
--> B*B'
✌
✆
✞
A =
1. 2.
2. 1.
B =
- 1.2247449 0.7071068
- 1.2247449 - 0.7071068
ans =
1. 2.
2. 1.
✌
✆
2.3.26 Singular Matrix
A matrix is said to be singular if its determinant is zero. A two-dimensional singular
matrix is
A = [1 1; 1 1]
Its determinant is |A| = 1 − 1 = 0. The inverse of a singular matrix does not exist, as its
determinant is zero.
2.3.27 Singular Value Approximation (sva)
sva is an acronym for Singular Value Approximation of a matrix.
✞
--> sva([1,2;3,4])
✌
✆
✞
ans =
-0.4045536 -0.9145143
-0.9145143 0.4045536
✌
✆
This returns the left singular vectors of the given matrix. Mathematically, sva returns the matrix U
of the Singular Value Decomposition (see svd).
2.3.28 Singular Value Decomposition (svd)
svd is an acronym for Singular Value Decomposition of a matrix. Singular Value Decomposition
(SVD) is a factorization of a real or complex matrix. Consider the singular value decomposition of a
rectangular matrix A of size m × n, where, for example, the m rows represent genes and the n
columns represent the experimental conditions. The SVD theorem states that
A(m×n) = U(m×m) S(m×n) V^T(n×n)
where U and V are orthogonal, i.e. U^T U = I(m×m) and V^T V = I(n×n). Here, the columns
of U are the left singular vectors; S is diagonal and holds the singular values; and V^T has rows
that are the right singular vectors. The SVD calculation consists of:
1. Eigenvalues and eigenvectors of AA^T and A^T A.
2. The eigenvectors of AA^T make up the columns of U.
3. The eigenvectors of A^T A make up the columns of V.
4. The singular values in S are square roots of the eigenvalues from AA^T or A^T A.
The singular values are the diagonal entries of the S matrix and are arranged in descending
order. The singular values are always real numbers. If the matrix A is a real matrix, then
U and V are also real. Let a matrix A and its transpose matrix A^T be, respectively,
A = [1 2; 3 4]; A^T = [1 3; 2 4]
Now, AA^T is
[1 2; 3 4] [1 3; 2 4] = [5 11; 11 25]
The eigenvalues of this matrix are found when |AA^T − λI| = 0. So,
|5 − λ 11; 11 25 − λ| = (5 − λ)(25 − λ) − 121 = 0
This gives the eigenvalues of the matrix AA^T, arranged in descending order:
val = [29.866069 0.000000; 0.000000 0.133931]
Now find the corresponding eigenvectors from (AA^T − λI)[x; y] = 0. For λ = 29.866069,
[−24.866069 11; 11 −4.866069] [x; y] = 0
It gives
−24.866069x + 11y = 0; 11x − 4.866069y = 0
The eigenvector (v1) corresponding to λ = 29.866069 is found by solving these two equations.
Recalling the solution of algebraic equations, these equations have only one exact solution,
that is x = 0 and y = 0. This is true if we take x, y ∈ I. If x, y ∈ R, then we can
approximate the values of x and y so that the above algebraic equations are satisfied
approximately, i.e. the left-hand side tends to zero. As x and y have “opposite sign”
coefficients, the values of x and y may be either both positive or both negative. To
solve the relation, put x = 1 in −24.866069x + 11y = 0; we have
−24.866069 × 1 + 11y = 0
This gives y = 2.260551. Normalizing these values, we have
x = 1/√(1² + 2.260551²); y = 2.260551/√(1² + 2.260551²)
Or
x = 0.404553690; y = 0.914514249
Substituting these values in the equation g = 11x − 4.866069y, we have
g1+ = 11 × 0.404553690 − 4.866069 × 0.914514249 = −0.000001153
g1− = 11 × (−0.404553690) − 4.866069 × (−0.914514249) = 0.000001153
Note that there are two equations for λ = 29.866069, therefore the approximate values of x
and y may be found from either of the two equations. But here, we shall find x and y from
each equation separately, and their values will be put in the other equation to check which
solution set is closer to zero. Take x = 1 in 11x − 4.866069y = 0; we get the value
of y from
11 × 1 − 4.866069y = 0
This gives y = 2.260551587. Normalizing these values, we have
x = 1/√(1² + 2.260551587²); y = 2.260551587/√(1² + 2.260551587²)
Or
x = 0.404553602; y = 0.914514288
Substituting the values of x and y in the equation f = −24.866069x + 11y, we have
f1+ = −24.866069 × 0.404553602 + 11 × 0.914514288 = −0.000000614
f1− = −24.866069 × (−0.404553602) + 11 × (−0.914514288) = 0.000000614
Here, x = −0.404553602 and y = −0.914514288 give the minimum positive value. Why do we
consider the minimum positive value? To understand it, consider the modulo function
f(x) = −x when x < 0; x when x ≥ 0
It means 0 is on the positive side. That is why 0.000000614 is considered closer to zero
than −0.000000614. Hence the acceptable eigenvector is
v1 = [−0.404553; −0.914514]
Now, we shall find the corresponding eigenvector for the eigenvalue λ = 0.133931:
[4.866068 11; 11 24.866068] [x; y] = 0
It gives
4.866068x + 11y = 0; 11x + 24.866068y = 0
The eigenvector (v2) corresponding to λ = 0.133931 is found by solving these two equations.
As before, these equations have only one exact solution, that is x = 0 and y = 0, if we take
x, y ∈ I. If x, y ∈ R, then we can approximate the values of x and y so that the above
algebraic equations are satisfied approximately, i.e. the left-hand side tends to zero. As
x and y have “same sign” coefficients, the values of x and y shall have opposite signs. We
know that any number, when multiplied by a constant |k| < 1, i.e. −1 < k < +1, gives a result
that approaches zero as k → 0, so the solutions of these two equations shall lie within
(−1, +1). To solve the relation, take x = 1 in 4.866068x + 11y = 0; we have
4.866068 × 1 + 11y = 0
This gives y = −0.442369818. Normalizing these values, we have
x = 1/√((−0.442369818)² + 1²); y = −0.442369818/√((−0.442369818)² + 1²)
Or
x = 0.914514319; y = −0.404553533
Substituting these values in the equation g = 11x + 24.866068y, we have
g1+ = 11 × 0.914514319 + 24.866068 × (−0.404553533) = 0.000001848
g1− = 11 × (−0.914514319) + 24.866068 × 0.404553533 = −0.000001848
To solve the second relation 11x + 24.866068y = 0, take x = 1; we have
11 × 1 + 24.866068y = 0
This gives y = −0.442369899. Normalizing these values, we have
x = 1/√((−0.442369899)² + 1²); y = −0.442369899/√((−0.442369899)² + 1²)
Or
x = 0.914514291; y = −0.404553595
Substituting these values in the equation f = 4.866068x + 11y, we have
f1+ = 4.866068 × 0.914514291 + 11 × (−0.404553595) = −0.000000818
f1− = 4.866068 × (−0.914514291) + 11 × 0.404553595 = 0.000000818
The minimum positive value is obtained when x = −0.914514291 and y = 0.404553595.
So, the eigenvector is
v2 = [−0.914514; 0.404553]
The eigenvector matrix of AA^T is the column arrangement of v1 and v2:
vec = [−0.404553 −0.914514; −0.914514 0.404553]
Here,
U = [−0.404553 −0.914514; −0.914514 0.404553]
Similarly, the eigenvalues and eigenvectors of A^T A shall be computed as was done for AA^T
above. The matrix product A^T A is
[1 3; 2 4] [1 2; 3 4] = [10 14; 14 20]
The eigenvalues of this matrix are found when |A^T A − λI| = 0. So,
|10 − λ 14; 14 20 − λ| = (10 − λ)(20 − λ) − 196 = 0
This gives the eigenvalues and corresponding eigenvectors as
val = [29.866069 0.000000; 0.000000 0.133931]; vec = [−0.576048 0.817415; −0.817415 −0.576048]
Here,
V = [−0.576048 0.817415; −0.817415 −0.576048]
S is the square root of the eigenvalues from AA^T or A^T A. So,
S = [√29.866069 0.000000; 0.000000 √0.133931] = [5.464985 0.000000; 0.000000 0.365966]
Here, each column of the val matrix represents the eigenvalues of the AA^T or A^T A matrices,
and the corresponding columns of U or V are the eigenvectors corresponding to those eigenvalues,
respectively. The relation between U, S and V with A is
A = U S V^T
Now, from the above explained example, the computed values of U, S and V are
U = [−0.404553 −0.914514; −0.914514 0.404553]
V = [−0.576048 0.817415; −0.817415 −0.576048]
S = [5.464985 0.000000; 0.000000 0.365966]
In Scilab, svd of given matrix is given by
✞
--> [U,S,V] = svd([1,2;3,4])
✌
✆
This function has three outputs as given below:
✞
V =
- 0.5760484 0.8174156
- 0.8174156 - 0.5760484
S =
5.4649857 0.
0. 0.3659662
U =
- 0.4045536 - 0.9145143
- 0.9145143 0.4045536
✌
✆
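A cross-check of the decomposition in Python with NumPy (an assumption; used only to verify the hand calculation):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
U, s, Vt = np.linalg.svd(A)
print(s)  # approx [5.4649857 0.3659662]

# Singular values are square roots of the eigenvalues of A A^T
ev = np.linalg.eigvals(A @ A.T).real
print(np.sqrt(np.sort(ev)[::-1]))

# The factors recombine to A
print(U @ np.diag(s) @ Vt)
```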
2.3.29 Trace of Matrix (trace)
The trace of a square matrix is the sum of its diagonal elements. If A is a square matrix of
order n × n then
Tr(A) = Σ(i=1..n) a_ii
If λi are the eigenvalues of a matrix A, then the trace of the matrix is
Tr(A) = Σi λi
In Scilab, trace returns the trace of a matrix.
✞
--> A=rand(3,3)
--> trace(A)
✌
✆
✞
A =
0.3312931 0.7221236 0.3707945
0.0518477 0.0774625 0.2116117
0.4149242 0.5855878 0.1903269
ans =
0.5990825
✌
✆
3 Calculus
3.1 Derivative Calculus
3.1.1 Derivative
The derivative of a function f(x) is given by
f′(x) = lim(h→0) [f(x + h) − f(x)] / h
where h tends to zero but is not zero. The symbolic derivative of a function in Scilab
is calculated by the derivat function.
✞
--> s=poly(0,'s');
--> derivat(1/s)
✌
✆
ans =
−1/s²
The derivative function calculates the numerical derivative of a function.
✞
--> function y=myF(x)
-->   y = x*x;
--> endfunction
--> x = 1;
--> fp = derivative(myF, x)
✌
✆
✞
fp =
2.
✌
✆
3.1.2 Numeric Derivative
A second method of finding the derivative of a function is numdiff, which is more accurate
than the derivative function. Mathematically, it is given by
numdiff(f)|x=c = df/dx evaluated at x = c
✞
--> function y=myF(x)
-->   y = x*x;
--> endfunction
--> x = 1;
--> fp = numdiff(myF, x)
✌
✆
✞
fp =
2.
✌
✆
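The limit definition above translates directly into a finite-difference sketch. The Python below (an assumption, mirroring what derivative and numdiff do numerically) uses a central difference:

```python
def num_derivative(f, x, h=1e-6):
    # central difference: (f(x + h) - f(x - h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

fp = num_derivative(lambda x: x * x, 1.0)
print(fp)  # approx 2.0
```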
3.1.3 Numeric Difference
The diff function is used to find the difference between consecutive elements of a vector.
This function is used with a vector having only numeric values. In mathematics, the first-order
discrete difference is given by
∆f = f(n+1) − f(n)
A function table and its first-order difference are given in the following table.
f = 1 8 27 64 125 216
∆f = 7 19 37 61 91
The diff function does the same work. Numerically, it is represented like
diff(f) = f(n+1) − f(n)
✞
--> v=(1:8)^3;
--> diff(v)
✌
✆
✞
ans =
7. 19. 37. 61. 91. 127. 169.
✌
✆
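The same first-order difference can be reproduced in Python with NumPy (an assumption outside the Scilab text):

```python
import numpy as np

v = np.arange(1, 9) ** 3  # 1, 8, 27, ..., 512
d = np.diff(v)            # f(n+1) - f(n)
print(d)                  # [  7  19  37  61  91 127 169]
```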
3.1.4 Function Evaluation (feval)
feval performs multiple evaluations of a function for one or two arguments of vector type. The syntax is
✞
--> feval(var1, var2, ..., varn, func)
✌
✆
Example is
✞
--> function [z]=f(x,y)
-->   z=x+y;
--> endfunction
--> a=1:3;
--> b=1:3;
--> feval(a,b,f)
✌
✆
✞
ans =
2. 3. 4.
3. 4. 5.
4. 5. 6.
✌
✆
3.2 Integral Calculus
3.2.1 Integration
Integration is also called anti-derivative. It is a method of finding reverse of differentiation.
(Figure: area under the curve f(x) between the limits x = a and x = b, divided into strips of width dx.)
The symbolic representation of the integral of a function is given by
I = ∫ f(x) dx
where the symbol ∫ represents the integration, f(x) is the function on which integration is to
be performed, and in dx, d represents a small element of the function and x the base variable
against which integration is to be performed. Variables other than x are considered as
constants. An indefinite integral has global scope, while a definite integral is computed
within the given limits of the independent variable:
I = ∫(a to b) f(x) dx
Scilab does not support symbolic integration, but it provides precise numerical
integration, i.e. definite integration. Integration of a simple function f(x) = x is
✞
--> I=integrate('x','x',0,1)
✌
✆
✞
I =
0.5
✌
✆
In this integration command, the first term is the integrand function, the second term is the
base variable, the third is the lower limit and the fourth term is the upper limit. Applying
these observations, the integration of t² is
✞
--> I=integrate('t*t','t',0,1)
✌
✆
✞
I =
0.3333333
✌
✆
Similarly, integration of a sine function is
✞
--> I=integrate('sin(t)','t',0,1)
✌
✆
✞
I =
0.4596977
✌
✆
Remember that the lower limit and upper limit in trigonometric integration are taken by
trigonometric functions in the radian unit of angle. Definite integration of a trigonometric
function is computed by using the intg function.
✞
--> function y=f(x)
-->   y=sin(x)/x
--> endfunction
--> I=intg(0,2*%pi,f)
✌
✆
✞
I =
1.4181516
✌
✆
There are several methods of finding the integration by interpolation. One of them
is the Trapezoidal method.
✞
--> t=0:0.1:%pi;
--> inttrap(t,sin(t))
✌
✆
✞
ans =
1.9974689
✌
✆
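The trapezoidal rule behind inttrap is easy to sketch directly. The Python below (NumPy assumed; not part of the Scilab text) reproduces the value on the same grid:

```python
import numpy as np

t = np.arange(0, np.pi, 0.1)  # same grid as the Scilab inttrap call
y = np.sin(t)

# trapezoidal rule: sum of strip areas (y[i] + y[i+1]) / 2 * dt
approx = np.sum((y[:-1] + y[1:]) / 2 * np.diff(t))
print(approx)  # approx 1.9975 (the exact integral over [0, pi] is 2)
```

The small deficit comes from the grid stopping at 3.1 rather than π, plus the usual trapezoidal discretisation error.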
3.2.2 Double Integral (int2d)
A double integral is used to integrate a function along two dimensions. For example, a double
integral is given by
A = ∫∫ f(x, y) dx dy
where dx and dy are the length and width of an element along the x-axis and y-axis. The double
integral gives the volume of the region bounded between the two axes and f(x, y).
(Figure: volume element of base dx × dy under the surface f(x, y).)
int2d is used to integrate two dimensional functions. The syntax is
✞
--> [I,e] = int2d( ..
-->   a three-dimensional N array for abscissae, ..
-->   a three-dimensional N array for ordinates, ..
-->   external function ..
--> )
✌
✆
Here I is integrated value and e is estimated error. Example is
✞
--> function [z]=f(x,y)
-->   z=x+y;
--> endfunction
--> X=[0,0;1,1;1,0];
--> Y=[0,0;0,1;1,1];
--> int2d(X,Y,f)
✌
✆
✞
ans =
1.
✌
✆
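The two triangles defined by the columns of X and Y tile the unit square, so the result can be cross-checked with an ordinary iterated integral. The sketch below uses Python with SciPy (an assumption; SciPy is not part of the Scilab text):

```python
from scipy import integrate

# integral of f(x, y) = x + y over the unit square [0,1] x [0,1],
# which the two triangles in the Scilab call tile exactly
I, err = integrate.dblquad(lambda y, x: x + y, 0, 1, 0, 1)
print(I)  # 1.0
```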
3.2.3 Triple Integration (int3d)
int3d is used to integrate three dimensional functions. The syntax is
✞
--> [I,e] = int3d( ..
-->   a four-dimensional array for abscissae, ..
-->   a four-dimensional array for ordinates, ..
-->   a four-dimensional array for the z-axis, ..
-->   external function ..
--> )
✌
✆
Here I is the integrated value and e is the estimated error. The ‘external function’ is a function,
list or string which defines the integrand f(xyz, nf), where xyz is the vector of a point's
coordinates and nf is the function number. By default nf is 1. An example is
✞
--> function v=f(xyz, numfun)
-->   v=xyz'*xyz;
--> endfunction
--> // Tetrahedron coordinates
--> // (0,0,0),(1,0,0),(0,1,0),(0,0,1)
--> X=[0;1;0;0];
--> Y=[0;0;1;0];
--> Z=[0;0;0;1];
--> [I,e]=int3d(X,Y,Z,f,1)
✌
✆
✞
e =
5.551D-16
I =
0.05
✌
✆
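The value 0.05 can be verified with an iterated triple integral over the same tetrahedron. The Python/SciPy sketch below (SciPy assumed) integrates x² + y² + z² over x, y, z ≥ 0, x + y + z ≤ 1:

```python
from scipy import integrate

# f(x, y, z) = x^2 + y^2 + z^2 over the unit tetrahedron
I, err = integrate.tplquad(
    lambda z, y, x: x * x + y * y + z * z,
    0, 1,                        # x limits
    0, lambda x: 1 - x,          # y limits
    0, lambda x, y: 1 - x - y,   # z limits
)
print(I)  # 0.05
```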
Another example is
✞
--> function v=f(F, numfun)
-->   // x+y+z=1
-->   v=1;
--> endfunction
--> // Tetrahedron coordinates
--> // (0,0,0),(1,0,0),(0,1,0),(0,0,1)
--> X=[0;1;0;0]; // Vector of abscissa values
--> Y=[0;0;1;0]; // Vector of ordinate values
--> Z=[0;0;0;1]; // Vector of z values
--> [I,e]=int3d(X,Y,Z,f,1)
✌
✆
✞
e =
1.850D-15
I =
0.1666667
✌
✆
Another example is
✞
--> function v=f(F, numfun)
-->   // x+y+z=6
-->   v=6;
--> endfunction
--> // Tetrahedron coordinates
--> // (0,0,0),(2,0,0),(0,3,0),(0,0,1)
--> X=[0;2;0;0]; // Vector of abscissa values
--> Y=[0;0;3;0]; // Vector of ordinate values
--> Z=[0;0;0;1]; // Vector of z values
--> [I,e]=int3d(X,Y,Z,f,1)
✌
✆
✞
e =
6.661D-14
I =
6.
✌
✆
3.2.4 Integrate (integrate)
The integrate function calculates the sum of the values of a function for all points within
the limits by using the quadrature method. Plane integration is simply the area bounded by the
function and the axis of limits. It is given by
∫ f(x) dx = Σ(i=0..n) f(xi) × h
where h is the width between two consecutive lower and upper bound limits.
✞
--> x0=0;
--> x1=2;
--> X=integrate('sin(x)', 'x', x0, x1)
✌
✆
✞
ans =
1.4161468
✌
✆
3.2.5 Definite Integration (intg)
It returns the definite integration of an external function. If f(x) is a function bounded
within the closed interval [a, b], then the definite integral of the function is given by
I = ∫(a to b) f(x) dx
The definite integration of the given relation
I = ∫(0 to 2π) x sin(30x) / √(1 − (x/(2π))²) dx
is −2.5432. Now the Scilab call of the definite integral of the above relation is
✞
--> function y=f(x)
-->   y=x*sin(30*x)/sqrt(1-((x/(2*%pi))^2))
--> endfunction
--> I=intg(0,2*%pi,f)
✌
✆
✞
ans =
- 2.5432596
✌
✆
3.2.6 Cauchy’s Integration (intc)
If f is a complex-valued function, then intc computes the integral, over a curve in the
complex plane from the lower limit z1 to the upper limit z2, of the complex function f(z)
along the line z1z2. Since line integrals of analytic functions are independent of the path,
the function intc can be used to evaluate integrals of any analytic function. The syntax of
Cauchy's line integral is given below:
✞
--> [I,e] = intc( ..
-->   var z1, .. // complex number
-->   var z2, .. // complex number
-->   external function of two vars (f) ..
--> )
✌
✆
Here I is the integrated value and e is the estimated error. Let z = x(t) + iy(t) be a parametric
curve of t in the complex plane C, where the parameter runs from z1 to z2. Let f(z) be a
continuous function on the complex plane C.
(Figure: a curve in the complex plane from z1 to z2, subdivided at points z0, z1, . . ., zn into elements ∆zi.)
The parameter t is subdivided into n subdivisions as a = t0, t1, . . ., tn = b, and the corresponding
subdivisions of the curve on C are z0, z1, . . ., zn. The width of two consecutive subdivisions
of the curve is ∆zi = zi − zi−1, where 0 < i ≤ n. If the function is continuous and the limit
exists over each subdivision of the curve, then the bounded area of the curve within the given
bounded limits is the sum of the areas of all subdivisions. So,
Sn = Σ(i=1..n) f(zi) ∆zi
Here, Sn is the line integral of the function f(z) and it is represented by
I = ∫C f(z) dz = ∫(z1 to z2) f(z) dz
where dz = |∆zi| → 0. Scilab uses the following relation for computation of the complex integral
in the intc function:
∫C f(z) dz = ∫(z1 to z2) f(C(z)) C′(z) dz
To parameterize this relation, put z = z1 + (z2 − z1)t. When z = z1, t = 0 and when
z = z2, t = 1. This gives
∫C f(z) dz = ∫(0 to 1) f(z1 + (z2 − z1)t) (z2 − z1) dt
The real and imaginary parts of the integrand are computed separately and their sum is the
desired result.
Illustrated Example Take a function f(z) = i. We have to find Cauchy’s integral along
the line za = 1 − i to zb = 1 + i. The Cauchy’s line integral for the curve shall be
I = ∫C f(z) dz = ∫(1−i to 1+i) i dz
I = [iz] from 1−i to 1+i = i [(1 + i) − (1 − i)]
This gives
I = i × 2i = 2i² = −2
This is the desired result.
✞
--> function [I]=f(z)
-->   I=%i;
--> endfunction
--> intc(1-%i,1+%i,f)
✌
✆
✞
ans =
- 2
✌
✆
Illustrated Example Take a function f(z) = 1 + i. We have to find Cauchy’s integral
along the line za = 1 − i to zb = 1 + i. The Cauchy’s line integral for the curve shall be
I = ∫C f(z) dz = ∫(1−i to 1+i) (1 + i) dz
I = [(1 + i)z] from 1−i to 1+i = (1 + i) [(1 + i) − (1 − i)]
This gives
I = (1 + i) × 2i = 2(−1 + i)
This is the desired result.
✞
--> function [I]=f(z)
-->   I=1+%i;
--> endfunction
--> intc(1-%i,1+%i,f)
✌
✆
✞
ans =
- 2 + 2 i
✌
✆
Illustrated Example Let the given function be f(z) = z². We have to integrate it along
the line from 0 to 1 + i in the complex plane. The Cauchy line integral intc along the curve is
given by
∫C f(z) dz = ∫(0 to 1+i) f(z) dz
To parameterize this integral, we substitute z = z1 + (z2 − z1)t. When z = z1,
t = 0 and when z = z2, t = 1. Again,
dz = (z2 − z1) dt
Thus
∫C f(z) dz = ∫(0 to 1) (z1 + (z2 − z1)t)² × (z2 − z1) dt
Substituting the values of z1 and z2,
∫C f(z) dz = ∫(0 to 1) (1 + i)² t² × (1 + i) dt = ∫(0 to 1) (1 + i)³ t² dt
Or
∫C f(z) dz = (1 + i)³ [t³/3] from 0 to 1 = (1 + i)³/3 = 2i(1 + i)/3
This is the desired result. The Scilab code for this example is
✞
--> function y=f(z)
-->   y=z^2
--> endfunction
--> intc(0,1+%i,f)
✌
✆
✞
ans =
- 0.6666667 + 0.6666667 i
✌
✆
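The parameterization z = z1 + (z2 − z1)t used above can be evaluated numerically. The Python/SciPy sketch below (SciPy assumed; not part of the Scilab text) integrates the real and imaginary parts separately, as intc does:

```python
from scipy import integrate

z1, z2 = 0.0 + 0.0j, 1.0 + 1.0j
f = lambda z: z ** 2

# z = z1 + (z2 - z1) t, so dz = (z2 - z1) dt
g = lambda t: f(z1 + (z2 - z1) * t) * (z2 - z1)
re, _ = integrate.quad(lambda t: g(t).real, 0, 1)
im, _ = integrate.quad(lambda t: g(t).imag, 0, 1)
I = re + 1j * im
print(I)  # approx (-0.6667+0.6667j), i.e. (1+i)^3 / 3
```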
3.2.7 Cauchy Integration (intl)
If f is a complex-valued function, then intl computes the integral of f(z) over the line
in the complex plane along the curve defined by z = z0 + re^(iθ) for θ1 ≤ θ ≤ θ2 and
|z − z0| ≤ r. The syntax of this function is
✞
--> [y] = intl( ..
-->   var a, .. // lower limit, real number
-->   var b, .. // upper limit, real number
-->   origin point as complex number (z0), ..
-->   positive real number (r), ..
-->   external function (f) ..
--> )
✌
✆
Here, z0 is the centre of a circle in the complex plane and r is the radius of the circle, with
the exponential phase between θ1 and θ2. The Cauchy path integral of the complex function f(z)
along the curve
C(θ) = z0 + re^(iθ)
is given by
(Figure: circular arc z = z0 + re^(iθ) between the phase angles θ1 and θ2.)
∫C f(z) dz = ∫(θ1 to θ2) f(C(θ)) C′(θ) dθ
where θ1 ≤ θ ≤ θ2. To convert it into parametric form, put
θ = θ1 + (θ2 − θ1)t
Now, when θ = θ1, t = 0 and when θ = θ2, t = 1. Thus the limit in parametric form is
0 ≤ t ≤ 1. This gives
C(t) = z0 + re^(i[θ1+(θ2−θ1)t])
and
∫C f(z) dz = ∫(0 to 1) f(C(t)) C′(t) dt
Now, the real and imaginary parts of the above relation are integrated separately and the final
result is equal to the sum of the real and imaginary integrals. Scilab uses the following intl
code internally:
✞
--> function y=intl(a,b,z0,r,f)
-->   function y=real1(t)
-->     z=r*exp(%i*((1-t)*a+t*b))
-->     y=real(f(z+z0)*%i*(b-a)*z)
-->   endfunction
-->   function y=imag1(t)
-->     z=r*exp(%i*((1-t)*a+t*b))
-->     y=imag(f(z+z0)*%i*(b-a)*z)
-->   endfunction
-->   y=intg(0,1,real1)+%i*intg(0,1,imag1)
--> endfunction
✌
✆
Illustrated Example Let the given function be f(z) = z². We have to integrate it along
the unit circle, i.e. |z| = r ≤ 1 and 0 ≤ θ ≤ 2π. The Cauchy line integral intl along the
curve is given by
∫C f(z) dz = ∫(a to b) f(z) dz
The curve of the integral about the origin z0 is
z = z0 + re^(iθ); θ1 ≤ θ ≤ θ2
To parameterize this curve, we substitute
θ = θ1 + (θ2 − θ1)t
in z, and get
z = z0 + re^(i(θ1+(θ2−θ1)t))
From θ = θ1 + (θ2 − θ1)t, when θ = θ1, t = 0 and when θ = θ2, t = 1. Again,
dz = i(θ2 − θ1) re^(i(θ1+(θ2−θ1)t)) dt
Thus
∫C f(z) dz = ∫(0 to 1) (z0 + re^(i(θ1+(θ2−θ1)t)))² × i(θ2 − θ1) re^(i(θ1+(θ2−θ1)t)) dt
Substituting the values θ1 = 0 and θ2 = 2π, and taking the origin z0 = 0 + i0 and the radius of
the unit circle r = 1,
∫C f(z) dz = ∫(0 to 1) (e^(2πti))² × 2πi e^(2πti) dt = ∫(0 to 1) 2πi e^(6πti) dt
Or
∫C f(z) dz = 2πi × [e^(6πti)/(6πi)] from 0 to 1 = (1/3)(e^(6πi) − 1) = 0 + 0i
This is the desired result. The Scilab code for this example is
✞
--> function y=f(z)
-->   y=z^2
--> endfunction
--> intl(0,2*%pi,0,1,f)
✌
✆
✞
ans =
- 3.384D-17 - 5.272D-16i
✌
✆
3.3 Differential Equations
An equation is said to be a differential equation if at least one of its terms has a differential
part. For example,
dy/dx + 2y = 0
is a differential equation.
3.3.1 Ordinary Differential Equation (ode)
A first-order ordinary differential equation is given by
df/dt = f(x, t)
and a second-order ordinary differential equation is given by
d²f/dt² = f(x, t)
The ode function is used to solve an ordinary differential equation. The syntax used is
649. 3.3. DIFFERENTIAL EQUATIONS 157
✞
--> y=ode( ..
-->   initial dependent value, ..
-->   initial variable value, ..
-->   point where function is calculated, ..
-->   function ..
--> )
✌
✆
For illustration purposes, we will solve examples of first-order and second-order
ODEs. Consider a first-order ODE for practice and introduction:
dy/dt = −ky    (3.1)
where, at t = 0, y = 0.5, i.e. y(0) = 0.5. Firstly, the above relation is rearranged like
dy/y = −k dt
Integration of this equation is
ln y = −kt + ln c
where ln c is an auxiliary constant of integration and can be obtained by putting in the initial
boundary value, y(0) = 0.5. Hence
y = c e^(−kt)
with
c = 0.5
This gives the result of relation (3.1):
y = 0.5 e^(−kt)
It is the required solution. An example is
✞
--> // Here we will use the dy/dt=-2*y relation.
--> // Its solution is y = y0*e^(-k*t)
--> function ydot = f(t, y)
-->   ydot = -2*y; // k=2
--> endfunction
--> t = 0:0.1:3;
--> y0 = 0.5; // c=0.5 value
--> t0 = 0;
--> y = ode(y0,t0,t,f)
--> plot(t,y)
✌
✆
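The same initial-value problem can be cross-checked in Python with SciPy's solve_ivp (an assumption; not part of the Scilab text), comparing against the exact solution y = 0.5 e^(−2t):

```python
import numpy as np
from scipy.integrate import solve_ivp

# dy/dt = -2*y with y(0) = 0.5; exact solution y = 0.5 * exp(-2t)
t_eval = np.arange(0, 3.01, 0.1)
sol = solve_ivp(lambda t, y: -2 * y, (0, 3), [0.5],
                t_eval=t_eval, rtol=1e-8, atol=1e-10)
print(sol.y[0][-1])  # approx 0.5 * exp(-6)
```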
(Figure: plot of the decaying solution y = 0.5 e^(−2t) for 0 ≤ t ≤ 3.)
Second-order differential equations are solved by using the dassl function.
3.3.2 Differential Algebraic System Solver (dassl)
dassl is used to solve a set of differential-algebraic equations. The function must be in the
form fnc(t, y, ydot) = 0. This returns a matrix of y and ydot, with each row of y
corresponding to one of the elements in the time vector t. The first element of t should
be t0 and correspond to the initial state of the system y(0) and its derivative ẏ(0), so
that the first row of the output y is y(0) and the first row of the output ẏ is ẏ(0). Now,
take a second-order ODE as a problem-solving exercise:
d²y/dt² + y = 0    (3.2)
where, at t = 0, y = 0.5, i.e. y(0) = 0.5 and, at t = 0, ẏ = 0.25, i.e. ẏ(0) = 0.25. The auxiliary
equation of the second-order ODE (3.2) is
m² + 1 = 0
The roots of this second-order algebraic equation for the variable m are
m = ±i
Now the solution of the second-order differential equation (3.2) is
y = c1 e^(it) + c2 e^(−it)    (3.3)
On differentiating and substituting the boundary value for ẏ in the above equation,
ẏ = c1 i e^(it) − c2 i e^(−it)
and
c1 i − c2 i = 0.25
Similarly, substituting the boundary value for y in equation (3.3),
c1 + c2 = 0.5
On solving the relations for c1 and c2,
c1 = (0.5i + 0.25)/(2i); c2 = (0.5i − 0.25)/(2i)
Replacing these constants c1 and c2 in the solution (3.3) of the second-order
ODE (3.2), the result is
y = ((0.5i + 0.25)/(2i)) e^(it) + ((0.5i − 0.25)/(2i)) e^(−it)
Using the rules of complex mathematics along with De Moivre's theorem, the solution of the
second-order ODE (3.2) can be written in terms of trigonometric functions as
y = 0.25 sin(t) + 0.5 cos(t)
This is the solution of the second-order ODE (3.2). In Scilab, the dassl function call is
✞
--> [y] = dassl(y0, t0, tt, fcn)
✌
✆
The ‘fcn’ is described in the form

f(t, y, ydot) = 0

i.e. the function body should contain first degree ODE equations. The function starts from
the initial conditions, i.e. y(t0) and ydot(t0), and iterates over each time value in t. As
the function body has multiple first order differential equations, y0 and ydot0 are vectors,
with one value for each first order differential equation. For each time value in t there
is a corresponding result row in y and ydot. It returns solution matrices y and ydot, with
each row corresponding to one of the elements in the vector t. The first element of t
should be t0 and correspond to the initial state of the system y0 and its derivative
ydot0, so that the first row of the output y is y0 and the first row of the output ydot
is ydot0. To understand this, take the example explained at the beginning of this section.
d^2y/dt^2 + y = 0

To solve this second order differential equation with dassl, we first resolve it into a set
of first order differential equations. In general, an nth order equation

an(t) y^(n) + an−1(t) y^(n−1) + . . . + a0(t) y + r(t) = 0
is reduced by using the substitutions

y1 = y

Here, y(1) of the Scilab dassl function is y1.

y2 = dy1/dt = dy/dt

Here, y(2) of the Scilab dassl function is y2, and ydot(1) of the Scilab dassl function is
dy1/dt, i.e. dy/dt.

y3 = dy2/dt = d^2y/dt^2

Here, y(3) of the Scilab dassl function is y3, and ydot(2) of the Scilab dassl function is
dy2/dt, i.e. d^2y/dt^2.

. . . . . .

yn = dy(n−1)/dt = d^(n−1)y/dt^(n−1)

so that

yn′ = −(an−1(t)/an(t)) yn − (an−2(t)/an(t)) y(n−1) − . . . − (a0(t)/an(t)) y1 − r(t)/an(t)
So, from the given second order differential equation

d^2y/dt^2 + y = 0
Let y1 = y = y(1). Differentiating with respect to t, we have

dy1/dt = dy/dt = ydot(1)

Now, put y2 = dy1/dt = y(2); then

ydot(1) = y(2)

and differentiating it with respect to t, we have

dy2/dt = d^2y1/dt^2 = d^2y/dt^2 = ydot(2)

As we know that

d^2y/dt^2 = −y

we get ydot(2) = −y(1). These equations must be in the form

f(t, y, ydot) = 0

so the set of first order differential algebraic equations for the reduced second order
differential equation is

ydot(1) = y(2);  ydot(2) = −y(1)
From the initial conditions, when t = 0, y = 0.5, which means y(1) = 0.5 and, from the above
set of equations, ydot(2) = −0.5. When t = 0, dy/dt = 0.25, i.e. ydot(1) = 0.25 and, from
the above set of equations, y(2) = 0.25. So the initial conditions y0 and ydot0 are
[0.5; 0.25] and [0.25; −0.5] respectively. Note that there are two components in y and
ydot, hence y0 and ydot0 are row or column matrices according to whether the equation set
is arranged in a row or a column. In Scilab, ydot is computed internally. The time
iteration is in ten steps from t = 0 to t = 1. The complete example is given below:
✞
1 --> y0=[0.5;0.25];
--> t0=0;
3 --> tt=0:0.1:1;
--> function [r,ires]=f(t,y,ydot)
5 --> r=[ydot(1)-y(2), ydot(2)+y(1)]
--> ires=0
7 --> endfunction
--> y=dassl(y0,t0,tt',f);
9 --> y'
✌
✆
✞
ans =
; ----------- y ----------- --------- ydot ---------
; t y(1) y(2) ydot(1) ydot(2)
;-----------------------------------------------------------
0. 0.5 0.25 0. 0.
0.1 0.5224602 0.1988344 0.1988336 - 0.5224595
0.2 0.5397004 0.1456821 0.145682 - 0.5397003
0.3 0.5515481 0.0910742 0.0910742 - 0.5515482
653. 3.3. DIFFERENTIAL EQUATIONS 161
0.4 0.5578849 0.0355562 0.0355562 - 0.5578849
0.5 0.5586475 - 0.0203170 - 0.0203170 - 0.5586472
0.6 0.5538283 - 0.0759871 - 0.0759871 - 0.5538279
0.7 0.5434754 - 0.1308981 - 0.1308978 - 0.5434759
0.8 0.5276923 - 0.1845012 - 0.1845012 - 0.5276925
0.9 0.5066366 - 0.2362608 - 0.2362610 - 0.5066364
1. 0.4805188 - 0.2856597 - 0.2856599 - 0.4805187
✌
✆
Verify it by substituting the time value t = 0.5 into the equation

y = 0.25 sin(t) + 0.5 cos(t)

We have

y = 0.25 sin(0.5) + 0.5 cos(0.5) = 0.55865

and dy/dt at t = 0.5 is

dy/dt |t=0.5 = −0.5 sin(t) + 0.25 cos(t) |t=0.5 = −0.0203

Both values match the t = 0.5 row of the output above. Thus the dassl method is verified
for solving differential algebraic equations.
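The reduction used above can also be cross-checked outside Scilab. The sketch below (Python, illustrative only; it is not the dassl algorithm, just a classical fourth order Runge-Kutta integration of the same first order system y1' = y2, y2' = −y1) reproduces the t = 0.5 row of the table.

```python
def rhs(y):
    # First order system for y'' + y = 0: y1' = y2, y2' = -y1
    y1, y2 = y
    return (y2, -y1)

def rk4_step(y, h):
    # One classical fourth order Runge-Kutta step of size h
    k1 = rhs(y)
    k2 = rhs((y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
    k3 = rhs((y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
    k4 = rhs((y[0] + h*k3[0], y[1] + h*k3[1]))
    return (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

y = (0.5, 0.25)          # y(0) = 0.5, y'(0) = 0.25
for _ in range(5):       # five steps of h = 0.1 reach t = 0.5
    y = rk4_step(y, 0.1)

print(y[0], y[1])        # close to the tabulated 0.5586475 and -0.0203170
```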
Solved Problem 3.1 Solve the second order differential equation y′′ − y = 0 where, at t = 0,
y = 0.5, i.e. y(0) = 0.5 and at t = 0, ẏ = 0.25, i.e. ẏ(0) = 0.25.
Solution From the given second order differential equation

d^2y/dt^2 − y = 0
Let y1 = y = y(1). Differentiating with respect to t, we have

dy1/dt = dy/dt = ydot(1)

Now, put y2 = dy1/dt = y(2); then

ydot(1) = y(2)

and differentiating it with respect to t, we have

dy2/dt = d^2y1/dt^2 = d^2y/dt^2 = ydot(2)

As we know that

d^2y/dt^2 = y

we get ydot(2) = y(1). These equations must be in the form

f(t, y, ydot) = 0

So the set of first order differential algebraic equations for the reduced second order
differential equation is

ydot(1) = y(2);  ydot(2) = y(1)
From the initial conditions, when t = 0, y = 0.5, which means y(1) = 0.5 and, from the above
set of equations, ydot(2) = 0.5. When t = 0, dy/dt = 0.25, i.e. ydot(1) = 0.25 and, from
the above set of equations, y(2) = 0.25. So the initial conditions y0 and ydot0 are
[0.5; 0.25] and [0.25; 0.5] respectively. Note that there are two components in y and
ydot, hence y0 and ydot0 are row or column matrices according to whether the equation set
is arranged in a row or a column. The time iteration is in ten steps from t = 0 to t = 1.
The complete example is given below:
✞
1 --> y0=[0.5;0.25];
--> t0=0;
3 --> tt=0:0.1:1;
--> function [r,ires]=f(t,y,ydot)
5 --> r=[ydot(1)-y(2), ydot(2)-y(1)]
--> ires=0
7 --> endfunction
--> y=dassl(y0,t0,tt',f);
9 --> y'
✌
✆
✞
ans =
; ----------- y ----------- --------- ydot ---------
; t y(1) y(2) ydot(1) ydot(2)
;-----------------------------------------------------------
0. 0.5 0.25 0. 0.
0.1 0.5275440 0.3013347 0.3013351 0.5275436
0.2 0.5603678 0.3556850 0.3556868 0.5603689
0.3 0.5988000 0.4135953 0.4135954 0.5988002
0.4 0.6432250 0.4756448 0.4756448 0.6432246
0.5 0.6940875 0.5424548 0.5424543 0.694087
0.6 0.7518968 0.6146937 0.6146937 0.7518967
0.7 0.8172313 0.6930848 0.693085 0.8172314
0.8 0.8907449 0.7784126 0.7784126 0.8907450
0.9 0.9731734 0.8715309 0.8715311 0.9731736
1. 1.0653418 0.9733718 0.9733719 1.065342
✌
✆
The solution of the given differential algebraic equation is

y(t) = 0.375 e^t + 0.125 e^(−t)

At t = 0.2, the solution is

y(0.2) = 0.375 e^(0.2) + 0.125 e^(−0.2) = 0.560367

which matches the tabulated value 0.5603678. Hence the result is verified.
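As a quick numerical cross-check (a Python sketch, not part of the book's Scilab session; the function name is illustrative), the analytic solution can be evaluated at the tabulated times:

```python
import math

def y_exact(t):
    # Analytic solution y(t) = 0.375 e^t + 0.125 e^(-t) derived above
    return 0.375 * math.exp(t) + 0.125 * math.exp(-t)

print(y_exact(0.2))   # close to the tabulated 0.5603678
print(y_exact(1.0))   # close to the tabulated 1.0653418
```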
Solved Problem 3.2 Solve the second order differential equation y′′ + 2y′ + y = 0, where, at
t = 0, y = 0.5, i.e. y(0) = 0.5 and at t = 0, ẏ = 0.25, i.e. ẏ(0) = 0.25.
Solution From the given second order differential equation

d^2y/dt^2 + 2 dy/dt + y = 0
Let y1 = y = y(1). Differentiating with respect to t, we have

dy1/dt = dy/dt = ydot(1)

Now, put y2 = dy1/dt = y(2); then

ydot(1) = y(2)

and differentiating it with respect to t, we have

dy2/dt = d^2y1/dt^2 = d^2y/dt^2 = ydot(2)

As we know that

d^2y/dt^2 = −2 dy/dt − y

we get ydot(2) = −2 ydot(1) − y(1). These equations must be in the form

f(t, y, ydot) = 0

So the set of first order differential algebraic equations for the reduced second order
differential equation is

ydot(1) = y(2);  ydot(2) = −2 ydot(1) − y(1)
From the initial conditions, when t = 0, y = 0.5, which means y(1) = 0.5 and, from the above
set of equations, ydot(2) = −1. When t = 0, dy/dt = 0.25, i.e. ydot(1) = 0.25 and, from
the above set of equations, y(2) = 0.25. So the initial conditions y0 and ydot0 are
[0.5; 0.25] and [0.25; −1] respectively. Note that there are two components in y and
ydot, hence y0 and ydot0 are row or column matrices according to whether the equation set
is arranged in a row or a column. The time iteration is in ten steps from t = 0 to t = 1.
The complete example is given below:
✞
1 --> y0=[0.5;0.25];
--> t0=0;
3 --> tt=0:0.1:1;
--> function [r,ires]=f(t,y,ydot)
5 --> r=[ydot(1)-y(2), ydot(2)+2*ydot(1)+y(1)]
--> ires=0
7 --> endfunction
--> y=dassl(y0,t0,tt',f);
9 --> y'
✌
✆
✞
ans =
; -------- y -------- ------ ydot ------
; t y(1) y(2) ydot(1) ydot(2)
;---------------------------------------------------
0.00000 0.50000 0.25000 0.25000 -1.00000
0.10000 0.52028 0.15835 0.15835 -0.83697
0.20000 0.53218 0.08187 0.08187 -0.69592
0.30000 0.53709 0.01852 0.01852 -0.57413
0.40000 0.53626 -0.03352 -0.03352 -0.46922
0.50000 0.53071 -0.07582 -0.07582 -0.37908
0.60000 0.52137 -0.10976 -0.10976 -0.30185
0.70000 0.50900 -0.13656 -0.13656 -0.23588
0.80000 0.49426 -0.15727 -0.15727 -0.17973
0.90000 0.47772 -0.17279 -0.17279 -0.13214
1.00000 0.45985 -0.18394 -0.18394 -0.09197
istate = 3
msg = integration to tout completed by stepping past TOUT
✌
✆
The solution of the given differential algebraic equation

d^2y/dt^2 + 2 dy/dt + y = 0

is

y(t) = (k1 + k2 t) e^(−t)

Substituting t = 0 and y(0) = 0.5 gives k1 = 0.5. Now differentiate this relation and
substitute t = 0 and dy/dt = 0.25 to get k2 = 0.75. The solution of the given
differential equation is then

y(t) = (0.5 + 0.75 t) e^(−t)

At t = 0.2, the solution is

y(0.2) = (0.5 + 0.75 × 0.2) e^(−0.2) = 0.53217

Similarly, at t = 0.9, the solution is

y(0.9) = (0.5 + 0.75 × 0.9) e^(−0.9) = 0.47772

Hence the result is verified.
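The same kind of cross-check works here (again a Python sketch, not Scilab; the function name is illustrative):

```python
import math

def y_exact(t):
    # Analytic solution y(t) = (0.5 + 0.75 t) e^(-t) derived above
    return (0.5 + 0.75 * t) * math.exp(-t)

print(y_exact(0.2))   # close to the tabulated 0.53218
print(y_exact(0.9))   # close to the tabulated 0.47772
```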
4 Elementary Functions
This section includes the elementary functions.
4.1 Data Structure
Data means elements arranged in structured or non-structured form. Scilab accepts each
input as a vector of integer, float, string, etc. type data. For example, matrix data in
Scilab is represented as:
✞
1 -- t = [1,2,3]
✌
✆
While non-matrix data is supplied as:
✞
1 -- t = 1,2,3
✌
✆
These two inputs differ in meaning: the first is matrix data, while the second is an
assignment of values to variables. The output of the first input is
✞
t =
1. 2. 3.
✌
✆
while the output of the second input is
✞
t =
1.
ans =
2.
ans =
3.
✌
✆
4.1.1 Comparison of Data
Data comparison in Scilab is performed elementwise. The type of data structure is auto-
matically assessed by Scilab itself; for example, if Scilab senses vector brackets, the
input is automatically treated as a vector. Elementwise matrix comparison of data is
performed using the double equals operator, as shown in the following code snippet.
✞
--> [1,2]==[1,3]
✌
✆
The output is
✞
ans =
T F
✌
✆
In the above example, the first elements of the matrices being compared are the same, hence
the first answer is true, but the comparison of the second elements (2 vs 3) of the two
matrices is false. Comparison between a vector and a scalar is performed like
✞
--> [1,2]==1
✌
✆
The output is
✞
ans =
T F
✌
✆
In this case both elements of the left hand side matrix are compared with ‘1’.
The first comparison (1 vs 1) is true while the second comparison (2 vs 1) is false.
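For comparison, the same elementwise behaviour can be mimicked in plain Python (an illustrative analogue, not Scilab code; the variable names are arbitrary):

```python
a = [1, 2]
b = [1, 3]

# Element-by-element comparison, like Scilab's [1,2]==[1,3]
print([x == y for x, y in zip(a, b)])   # [True, False]

# Vector-scalar comparison: the scalar is compared with every element
print([x == 1 for x in a])              # [True, False]
```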
4.1.2 Data Generation
In Scilab, the range operator, i.e. the colon symbol, is used to create data values
automatically. See the following code snippet, which also filters the generated values
with a boolean condition:
✞
--> a=1:1:10;
2 --> a(a<5)
✌
✆
The output is
✞
ans =
1. 2. 3. 4.
✌
✆
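An illustrative Python analogue of this generate-then-filter idiom (not Scilab code):

```python
# Analogue of a = 1:1:10 followed by a(a < 5)
a = list(range(1, 11))           # 1, 2, ..., 10
print([x for x in a if x < 5])   # [1, 2, 3, 4]
```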
4.1.3 Creation of Matrix Cell
The command cell(m) creates a blank cell matrix of order m × m, while the command
cell(m, n) creates a blank cell matrix of order m × n.
✞
--> a=cell(2)
✌
✆
The output is
✞
ans =
!{} {} !
! !
!{} {} !
✌
✆
cell() is equivalent to cell(0) and produces an empty matrix. iscell() checks whether a
variable is a cell or not.
✞
--> a=cell(2)
2 --> iscell(a)
✌
✆
✞
ans =
T
✌
✆
663. 4.1. DATA STRUCTURE 167
4.1.4 Creation of List
A list can be produced by using the list() command
✞
--> k = list()
✌
✆
The output is
✞
k =
()
✌
✆
It produces an empty list. A list of index ‘0’ is also called an empty list, and list() is
equivalent to list(0). Each list element is separated by a comma. The elements of a list
may be constants, variables, vectors, matrices, etc. See the example of a list with mixed
type elements. Indexing of non-empty list elements starts at 1.
✞
--> k = list(1, ['a' 'b'])
✌
✆
The output is
✞
k =
l(1)
1.
l(2)
!a b !
✌
✆
The first element of list ‘k’ is ‘1’ and the second element is the string matrix ‘!a b !’.
We can add another element by selecting an index and defining the list element for that index.
✞
1 --> k = list(1, ['a' 'b']);
--> k(3)='Arun'
✌
✆
✞
k =
k(1)
1.
k(2)
!a b !
k(3)
Arun
✌
✆
iscell() checks whether an element is a cell or not.
✞
1 --> k = list(1, ['a' 'b']);
--> iscell(k)
✌
✆
✞
ans =
F
✌
✆
Here ‘k’ is a list, not a cell matrix.
4.2 Bitwise operations
Bitwise operations are used to perform logic operations. The following are the most
prominently used commands.
4.2.1 Bitwise AND (bitand)
bitand is used for the logical AND operation. The binary output is true if and only if all
the binary inputs are true. The AND table for two binary inputs is given below.
X Y Z
0 0 0
0 1 0
1 0 0
1 1 1
✞
--> x=86; // 1010110 in binary
2 --> y=91; // 1011011 in binary
--> z=bitand(x,y) // 1010010 in binary
✌
✆
✞
z =
82.
✌
✆
The matrix form of the bitwise AND operation is
✞
--> x = uint8([0 1 0 1]);
2 --> y = uint8([0 0 1 1]);
--> z = bitand(x, y)
✌
✆
✞
z =
0 0 0 1
✌
✆
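The same operation in Python, as an illustrative analogue (`&` is Python's bitwise AND operator):

```python
x, y = 86, 91   # 1010110 and 1011011 in binary
z = x & y       # bitwise AND, like bitand(x, y)
print(z)        # 82, i.e. 1010010 in binary

# Elementwise AND over vectors, like the uint8 matrix form above
print([a & b for a, b in zip([0, 1, 0, 1], [0, 0, 1, 1])])   # [0, 0, 0, 1]
```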
4.2.2 Bitwise Complements (bitcmp)
bitcmp returns the complement of a binary number. The function takes two decimal numbers
as arguments: the first is the decimal number, the second is the number of bit places to
complement. The first argument is converted into a binary number; the second argument
controls the number of binary bits selected from the LSB side. The selected binary number
is complemented and converted back into a decimal number as the result.
A numerical example is
✞
--> bitcmp(10, 5)
2 --> // 10 and 5 are decimal numbers.
--> // 1010 (binary) = 10 (decimal)
4 --> //
--> // 01010 (five binary bits from LSB)
6 --> // --------------------------------
--> // 10101 (complement binary number)
8 --> // = 21 (decimal equivalent)
✆
✞
ans =
21
✌
✆
Here ‘10’ is a decimal number and ‘5’ is the number of bits to be extracted from the binary
equivalent of the decimal number ‘10’. The calculation is performed as:
1. First read the decimal number and convert it into a binary number.
2. Extract ‘n’ bits from the LSB side. If there are not enough bits, pad with ‘0’ bits.
3. Now take the complement of the extracted bits.
4. Convert this binary number into a decimal number. This is the final result.
If ‘n’ is not supplied to this function, it is calculated using the relation
k = log2(bitmax) + 1. The maximum value of ‘n’ is ‘54’, because a 54 bit binary number
corresponds to the largest exactly representable floating point integer value.
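The same bit-complement rule is easy to state in Python (an illustrative analogue; the helper name is made up):

```python
def bit_complement(x, n):
    # Complement the lowest n bits of x, like bitcmp(x, n)
    return ~x & ((1 << n) - 1)

print(bit_complement(10, 5))   # 21: 01010 -> 10101
```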
4.2.3 Get a Position Bit (bitget)
bitget returns the bit at a specific position. The first argument of this function is the
decimal number and the second argument is the index of the bit to be obtained.
✞
--> dec2bin(13)
2 --> bitget(uint8(13), 4:-1:1)
✌
✆
✞
ans =
1 1 0 1
✌
✆
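An illustrative Python analogue (the helper name is made up):

```python
def bit_get(x, i):
    # Return bit i of x, counting from 1 at the least significant bit
    return (x >> (i - 1)) & 1

print([bit_get(13, i) for i in range(4, 0, -1)])   # [1, 1, 0, 1]
```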
4.2.4 Bitwise OR (bitor)
bitor is used for the logical OR operation. The output is ‘true’ if any of the inputs is ‘true’.
The logical OR table is
X Y Z
0 0 0
0 1 1
1 0 1
1 1 1
Table 4.1: Logical OR Table.
✞
--> x=86; // 1010110 in binary
2 --> y=91; // 1011011 in binary
--> z=bitor(x,y) // 1011111 in binary
✌
✆
✞
z =
95.
✌
✆
4.2.5 Set Bit In Position (bitset)
bitset is used to set a bit at any position. The syntax of this function is
✞
--> bitset( ..
2 --> decimal number, ..
--> bit index, ..
4 --> bit value ..
--> )
✌
✆
✞
1 --> // Decimal 25 is equal to binary 11001
--> bitset(25, 5, 0) // Binary 01001
✌
✆
✞
ans =
9.
✌
✆
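The same set/clear logic in Python (an illustrative analogue; the helper name is made up):

```python
def bit_set(x, i, v):
    # Set bit i of x (1 = LSB) to v (0 or 1), like bitset(x, i, v)
    mask = 1 << (i - 1)
    return (x | mask) if v else (x & ~mask)

print(bit_set(25, 5, 0))   # 9: 11001 -> 01001
```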
4.2.6 Bitwise Exclusive OR (bitxor)
bitxor is used for the logical exclusive OR operation. The truth table of the exclusive OR
operation is
X Y Z
0 0 0
0 1 1
1 0 1
1 1 0
Table 4.2: Exclusive OR Table.
✞
--> // decimal 25 - binary 11001
2 --> // decimal 5 - binary 00101
--> // --------------------------
4 --> // Exclusive OR is - 11100
--> bitxor(25, 5) // 11100 = 28
✌
✆
✞
ans =
28.
✌
✆
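In Python the analogous operator is `^` (illustrative, not Scilab):

```python
x, y = 25, 5
print(x ^ y)                  # 28
print(format(x ^ y, '05b'))   # '11100', matching the table above
```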
4.3 Complex
Complex numbers are numbers that contain a real part and an imaginary part combined by
addition or subtraction.
4.3.1 Complex Number (complex)
The complex command is used to create a complex number from two inputs. The first
argument of this function is the real part of the complex number and the second argument
is the imaginary part of the complex number.
✞
--> c = complex([1 2 3], 4)
✌
✆
✞
c =
1. + 4.i 2. + 4.i 3. + 4.i
✌
✆
By changing the order of the arguments,
✞
--> c = complex(1, [2 3 4])
✌
✆
✞
c =
1. + 2.i 1. + 3.i 1. + 4.i
✌
✆
Remember that complex numbers are sometimes also represented as a + ib or a + jb, where
i = j = √−1. The symbol j is mostly used in electrical engineering, where complex
numbers are used to solve electrical circuits. The conjugate of a complex number is
obtained as
✞
--> c = complex(1, [2 3 4]);
2 --> conj(c)
✌
✆
✞
ans =
1. - 2.i 1. - 3.i 1. - 4.i
✌
✆
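Python's built-in complex type gives an illustrative analogue of this broadcasting behaviour (not Scilab code):

```python
# Analogue of complex([1 2 3], 4): one imaginary part combined with a
# vector of real parts
c = [complex(r, 4) for r in (1, 2, 3)]
print(c)                            # [(1+4j), (2+4j), (3+4j)]

# Conjugate of every entry, like conj(c)
print([z.conjugate() for z in c])   # [(1-4j), (2-4j), (3-4j)]
```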
4.3.2 Complex Imaginary (imag)
imag returns the imaginary part of a complex number. If A = x + iy is a complex number,
then imag(A) will return the value y. In Scilab:
✞
--> c = complex(1, 2);
2 --> [y]=imag(c)
✌
✆
✞
y =
2.
✌
✆
4.3.3 Iota Multiplication (imult)
imult multiplies its argument by ‘i’. If A = x + iy, then imult returns the value iA.
✞
--> c = complex(1, 2);
2 --> [y]=imult(c)
✌
✆
✞
y =
- 2. + i
✌
✆
4.3.4 Is Real (isreal)
isreal checks whether a variable has real or complex entries. If the entries are real it
returns ‘true’, otherwise it returns ‘false’.
✞
--> isreal(1+%s)
✌
✆
✞
ans =
T
✌
✆
4.3.5 Complex Real (real)
real returns the real part of a complex number. If A = x + iy is a complex number then
real(A) will return the value x. In Scilab
✞
--> x=86;
2 --> [y]=real(x)
✌
✆
✞
y =
86.
✌
✆
4.3.6 Complex Conjugate (conj)
conj returns the conjugate value of a complex number. If a + bi is a complex number, then
its complex conjugate is a − bi, and vice versa.
✞
--> A=[1, %i; %i, 4];
2 --> conj(A)
✌
✆
✞
ans =
1. - i
- i 4.
✌
✆
4.4 Discrete mathematics
4.4.1 Binomials (binomial)
Binomial series expansion is used to get the exponent of a base in series form. For example,
2.4^10 can be computed by using the exponent relation as (2 + 0.4)^10. The binomial
expansion of a number k = a + b with power n is given by k^n, or

(a + b)^n = nC0 a^n b^0 + nC1 a^(n−1) b^1 + . . . + nCn a^0 b^n

The number of terms in a binomial expansion is n + 1. In Scilab, the function binomial()
expands the series (a + b)^n under the condition a + b = 1, so that (a + b)^n = 1. Hence
either a or b should be between 0 and 1. The first argument of this function is either
a or b, between 0 and 1. See the example given below:
✞
1 --> binomial(0.1, 3)
✌
✆
The output of this function is the coefficients of each term. As (a + b)^n = 1, the sum of
all coefficients is always equal to one.
✞
ans =
0.729 0.243 0.027 0.001
✌
✆
S = 0.729 + 0.243 + 0.027 + 0.001 = 1

The second argument of the binomial() function is the exponent n. The number of terms in
the binomial expansion is always n + 1.
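The same term values can be reproduced directly from the binomial formula (a Python sketch; the helper name is made up):

```python
from math import comb

def binomial_terms(p, n):
    # Terms of (q + p)^n with q = 1 - p, so the terms sum to 1;
    # term k is C(n, k) * q^(n-k) * p^k
    q = 1 - p
    return [comb(n, k) * q**(n - k) * p**k for k in range(n + 1)]

terms = binomial_terms(0.1, 3)
print([round(t, 3) for t in terms])   # [0.729, 0.243, 0.027, 0.001]
```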
4.4.2 Factors (factor)
factor returns all the prime factors of the supplied number, with multiplicity. For
example, the prime factors of 12 are 2, 2, 3 and not 2 and 6, since 6 is further divisible.
✞
--> y=factor(620)
✌
✆
✞
y =
2. 2. 5. 31.
✌
✆
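Trial division reproduces the same factor list (a Python sketch; the helper name is made up):

```python
def prime_factors(n):
    # Repeatedly divide out each divisor, smallest first, so only
    # primes (with multiplicity) end up in the list
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:
        out.append(n)
    return out

print(prime_factors(620))   # [2, 2, 5, 31]
```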
4.4.3 Factorial (factorial)
The factorial function returns the factorial of a number. The factorial of a number n is
given by the following relation.
n! = n × (n − 1) × (n − 2) × . . . × 3 × 2 × 1
✞
--> y=factorial(10)
✌
✆
✞
y =
3628800.
✌
✆
4.4.4 Permutations (perms)
The perms function returns all possible permutations of the components of a vector. The
maximum possible number of permutations when taking all n items of a set is n!. Note that
each element is used exactly once in each permutation. Permutation may be read as
per-mutation: if digits are interchanged, the outcome is mutated. For example, 12 and 21
are two different numbers even though the same digits are used.
✞
--> x=[4, 7, 10];
2 --> y=perms(x)
✌
✆
✞
y =
10. 7. 4.
10. 4. 7.
7. 10. 4.
7. 4. 10.
4. 10. 7.
4. 7. 10.
✌
✆
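Python's itertools offers the same enumeration (illustrative; Scilab's perms lists the rows in a different order, but the same 3! = 6 orderings appear):

```python
from itertools import permutations

p = list(permutations([4, 7, 10]))
print(len(p))            # 6
print((10, 7, 4) in p)   # True
```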
4.4.5 Primes (primes)
primes returns all the prime numbers less than or equal to the number given by the user.
✞
1 --> y=primes(10)
✌
✆
✞
y =
2. 3. 5. 7.
✌
✆
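A sieve of Eratosthenes gives the same list (a Python sketch; the helper name is made up):

```python
def primes_upto(n):
    # Classic sieve: cross out multiples of each prime up to sqrt(n)
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

print(primes_upto(10))   # [2, 3, 5, 7]
```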
4.5 Floating point
This section includes the functions used in numeric systems.
4.5.1 Ceiling To (ceil)
ceil rounds a number up to the next larger integer. The symbolic representation of the
ceiling of a number x is ⌈x⌉. If the supplied value is a complex number z, the computation
used by this function is

ceil(z) = ceil(real(z)) + ceil(imag(z)) * i

An example of this function for positive numbers is
✞
--> ceil([1.3 1.5 1.7 2.5 3.7])
✌
✆
✞
ans =
2. 2. 2. 3. 4.
✌
✆
For negative numbers
✞
--> ceil([-1.3 -1.5 -1.7 -2.5 -3.7])
✌
✆
✞
ans =
- 1. - 1. - 1. - 2. - 3.
✌
✆
The ceil function returns the integer to the right of the given real number on the number
line. For example, the ceiling value of −2.25 is −2.
[Figure: number line from −4 to 4 with ceil(−2.25) = −2 marked]
4.5.2 Double (double)
double converts an integer value into a double-precision value; that is, it makes an
integer a floating point value. Note in the example below that 140 exceeds the int8 range
(−128 to 127) and wraps around to −116 when stored.
✞
--> x=int8([0 12 140]);
2 --> double(x)
✌
✆
✞
ans =
0 12 -116
ans =
0. 12. -116.
✌
✆
4.5.3 Number as Integer (int8)
int8 converts a number into an 8 bit long integer value. Values outside the range −128 to
127 wrap around; for example, 140 becomes −116. See the following example:
✞
--> x=double([0 12 140]), int8(x)
✌
✆
✞
x =
0. 12. -116.
ans =
0 12 - 116
✌
✆
4.5.4 Fix to Zero (fix)
fix rounds a number towards zero, i.e. to the next integer in the direction of zero. If
the supplied value is complex, the computation used is
✞
fix (real (x)) + fix (imag (x)) * i
✌
✆
Here an example is
✞
1 --> fix([1.3 1.5 1.7 2.5 3.7])
✌
✆
✞
ans =
1. 1. 1. 2. 3.
✌
✆
4.5.5 Round to Lower Integer (floor)
floor rounds a number downward, i.e. to the next lower integer. The symbolic representation
of the floor of a number x is ⌊x⌋. It is equivalent to rounding towards negative infinity.
If the supplied value is a complex number, the computation in the result is
✞
floor (real (x)) + floor (imag (x)) * i
✌
✆
Example is
✞
1 --> floor([1.3 1.5 1.7 2.5 3.7])
✌
✆
✞
ans =
1. 1. 1. 2. 3.
✌
✆
This function is similar to the greatest integer function (GIF). The floor function
returns the integer to the left of the given real number on the number line. For example,
the floor value of −2.25 is −3, and the floor value of 2.5 is 2.
[Figure: number line from −4 to 4 with floor(2.5) = 2 marked]
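Python's math module exposes the same three rounding modes, which makes the ceil/floor/fix distinction easy to compare side by side (illustrative, not Scilab):

```python
import math

vals = [1.3, 2.5, -1.3, -2.25]
print([math.ceil(v) for v in vals])    # [2, 3, -1, -2]  next larger integer
print([math.floor(v) for v in vals])   # [1, 2, -2, -3]  next lower integer
print([math.trunc(v) for v in vals])   # [1, 2, -1, -2]  toward zero, like fix
```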