Adaptive Filtering
Mustafa Khaleel
Year 2016
Contents
1. Introduction
2. Digital Filters
2.1. Linear and Nonlinear Filters
2.2. Filter Design
3. Wiener Filters
3.1. Error Measurements
3.2. The Mean-Square Error (MSE)
3.3. Mean-Square Error Surface
4. Method of Steepest Descent
5. The Least Mean Squares (LMS) Algorithm
5.1. Convergence in the Mean Sense
5.2. Convergence in the Mean-Square Sense
6. Simulation and Results
Conclusion
References
Annex

Figure 1 FIR Filter
Figure 2 IIR Filter
Figure 3 Wiener Filters
Figure 4 Error surface with two weights
Figure 5 Adaptive Filter with LMS
Figure 6 Adaptive Filter (Noise Cancellation)
Figure 7 Step-Size Small
Figure 8 Step-Size Large
Figure 9 Step-Size Acceptable
1. Introduction
Filtering is a signal processing operation whose objective is to process a signal in order to manipulate the information it contains. In other words, a filter is a device that maps its input signal to another output signal, facilitating the extraction of the desired information contained in the input signal. A digital filter is one that processes discrete-time signals represented in digital format. For time-invariant filters the internal parameters and the structure of the filter are fixed, and if the filter is linear the output signal is a linear function of the input signal. Once prescribed specifications are given, the design of time-invariant linear filters entails three basic steps, namely: the approximation of the specifications by a rational transfer function, the choice of an appropriate structure defining the algorithm, and the choice of the form of implementation for the algorithm.
An adaptive filter is required when either the specifications are unknown or the specifications cannot be satisfied by time-invariant filters. Strictly speaking, an adaptive filter is a nonlinear filter, since its characteristics depend on the input signal and consequently the homogeneity and additivity conditions are not satisfied. However, if we freeze the filter parameters at a given instant of time, most adaptive filters considered here are linear in the sense that their output signals are linear functions of their input signals.
Adaptive filters are time-varying, since their parameters are continually changing in order to meet a performance requirement. In this sense, we can interpret an adaptive filter as a filter that performs the approximation step on-line. Usually, the definition of the performance criterion requires the existence of a reference signal, which is usually hidden in the approximation step of fixed-filter design.
2. Digital Filters
The term filter is commonly used to refer to any device or system that takes a
mixture of particles/elements from its input and processes them according to
some specific rules to generate a corresponding set of particles/elements at
its output. In the context of signals and systems, particles/elements are the
frequency components of the underlying signals and, traditionally, filters are
used to retain all the frequency components that belong to a particular band
of frequencies, while rejecting the rest of them, as much as possible. In a more
general sense, the term filter may be used to refer to a system that reshapes
the frequency components of the input to generate an output signal with
some desirable features.
2.1. Linear and Nonlinear Filters
Filters can be classified as either linear or nonlinear. A linear filter is one whose output is a linear function of its input. The design of a linear filter requires assuming stationarity (statistical time-invariance) and knowing the relevant signal and noise statistics a priori. The linear filter design attempts to minimize the effect of noise on the signal by meeting a suitable statistical criterion. The classical linear Wiener filter, for example, minimizes the mean-square error (MSE) between the desired response and the actual filter response. The Wiener solution is said to be optimum in the mean-square sense, and it is truly optimum for second-order stationary noise statistics (fully described by a constant finite mean and variance). A linear adaptive filter is one whose output is a linear combination of the actual input at any moment in time between adaptation operations.
A nonlinear adaptive filter does not necessarily have a linear relationship between the input and output at any moment in time. Many different linear adaptive filter algorithms have been published in the literature. Some of the important features of these algorithms can be identified by the following terms:
1. Rate of convergence - how many iterations are needed to reach a near-optimum solution.
2. Misadjustment - a measure of the amount by which the final value of the MSE, averaged over an ensemble of adaptive filters, deviates from the MSE produced by the Wiener solution.
3. Tracking - the ability to follow statistical variations in a non-stationary environment.
4. Robustness - small disturbances from any source (internal or external) produce only small estimation errors.
5. Computational requirements - the computational operations per iteration, data storage and programming requirements.
6. Structure - the structure of information flow in the algorithm (e.g., serial, parallel), which determines the possible hardware implementations.
7. Numerical properties - the type and nature of quantization errors, numerical stability and numerical accuracy.
2.2. Filter Design
There are two common ways to implement a digital filter: non-recursive and recursive.
A non-recursive filter is also called a Finite Impulse Response (FIR) filter. It is implemented by convolution: each sample of the output is calculated by weighting samples of the input and adding them together.
Recursive filters (Infinite Impulse Response, or IIR, filters) are an extension of this, using previously calculated values from the output in addition to points from the input. Recursive filters are defined by a set of recursion coefficients.
Figure 1 FIR Filter
Figure 2 IIR Filter
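The difference between the two structures can be sketched in a few lines of Python (a minimal illustration with arbitrary coefficient values, not a filter design):

```python
def fir_filter(b, x):
    """Non-recursive (FIR) filter: each output sample is a weighted
    sum of present and past input samples only."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for i in range(len(b)):
            if n - i >= 0:
                acc += b[i] * x[n - i]
        y.append(acc)
    return y

def iir_filter(b, a, x):
    """Recursive (IIR) filter: also feeds back previously
    calculated output samples via the recursion coefficients a."""
    y = []
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(len(b)) if n - i >= 0)
        acc += sum(a[j] * y[n - j] for j in range(1, len(a)) if n - j >= 0)
        y.append(acc)
    return y

x = [1.0, 0.0, 0.0, 0.0]                 # unit impulse
print(fir_filter([0.5, 0.5], x))          # impulse response dies out: [0.5, 0.5, 0.0, 0.0]
print(iir_filter([1.0], [0.0, 0.5], x))   # feedback keeps it alive: [1.0, 0.5, 0.25, 0.125]
```

Feeding a unit impulse shows the defining difference: the FIR response dies out after as many samples as there are coefficients, while the IIR feedback sustains the response indefinitely.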
Finally we can classify digital filters by their use and by their implementation. The
use of a digital filter can be broken into three categories: time domain, frequency
domain and custom. As previously described, time domain filters are used when
the information is encoded in the shape of the signal's waveform. Time domain
filtering is used for such actions as: smoothing, DC removal, waveform shaping,
etc. In contrast, frequency domain filters are used when the information is
contained in the amplitude, frequency, and phase of the component sinusoids. The
goal of these filters is to separate one band of frequencies from another. Custom
filters are used when a special action is required by the filter, something more
elaborate than the four basic responses (high-pass, low-pass, band-pass and band-
reject).
3. Wiener Filters
Wiener formulated the continuous-time, least mean square error estimation problem in his classic work on interpolation, extrapolation and smoothing of time series (Wiener 1949). The extension of the Wiener theory from continuous time to discrete time is simple, and of more practical use for implementation on digital signal processors. A Wiener filter can be an infinite-duration impulse response (IIR) filter or a finite-duration impulse response (FIR) filter.
In general, the formulation of an IIR Wiener filter results in a set of non-linear equations, whereas the formulation of an FIR Wiener filter results in a set of linear equations and has a closed-form solution; hence FIR Wiener filters are relatively simple to compute, inherently stable and more practical. The main drawback of FIR filters compared with IIR filters is that they may need a large number of coefficients to approximate a desired response.
Figure 3 Wiener Filters
where x(n) is the input vector and w the vector of filter coefficients; that is,

x(n) = [x(n) x(n−1) … x(n−N+1)]ᵀ (1)

w = [w₀ w₁ … w_{N−1}]ᵀ (2)

and y(n) is the output signal,

y(n) = Σ_{i=0}^{N−1} wᵢ x(n−i) = w₀x(n) + w₁x(n−1) + … + w_{N−1}x(n−N+1) = wᵀx(n) (3)

d(n) is the training (desired) signal, and e(n) is the error signal, the difference between the desired signal d(n) and the output signal y(n):

e(n) = d(n) − y(n) (4)
3.1. Error Measurements
Adaptation of the filter coefficients follows a minimization procedure of a particular objective or cost function. This function is commonly defined as a norm of the error signal e(n); the most commonly employed norm is the mean-square error (MSE).
3.2. The Mean-Square Error (MSE)
From Figure 3, we define the MSE (cost function) as

ξ(n) = E[e²(n)] = E[|d(n) − y(n)|²] (5)

Substituting equation (3) into equation (5) and expanding:

ξ(n) = E[|d(n) − wᵀx(n)|²]
     = E[d²(n)] − 2wᵀE[d(n)x(n)] + wᵀE[x(n)xᵀ(n)]w

Defining

R = E[x(n)xᵀ(n)],
p = E[d(n)x(n)],

this becomes

ξ(n) = E[d²(n)] − 2wᵀp + wᵀRw (6)

where R and p are the input-signal autocorrelation matrix and the cross-correlation vector between the desired signal and the input signal, respectively.

The gradient vector of the MSE function with respect to the adaptive-filter coefficient vector is given by

∇_w ξ(n) = −2p + 2Rw (7)

The coefficient vector that minimizes the MSE cost function is obtained by equating the gradient vector to zero. Assuming that R is non-singular, ∇_w ξ(n) = 0 gives

w_o = R⁻¹p (8)

This system of equations is known as the Wiener-Hopf equations, and the filter whose weights satisfy the Wiener-Hopf equations is called a Wiener filter.

Figure 4 Error surface with two weights
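As a concrete sketch of equation (8), the Wiener solution for a two-tap filter can be computed by inverting a 2-by-2 correlation matrix directly; the values of R and p below are hypothetical, chosen only for illustration (plain Python):

```python
# Hypothetical 2-tap example: autocorrelation matrix R and
# cross-correlation vector p (illustrative values only).
R = [[2.0, 1.0],
     [1.0, 2.0]]
p = [3.0, 4.0]

# Solve R w = p via the explicit 2x2 inverse:
# R^{-1} = (1/det) [[ r11, -r01], [-r10, r00]]
det = R[0][0]*R[1][1] - R[0][1]*R[1][0]
w_o = [( R[1][1]*p[0] - R[0][1]*p[1]) / det,
       (-R[1][0]*p[0] + R[0][0]*p[1]) / det]
print(w_o)  # Wiener-Hopf solution w_o = R^{-1} p

# Check: R w_o reproduces p, so the gradient -2p + 2R w_o is zero.
residual = [R[i][0]*w_o[0] + R[i][1]*w_o[1] - p[i] for i in range(2)]
print(residual)
```

At the solution the residual vanishes, confirming that the gradient of the MSE surface is zero at w_o.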
3.3. Mean-Square Error Surface
From equation (6), the mean-square error is a quadratic function of the filter coefficient vector w and has a single minimum point. For example, for a filter with only two coefficients (w₀, w₁), the mean-square error function is a bowl-shaped surface with a single minimum point, as shown in Figure 4. At this optimal operating point the mean-square error surface has zero gradient.
4. Method of Steepest Descent
To solve the Wiener-Hopf equations (equation 8) for the tap weights of the optimum filter, we basically need to compute the inverse of an N-by-N matrix made up of the different values of the autocorrelation function. We may avoid this matrix inversion by using the method of steepest descent: starting with an initial guess w(0) for the optimum weight vector w_o, a recursive search method that may require many iterations (steps) to converge to w_o is used.
The method of steepest descent is a general scheme that uses the following steps
to search for the minimum point of any convex function of a set of parameters:
1. Start with an initial guess of the parameters whose optimum values are to
be found for minimizing the function.
2. Find the gradient of the function with respect to these parameters at the
present point.
3. Update the parameters by taking a step in the opposite direction of the
gradient vector obtained in Step 2. This corresponds to a step in the direction
of steepest descent in the cost function at the present point. Furthermore,
the size of the step taken is chosen proportional to the size of the gradient
vector.
4. Repeat Steps 2 and 3 until no further significant change is observed in the
parameters.
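The four numbered steps can be sketched generically (a toy Python illustration minimizing a simple quadratic cost with a known minimum, not the filter cost itself):

```python
def steepest_descent(grad, w0, mu, tol=1e-10, max_iter=10000):
    """Generic steepest descent: repeatedly step opposite the gradient
    until the gradient becomes negligibly small."""
    w = list(w0)                                        # Step 1: initial guess
    for _ in range(max_iter):
        g = grad(w)                                     # Step 2: gradient at present point
        w = [wi - mu * gi for wi, gi in zip(w, g)]      # Step 3: step against the gradient
        if max(abs(gi) for gi in g) < tol:              # Step 4: stop on no significant change
            break
    return w

# Minimize f(w) = (w0 - 1)^2 + (w1 + 2)^2, whose gradient is 2(w - [1, -2]).
grad = lambda w: [2*(w[0] - 1.0), 2*(w[1] + 2.0)]
w_min = steepest_descent(grad, [0.0, 0.0], mu=0.1)
print(w_min)  # approaches the minimum at [1, -2]
```

The step against the gradient is proportional to the gradient's size, so steps shrink automatically as the minimum is approached.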
To implement this procedure for the transversal filter shown in Figure 3, we recall equation (7):

∇_w ξ(n) = −2p + 2Rw (9)

where ∇ is the gradient operator, defined as the column vector

∇ = [∂/∂w₀ ∂/∂w₁ … ∂/∂w_{N−1}]ᵀ (10)

According to the above procedure, if w(n) is the tap-weight vector at the nth iteration, the following recursive equation may be used to update w(n):

w(n+1) = w(n) − μ∇_w ξ(n) (11)

where μ is a positive scalar called the step size, and ∇_w ξ(n) denotes the gradient vector evaluated at the point w = w(n). Substituting (9) into (11), we get

w(n+1) = w(n) − 2μ(Rw(n) − p) (12)
As we shall soon show, the convergence of w(n) to the optimum solution w_o and the speed at which this convergence takes place depend on the size of the step-size parameter μ. A large step size may result in divergence of this recursive equation.

To see how the recursive update w(n) converges toward w_o, we rearrange equation (12) as

w(n+1) = (I − 2μR)w(n) + 2μp (13)

where I is the N-by-N identity matrix. Next we subtract w_o from both sides of equation (13) and rearrange the result to obtain

w(n+1) − w_o = (I − 2μR)(w(n) − w_o) (14)

Define c(n) = w(n) − w_o and use the eigendecomposition R = QΛQᵀ, where Λ is a diagonal matrix consisting of the eigenvalues λ₀, λ₁, …, λ_{N−1} of R, the columns of Q contain the corresponding orthonormal eigenvectors, and I = QQᵀ. Substituting into equation (14), we get

c(n+1) = Q(I − 2μΛ)Qᵀ c(n) (15)

Pre-multiplying equation (15) by Qᵀ and defining v(n) = Qᵀc(n), we have

v(n+1) = (I − 2μΛ)v(n) (16)

with initial condition v(0) = Qᵀc(0) = Qᵀ[w(0) − w_o]. Since Λ is diagonal, each component evolves independently:

v_k(n) = (1 − 2μλ_k)ⁿ v_k(0), k = 0, 1, …, N−1 (17)

Convergence (stability) therefore requires |1 − 2μλ_k| < 1 for every k, which gives the stability condition

0 < μ < 1/λ_max

where λ_max = max{λ₀, λ₁, …, λ_{N−1}} is the largest eigenvalue of R. The left limit reflects the fact that the tap-weight correction must be in the direction opposite to the gradient vector; the right limit ensures that all the scalar tap-weight parameters in the recursive equation (17) decay exponentially as n increases.
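The stability bound can be checked numerically. In the sketch below (plain Python, with hypothetical R and p), R has eigenvalues 1 and 3; for the update w(n+1) = w(n) − 2μ(Rw(n) − p) the modes decay as (1 − 2μλ_k)ⁿ, so convergence requires μ < 1/3:

```python
def steepest_descent_wiener(R, p, mu, iters):
    """Iterate w(n+1) = w(n) - 2*mu*(R w(n) - p) from w(0) = 0."""
    w = [0.0, 0.0]
    for _ in range(iters):
        Rw = [R[0][0]*w[0] + R[0][1]*w[1],
              R[1][0]*w[0] + R[1][1]*w[1]]
        w = [w[0] - 2*mu*(Rw[0] - p[0]),
             w[1] - 2*mu*(Rw[1] - p[1])]
    return w

R = [[2.0, 1.0], [1.0, 2.0]]   # eigenvalues 1 and 3, so lambda_max = 3
p = [3.0, 4.0]                 # hypothetical cross-correlation vector

w_good = steepest_descent_wiener(R, p, mu=0.25, iters=200)  # mu < 1/3: converges to R^{-1} p
w_bad  = steepest_descent_wiener(R, p, mu=0.40, iters=200)  # mu > 1/3: one mode diverges
print(w_good)
print(max(abs(v) for v in w_bad))
```

With μ = 0.25 the mode factors are 0.5 and −0.5 and the iteration settles on the Wiener solution; with μ = 0.40 one mode factor is −1.4 and the weights grow without bound.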
5. The Least Mean Squares (LMS) Algorithm
Care has to be exercised in the selection of the step-size (learning-rate) parameter μ for the method of steepest descent to work. A further practical limitation of the method of steepest descent is that it requires knowledge of the correlation matrix R and the cross-correlation vector p. When the filter operates in an unknown environment these quantities are not available, and we are forced to use estimates in their place. The least-mean-square algorithm results from a simple and yet effective method of providing these estimates.

The least-mean-square (LMS) algorithm is based on the use of instantaneous estimates of the autocorrelation matrix R and the cross-correlation vector p. These estimates are deduced directly from the defining expectations as follows:

R = E[x(n)xᵀ(n)] ⟹ R̂ = x(n)xᵀ(n) (18)

p = E[d(n)x(n)] ⟹ p̂ = x(n)d(n) (19)

Substituting these estimates into equation (12):

w(n+1) = w(n) − 2μ[x(n)xᵀ(n)w(n) − x(n)d(n)]
       = w(n) − 2μx(n)[xᵀ(n)w(n) − d(n)]

With e(n) = d(n) − xᵀ(n)w(n) = d(n) − y(n), this becomes

w(n+1) = w(n) + 2μe(n)x(n) (20)

Equation (20) describes the least-mean-square (LMS) algorithm.
Figure 5 Adaptive Filter with LMS
Summary of the LMS algorithm
Input: tap-weight vector w(n), input vector x(n), and desired output d(n).
Output: filter output y(n) and updated tap-weight vector w(n+1).
1. Filtering: y(n) = wᵀ(n)x(n)
2. Error estimation: e(n) = d(n) − y(n)
3. Tap-weight adaptation: w(n+1) = w(n) + 2μe(n)x(n)

where x(n) = [x(n) x(n−1) … x(n−N+1)]ᵀ. This is referred to as the LMS recursion; it suggests a simple procedure for recursive adaptation of the filter coefficients after the arrival of every new input sample x(n) and its corresponding desired output sample d(n). Equations (3), (4), and (20), in this order, specify the three steps required to complete each iteration of the LMS algorithm: equation (3) is the filtering step, performed to obtain the filter output; equation (4) is used to calculate the estimation error; and equation (20) is the tap-weight adaptation recursion.
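The three steps translate directly into code. The sketch below is a plain-Python illustration on a hypothetical system-identification task (the adaptive filter learns the fixed 2-tap system d(n) = 0.7x(n) − 0.3x(n−1)); the signal and parameter values are arbitrary:

```python
import math

def lms(x, d, N, mu):
    """LMS adaptive filter; returns the final tap-weight vector w.
    Per sample: filtering y = w^T x_n, error e = d - y,
    adaptation w <- w + 2*mu*e*x_n."""
    w = [0.0] * N
    for n in range(len(x)):
        x_n = [x[n - i] if n - i >= 0 else 0.0 for i in range(N)]  # tapped delay line
        y = sum(wi * xi for wi, xi in zip(w, x_n))                 # 1. filtering
        e = d[n] - y                                               # 2. error estimation
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x_n)]       # 3. adaptation
    return w

# Hypothetical unknown system: d(n) = 0.7 x(n) - 0.3 x(n-1)
x = [math.sin(0.1 * n) + math.cos(1.7 * n) for n in range(2000)]
d = [0.7 * x[n] - 0.3 * (x[n - 1] if n > 0 else 0.0) for n in range(2000)]
w = lms(x, d, N=2, mu=0.01)
print(w)  # approaches [0.7, -0.3]
```

In a noise-cancellation setup such as the one in Section 6, d(n) would be the primary (signal-plus-noise) input and x(n) the reference noise.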
5.1. Convergence in the Mean Sense
A detailed analysis of convergence of the LMS algorithm in the mean square is
much more complicated than convergence analysis of the algorithm in the
mean. This analysis is also much more demanding in the assumptions made
concerning the behavior of the weight vector 𝒘( 𝒏) computed by the LMS
algorithm (Haykin, 1991). In this subsection we present a simplified result of
the analysis.
The LMS algorithm is convergent in the mean square if the step-size parameter μ satisfies the condition

0 < μ < 1/tr[R] (22)

where tr[R] is the trace of the correlation matrix R. From matrix algebra, we know that

tr[R] = Σ_k λ_k ≥ λ_max (23)

so condition (22) is tighter than the condition for convergence in the mean sense,

0 < μ < 1/λ_max (24)
5.2. Convergence in the Mean-Square Sense
For an LMS algorithm convergent in the mean square, the final value ξ(∞) of the mean-squared error ξ(n) is a positive constant, which represents the steady-state condition of the learning curve. In fact, ξ(∞) is always in excess of the minimum mean-squared error ξ_min realized by the corresponding Wiener filter for a stationary environment. The difference between ξ(∞) and ξ_min is called the excess mean-squared error:

ξ_ex = ξ(∞) − ξ_min (25)

The ratio of ξ_ex to ξ_min is called the misadjustment:

M = ξ_ex / ξ_min (26)

It is customary to express the misadjustment M as a percentage. Thus, for example, a misadjustment of 10 percent means that the LMS algorithm produces a mean-squared error (after completion of the learning process) that is 10 percent greater than the minimum mean-squared error ξ_min. Such a performance is ordinarily considered satisfactory.
Another important characteristic of the LMS algorithm is the settling time. There is, however, no unique definition of the settling time. We may, for example, approximate the learning curve by a single exponential with average time constant τ, and use τ as a rough measure of the settling time: the smaller the value of τ, the faster the settling time.
To a good degree of approximation, the misadjustment M of the LMS algorithm is directly proportional to the step-size parameter μ, whereas the average time constant τ is inversely proportional to μ.
We therefore have conflicting requirements: if the step-size parameter is reduced so as to reduce the misadjustment, the settling time of the LMS algorithm is increased; conversely, if the step-size parameter is increased so as to accelerate the learning process, the misadjustment is increased.

6. Simulation and Results
In this simulation scenario a signal is sent through a channel and received with noise. At the receiver we have a training (reference) signal, and we try to extract the desired signal using the LMS algorithm with a specific value of μ.

Figure 6 Adaptive Filter (Noise Cancellation)

In the first case we assign a small value, μ = 0.0002; the result is shown in Figure 7.

Figure 7 Step-Size Small
As we can see, the received signal is still noisy, so next we try another value of μ.

With a large step size (μ = 0.4), the signal cannot be recovered by the receiver, as shown in Figure 8.

Figure 8 Step-Size Large

As we can see in Figure 9, the signal is recovered when the step size is μ = 0.005.

Figure 9 Step-Size Acceptable

To recover the signal in this system, the value of the step size must be chosen carefully: too small a value (such as μ = 0.0002) gives very slow convergence, while too large a value (such as μ = 0.4) makes the system unstable. An intermediate value such as μ = 0.005 yields a stable system that converges to the desired signal.
Conclusion
Adaptive filtering involves the changing of filter parameters (coefficients) over
time, to adapt to changing signal characteristics. Over the past three decades,
digital signal processors have made great advances in increasing speed and
complexity, and reducing power consumption. As a result, real-time adaptive
filtering algorithms are quickly becoming practical and essential for the future of
communications, both wired and wireless. The LMS algorithm is by far the most widely used algorithm in adaptive filtering, for several reasons: the main features that attract the use of the LMS algorithm are its low computational complexity, proof of convergence in stationary environments, unbiased convergence in the mean to the Wiener solution, and stable behavior when implemented with finite-precision arithmetic.
This contrasts with the method of steepest descent, which requires exact knowledge of the correlation quantities R and p and updates the weight (coefficient) vector iteratively, continually seeking the bottom point of the filter's error surface.
References
(1) Adaptive Filtering: Algorithms and Practical Implementation, 3rd ed., Springer, 2008.
(2) Principles of Adaptive Filters and Self-learning Systems, Springer-Verlag London, 2005.
(3) S. V. Vaseghi, Advanced Digital Signal Processing and Noise Reduction, 2nd ed., John Wiley & Sons, 2000.
(4) A. D. Poularikas, Adaptive Filtering Fundamentals of Least Mean Squares with MATLAB, CRC Press, 2015.
(5) The Scientist and Engineer's Guide to Digital Signal Processing.
Annex
Implementation of an Adaptive Filter Using LMS (MATLAB)

% Adaptive noise cancellation: the reference input is a phase-shifted
% copy of the noise, and the LMS filter adapts to subtract it from the
% primary (signal + noise) input.
t = 1:0.025:5;
desired = 5*sin(2*3.*t);             % desired low-frequency signal
noise = 5*sin(2*50*3.*t);            % interfering noise
refer = 5*sin(2*50*3.*t + 3/20);     % reference input (phase-shifted noise)
primary = desired + noise;           % primary input: signal corrupted by noise

subplot(4,1,1); plot(t,desired); ylabel('desired');
subplot(4,1,2); plot(t,refer);   ylabel('refer');
subplot(4,1,3); plot(t,primary); ylabel('primary');

order = 2;                           % number of adaptive taps
mu = 0.005;                          % step size
n = length(primary);
delayed = zeros(1,order);            % tapped delay line for the reference
adap = zeros(1,order);               % adaptive tap weights
cancelled = zeros(1,n);              % error signal = cleaned output
for k = 1:n
    delayed(1) = refer(k);
    y = delayed*adap';                           % 1. filtering
    cancelled(k) = primary(k) - y;               % 2. error estimation
    adap = adap + 2*mu*cancelled(k).*delayed;    % 3. LMS tap-weight update
    delayed(2:order) = delayed(1:order-1);       % shift the delay line
end
subplot(4,1,4); plot(t,cancelled); ylabel('cancelled');
Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713
 
5 g –wireless technology
5 g –wireless technology5 g –wireless technology
5 g –wireless technology
 
ppt on solar tree
ppt on solar treeppt on solar tree
ppt on solar tree
 
Noise cancellation and supression
Noise cancellation and supressionNoise cancellation and supression
Noise cancellation and supression
 
zigbee full ppt
zigbee full pptzigbee full ppt
zigbee full ppt
 
State of the Word 2011
State of the Word 2011State of the Word 2011
State of the Word 2011
 
How to Make Awesome SlideShares: Tips & Tricks
How to Make Awesome SlideShares: Tips & TricksHow to Make Awesome SlideShares: Tips & Tricks
How to Make Awesome SlideShares: Tips & Tricks
 
Getting Started With SlideShare
Getting Started With SlideShareGetting Started With SlideShare
Getting Started With SlideShare
 

Similar to Adaptive filters

DSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital FiltersDSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital FiltersAmr E. Mohamed
 
Filter (signal processing)
Filter (signal processing)Filter (signal processing)
Filter (signal processing)RSARANYADEVI
 
Discrete time signal processing unit-2
Discrete time signal processing unit-2Discrete time signal processing unit-2
Discrete time signal processing unit-2selvalakshmi24
 
ASP UNIT 1 QUESTIONBANK ANSWERS.pdf
ASP UNIT 1 QUESTIONBANK ANSWERS.pdfASP UNIT 1 QUESTIONBANK ANSWERS.pdf
ASP UNIT 1 QUESTIONBANK ANSWERS.pdfKarthikRaperthi
 
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdf
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdfASP UNIT 1 QUESTIONBANK ANSWERS (1).pdf
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdfKarthikRaperthi
 
Design of Low Pass Digital FIR Filter Using Cuckoo Search Algorithm
Design of Low Pass Digital FIR Filter Using Cuckoo Search AlgorithmDesign of Low Pass Digital FIR Filter Using Cuckoo Search Algorithm
Design of Low Pass Digital FIR Filter Using Cuckoo Search AlgorithmIJERA Editor
 
Design of Area Efficient Digital FIR Filter using MAC
Design of Area Efficient Digital FIR Filter using MACDesign of Area Efficient Digital FIR Filter using MAC
Design of Area Efficient Digital FIR Filter using MACIRJET Journal
 
Method to Measure Displacement and Velocityfrom Acceleration Signals
Method to Measure Displacement and Velocityfrom Acceleration SignalsMethod to Measure Displacement and Velocityfrom Acceleration Signals
Method to Measure Displacement and Velocityfrom Acceleration SignalsIJERA Editor
 
Performance Analysis of FIR Filter using FDATool
Performance Analysis of FIR Filter using FDAToolPerformance Analysis of FIR Filter using FDATool
Performance Analysis of FIR Filter using FDAToolijtsrd
 
digital filters on open-loop system.pptx
digital filters on open-loop system.pptxdigital filters on open-loop system.pptx
digital filters on open-loop system.pptxHtetWaiYan27
 
Time domain analysis and synthesis using Pth norm filter design
Time domain analysis and synthesis using Pth norm filter designTime domain analysis and synthesis using Pth norm filter design
Time domain analysis and synthesis using Pth norm filter designCSCJournals
 
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass Filter
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass FilterIRJET-A Comparative Study of Digital FIR and IIR Band- Pass Filter
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass FilterIRJET Journal
 
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...iosrjce
 
DIGITAL FILTERS/SIGNERS types and uses.docx
DIGITAL FILTERS/SIGNERS types and uses.docxDIGITAL FILTERS/SIGNERS types and uses.docx
DIGITAL FILTERS/SIGNERS types and uses.docxgreatmike3
 
Simulation of EMI Filters Using Matlab
Simulation of EMI Filters Using MatlabSimulation of EMI Filters Using Matlab
Simulation of EMI Filters Using Matlabinventionjournals
 

Similar to Adaptive filters (20)

DSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital FiltersDSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital Filters
 
File 2
File 2File 2
File 2
 
Filter (signal processing)
Filter (signal processing)Filter (signal processing)
Filter (signal processing)
 
Discrete time signal processing unit-2
Discrete time signal processing unit-2Discrete time signal processing unit-2
Discrete time signal processing unit-2
 
ASP UNIT 1 QUESTIONBANK ANSWERS.pdf
ASP UNIT 1 QUESTIONBANK ANSWERS.pdfASP UNIT 1 QUESTIONBANK ANSWERS.pdf
ASP UNIT 1 QUESTIONBANK ANSWERS.pdf
 
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdf
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdfASP UNIT 1 QUESTIONBANK ANSWERS (1).pdf
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdf
 
Z4301132136
Z4301132136Z4301132136
Z4301132136
 
Design of Low Pass Digital FIR Filter Using Cuckoo Search Algorithm
Design of Low Pass Digital FIR Filter Using Cuckoo Search AlgorithmDesign of Low Pass Digital FIR Filter Using Cuckoo Search Algorithm
Design of Low Pass Digital FIR Filter Using Cuckoo Search Algorithm
 
Design of Area Efficient Digital FIR Filter using MAC
Design of Area Efficient Digital FIR Filter using MACDesign of Area Efficient Digital FIR Filter using MAC
Design of Area Efficient Digital FIR Filter using MAC
 
Method to Measure Displacement and Velocityfrom Acceleration Signals
Method to Measure Displacement and Velocityfrom Acceleration SignalsMethod to Measure Displacement and Velocityfrom Acceleration Signals
Method to Measure Displacement and Velocityfrom Acceleration Signals
 
C010431520
C010431520C010431520
C010431520
 
Performance Analysis of FIR Filter using FDATool
Performance Analysis of FIR Filter using FDAToolPerformance Analysis of FIR Filter using FDATool
Performance Analysis of FIR Filter using FDATool
 
E0162736
E0162736E0162736
E0162736
 
digital filters on open-loop system.pptx
digital filters on open-loop system.pptxdigital filters on open-loop system.pptx
digital filters on open-loop system.pptx
 
Time domain analysis and synthesis using Pth norm filter design
Time domain analysis and synthesis using Pth norm filter designTime domain analysis and synthesis using Pth norm filter design
Time domain analysis and synthesis using Pth norm filter design
 
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass Filter
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass FilterIRJET-A Comparative Study of Digital FIR and IIR Band- Pass Filter
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass Filter
 
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...
 
D017632228
D017632228D017632228
D017632228
 
DIGITAL FILTERS/SIGNERS types and uses.docx
DIGITAL FILTERS/SIGNERS types and uses.docxDIGITAL FILTERS/SIGNERS types and uses.docx
DIGITAL FILTERS/SIGNERS types and uses.docx
 
Simulation of EMI Filters Using Matlab
Simulation of EMI Filters Using MatlabSimulation of EMI Filters Using Matlab
Simulation of EMI Filters Using Matlab
 

More from Mustafa Khaleel

More from Mustafa Khaleel (7)

LTE-U
LTE-ULTE-U
LTE-U
 
Massive mimo
Massive mimoMassive mimo
Massive mimo
 
IPsec vpn topology over GRE tunnels
IPsec vpn topology over GRE tunnelsIPsec vpn topology over GRE tunnels
IPsec vpn topology over GRE tunnels
 
WiMAX implementation in ns3
WiMAX implementation in ns3WiMAX implementation in ns3
WiMAX implementation in ns3
 
Turbocode
TurbocodeTurbocode
Turbocode
 
Mm wave
Mm waveMm wave
Mm wave
 
Ultra wideband technology (UWB)
Ultra wideband technology (UWB)Ultra wideband technology (UWB)
Ultra wideband technology (UWB)
 

Recently uploaded

Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdfKamal Acharya
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdfankushspencer015
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Christo Ananth
 
UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduitsrknatarajan
 
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756dollysharma2066
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Christo Ananth
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptDineshKumar4165
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfJiananWang21
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfKamal Acharya
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...Call Girls in Nagpur High Profile
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTbhaskargani46
 

Recently uploaded (20)

Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdf
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdf
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
 
UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduits
 
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
 
Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.ppt
 
NFPA 5000 2024 standard .
NFPA 5000 2024 standard                                  .NFPA 5000 2024 standard                                  .
NFPA 5000 2024 standard .
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdf
 
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
 
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort ServiceCall Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
 
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
 
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPT
 

Adaptive filters

Contents

1. Introduction
2. Digital Filters
   2.1. Linear and Nonlinear Filter
   2.2. Filter Design
3. Wiener Filters
   3.1. Error Measurements
   3.2. The Mean-Square Error (MSE)
   3.3. Mean Square Error Surface
4. Method of Steepest Descent
5. The Least Mean Squares (LMS) Algorithm
   5.1. Convergence in the Mean Sense
   5.2. Convergence in the Mean-Square Sense
6. Simulation and Results
Conclusion
Reference
Annex

List of Figures

Figure 1 FIR Filter
Figure 2 IIR Filter
Figure 3 Wiener Filters
Figure 4 Error surface with two weights
Figure 5 Adaptive Filter with LMS
Figure 6 Adaptive Filter (Noise Cancellation)
Figure 7 Step-Size Small
Figure 8 Step-Size Large
Figure 9 Step-Size Acceptable
1. Introduction

Filtering is a signal processing operation whose objective is to manipulate the information contained in a signal. In other words, a filter is a device that maps its input signal to an output signal, facilitating the extraction of the desired information contained in the input signal. A digital filter is one that processes discrete-time signals represented in digital format.

For time-invariant filters the internal parameters and the structure of the filter are fixed, and if the filter is linear the output signal is a linear function of the input signal. Once prescribed specifications are given, the design of a time-invariant linear filter entails three basic steps: the approximation of the specifications by a rational transfer function, the choice of an appropriate structure defining the algorithm, and the choice of the form of implementation of the algorithm.

An adaptive filter is required when either the specifications are unknown or the specifications cannot be satisfied by a time-invariant filter. Strictly speaking, an adaptive filter is a nonlinear filter, since its characteristics depend on the input signal, and consequently the homogeneity and additivity conditions are not satisfied. However, if we freeze the filter parameters at a given instant of time, most adaptive filters considered here are linear in the sense that their output signals are linear functions of their input signals. Adaptive filters are time-varying, since their parameters are continually changed to meet a performance requirement. In this sense, an adaptive filter can be interpreted as a filter that performs the approximation step on-line. Usually, the definition of the performance criterion requires the existence of a reference signal, which is hidden in the approximation step of fixed-filter design.
2. Digital Filters

The term filter is commonly used to refer to any device or system that takes a mixture of particles or elements at its input and processes them according to specific rules to generate a corresponding set of particles or elements at its output. In the context of signals and systems, the particles or elements are the frequency components of the underlying signals, and, traditionally, filters are used to retain all frequency components that belong to a particular band while rejecting the rest as much as possible. In a more general sense, the term filter may refer to a system that reshapes the frequency components of the input to generate an output signal with some desirable features.

2.1. Linear and Nonlinear Filters

Filters can be classified as either linear or nonlinear. A linear filter is one whose output is a linear function of the input. The design of a linear filter requires assuming stationarity (statistical time-invariance) and knowing the relevant signal and noise statistics a priori. Linear filter design attempts to minimize the effect of noise on the signal by meeting a suitable statistical criterion. The classical linear Wiener filter, for example, minimizes the mean square error (MSE) between the desired response and the actual filter response. The Wiener solution is optimum in the mean-square sense, and it is truly optimum for second-order stationary noise statistics (fully described by a constant finite mean and variance).

A linear adaptive filter is one whose output is a linear combination of the input at any moment in time between adaptation operations. A nonlinear adaptive filter does not necessarily have a linear relationship between input and output at any moment in time. Many different linear adaptive filter algorithms have been published in the literature.
Some of the important features of these algorithms can be identified by the following terms:
1. Rate of convergence: how many iterations are needed to reach a near-optimum solution.
2. Misadjustment: a measure of the amount by which the final value of the MSE, averaged over an ensemble of adaptive filters, deviates from the MSE produced by the Wiener solution.
3. Tracking: the ability to follow statistical variations in a non-stationary environment.
4. Robustness: small disturbances from any source (internal or external) produce only small estimation errors.
5. Computational requirements: the computational operations per iteration, data storage, and programming requirements.
6. Structure: the structure of information flow in the algorithm (e.g., serial, parallel), which determines the possible hardware implementations.
7. Numerical properties: the type and nature of quantization errors, numerical stability, and numerical accuracy.

2.2. Filter Design

There are two common ways to implement a digital filter: non-recursive and recursive. A non-recursive filter is also called a finite impulse response (FIR) filter. It is implemented by convolution: each sample of the output is calculated by weighting samples of the input and adding them together. Recursive filters (infinite impulse response, or IIR, filters) are an extension of this, using previously calculated values from the output in addition to points from the input. Recursive filters are defined by a set of recursion coefficients.

Figure 1 FIR Filter
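The two structures above can be sketched directly from their difference equations. This is a minimal illustration, not code from the report; the coefficient values are made up for the example.

```python
def fir_filter(x, b):
    """Non-recursive (FIR) filter: each output sample is a weighted
    sum of the current and previous input samples (convolution)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if n - k >= 0:
                acc += bk * x[n - k]
        y.append(acc)
    return y


def iir_filter(x, b, a):
    """Recursive (IIR) filter: also feeds back previously computed
    outputs, y(n) = sum_k b_k x(n-k) - sum_k a_k y(n-k)."""
    y = []
    for n in range(len(x)):
        acc = sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0)
        acc -= sum(ak * y[n - k] for k, ak in enumerate(a, start=1) if n - k >= 0)
        y.append(acc)
    return y


# A 3-tap moving average (FIR) smooths a step input:
x = [0, 0, 1, 1, 1, 1]
print(fir_filter(x, [1 / 3, 1 / 3, 1 / 3]))  # ramps from 0 up to 1.0
```

Note how the FIR output depends only on a finite window of past inputs, while the IIR output at time n depends, through the feedback terms, on the entire past of the signal.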
Figure 2 IIR Filter

Finally, digital filters can be classified by their use and by their implementation. The use of a digital filter falls into three categories: time domain, frequency domain, and custom. Time domain filters are used when the information is encoded in the shape of the signal's waveform; time domain filtering is used for actions such as smoothing, DC removal, and waveform shaping. In contrast, frequency domain filters are used when the information is contained in the amplitude, frequency, and phase of the component sinusoids; the goal of these filters is to separate one band of frequencies from another. Custom filters are used when a special action is required of the filter, something more elaborate than the four basic responses (high-pass, low-pass, band-pass and band-reject).

3. Wiener Filters

Wiener formulated the continuous-time least mean square error estimation problem in his classic work on interpolation, extrapolation and smoothing of time series (Wiener 1949). The extension of the Wiener theory from continuous time to discrete time is simple, and of more practical use for implementation on digital signal processors. A Wiener filter can be an infinite-duration impulse response (IIR) filter or a finite-duration impulse
response (FIR) filter. In general, the formulation of an IIR Wiener filter results in a set of non-linear equations, whereas the formulation of an FIR Wiener filter results in a set of linear equations and has a closed-form solution; FIR filters are relatively simple to compute, inherently stable, and more practical. The main drawback of FIR filters compared with IIR filters is that they may need a large number of coefficients to approximate a desired response.

Figure 3 Wiener Filters

In Figure 3, x(n) is the tap-input vector and w is the vector of the N filter coefficients:

x(n) = [x(n) x(n-1) ... x(n-N+1)]^T    (1)

w = [w_0 w_1 ... w_(N-1)]^T    (2)

and y(n) is the output signal,

y(n) = sum_(i=0)^(N-1) w_i x(n-i) = w_0 x(n) + w_1 x(n-1) + ... + w_(N-1) x(n-N+1) = w^T x(n)    (3)

d(n) is the training or desired signal, and e(n) is the error signal (the difference between the output signal y(n) and the desired signal d(n)):

e(n) = d(n) - y(n)    (4)
3.1. Error Measurements

Adaptation of the filter coefficients follows a minimization procedure of a particular objective or cost function. This function is commonly defined as a norm of the error signal e(n); the most commonly employed norm is the mean-square error (MSE).

3.2. The Mean-Square Error (MSE)

From Figure 3, we define the MSE (cost function) as

xi(n) = E[e^2(n)] = E[|d(n) - y(n)|^2]    (5)

Substituting Eq. (3) into Eq. (5) and expanding,

xi(n) = E[|d(n) - w^T x(n)|^2]
      = E[d^2(n)] - 2 w^T E[d(n) x(n)] + w^T E[x(n) x^T(n)] w

Defining R = E[x(n) x^T(n)] and p = E[d(n) x(n)], where R is the input-signal correlation matrix and p is the cross-correlation vector between the reference (desired) signal and the input signal, this becomes

xi(n) = E[d^2(n)] - 2 w^T p + w^T R w    (6)

The gradient vector of the MSE function with respect to the adaptive filter coefficient vector is

grad_w xi(n) = -2p + 2Rw    (7)

The coefficient vector that minimizes the MSE cost function is obtained by equating the gradient vector to zero, grad_w xi(n) = 0. Assuming that R is non-singular, one gets
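The quadratic form of Eq. (6) can be checked numerically against a direct average of e^2(n). This is an illustrative sketch; the 2-tap system generating d(n), the signal length, and the test weight vector are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(50000)                         # white input x(n)
x_del = np.concatenate(([0.0], x[:-1]))                # x(n-1)
d = 0.7 * x + 0.2 * x_del                              # desired signal from a 2-tap system

X = np.stack([x, x_del], axis=1)                       # rows are x(n) = [x(n), x(n-1)]
R = X.T @ X / len(x)                                   # estimate of E[x(n) x^T(n)]
p = X.T @ d / len(x)                                   # estimate of E[d(n) x(n)]
Ed2 = np.mean(d ** 2)                                  # estimate of E[d^2(n)]

w = np.array([0.5, -0.1])                              # an arbitrary weight vector
xi_form = Ed2 - 2 * w @ p + w @ R @ w                  # quadratic form of Eq. (6)
xi_direct = np.mean((d - X @ w) ** 2)                  # E[e^2(n)] by direct averaging
print(np.isclose(xi_form, xi_direct))                  # True
```

With sample averages used consistently for E[.], the two quantities agree exactly, which is just Eq. (6) rederived over the data.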
w_o = R^(-1) p    (8)

This system of equations is known as the Wiener-Hopf equations, and the filter whose weights satisfy the Wiener-Hopf equations is called a Wiener filter.

3.3. Mean Square Error Surface

From Eq. (6), the mean square error is a quadratic function of the filter coefficient vector w and has a single minimum point. For example, for a filter with only two coefficients (w_0, w_1), the mean square error function is a bowl-shaped surface with a single minimum point, as shown in Figure 4. At this optimal operating point the mean square error surface has zero gradient.

Figure 4 Error surface with two weights
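The Wiener solution of Eq. (8) can be computed numerically by estimating R and p from data. In this sketch the "unknown" system h, the filter length, and the signal length are illustrative assumptions; with a white input and d(n) generated exactly by h, solving the Wiener-Hopf equations recovers h.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                          # number of filter taps
h = np.array([0.8, -0.4, 0.2, 0.1])            # "unknown" system to identify
x = rng.standard_normal(10000)                 # white input signal
d = np.convolve(x, h)[:len(x)]                 # desired signal d(n)

# Build the tap-input vectors x(n) = [x(n), ..., x(n-N+1)]^T (Eq. 1),
# with zeros before the start of the signal.
X = np.array([[x[n - i] if n - i >= 0 else 0.0 for i in range(N)]
              for n in range(len(x))])

R = X.T @ X / len(x)                           # estimate of E[x(n) x^T(n)]
p = X.T @ d / len(x)                           # estimate of E[d(n) x(n)]
w_o = np.linalg.solve(R, p)                    # Wiener-Hopf solution (Eq. 8)
print(np.round(w_o, 2))                        # recovers h
```

Solving R w_o = p with `np.linalg.solve` avoids forming R^(-1) explicitly, which is both cheaper and numerically safer than inverting R.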
4. Method of Steepest Descent

To solve the Wiener-Hopf equations (Eq. 8) for the tap weights of the optimum filter, we basically need to compute the inverse of an N-by-N matrix made up of the different values of the autocorrelation function. We may avoid the need for this matrix inversion by using the method of steepest descent: starting with an initial guess w(0) for the optimum weight vector w_o, a recursive search method is used that may require many iterations (steps) to converge to w_o.

The method of steepest descent is a general scheme that uses the following steps to search for the minimum point of any convex function of a set of parameters:
1. Start with an initial guess of the parameters whose optimum values are to be found for minimizing the function.
2. Find the gradient of the function with respect to these parameters at the present point.
3. Update the parameters by taking a step in the opposite direction of the gradient vector obtained in Step 2. This corresponds to a step in the direction of steepest descent of the cost function at the present point; the size of the step is chosen proportional to the size of the gradient vector.
4. Repeat Steps 2 and 3 until no further significant change is observed in the parameters.

To implement this procedure for the transversal filter shown in Figure 3, we recall Eq. (7):

grad_w xi(n) = -2p + 2Rw    (9)

where grad is the gradient operator, defined as the column vector

grad = [d/dw_0 d/dw_1 ... d/dw_(N-1)]^T    (10)

According to the above procedure, if w(n) is the tap-weight vector at the n-th iteration, the following recursive equation may be used to update w(n):

w(n+1) = w(n) - mu * grad_w xi(n)    (11)

where the positive scalar mu is called the step size, and grad_w xi(n) denotes the gradient vector evaluated at the point w = w(n). Substituting Eq. (9) into Eq. (11), we get

w(n+1) = w(n) - 2 mu (R w(n) - p)    (12)
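The recursion of Eq. (12) is easy to run on a small example. The 2-by-2 matrix R, the vector p, and the step size below are made-up illustrative values; the iterate converges to the same w_o that Eq. (8) gives in closed form.

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])              # input correlation matrix (illustrative)
p = np.array([0.5, 0.25])               # cross-correlation vector (illustrative)
w_o = np.linalg.solve(R, p)             # optimum (Wiener) weights, for comparison

mu = 0.1                                # step size, inside the stable range
w = np.zeros(2)                         # initial guess w(0)
for _ in range(200):
    grad = 2 * (R @ w - p)              # gradient of the MSE (Eq. 9)
    w = w - mu * grad                   # step opposite the gradient (Eq. 12)

print(np.round(w, 4), np.round(w_o, 4))  # the two agree
```

For this R the eigenvalues are 0.5 and 1.5, so mu = 0.1 keeps every error mode factor (1 - 2*mu*lambda_k) inside the unit interval and the iteration converges geometrically.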
As we shall soon show, the convergence of w(n) to the optimum solution w_o, and the speed at which this convergence takes place, depend on the size of the step-size parameter μ. A large step size may cause this recursive equation to diverge. To see how the recursive update converges toward w_o, we rearrange Eq. (12) as

w(n+1) = (I - 2μR) w(n) + 2μp                                 (13)

where I is the N-by-N identity matrix. Next we subtract w_o from both sides of Eq. (13) and rearrange the result to obtain

w(n+1) - w_o = (I - 2μR)(w(n) - w_o)                          (14)

Defining the weight-error vector c(n) = w(n) - w_o and using the eigendecomposition R = QΛQ^T, where Λ is a diagonal matrix consisting of the eigenvalues λ_0, λ_1, ..., λ_{N-1} of R, the columns of Q contain the corresponding orthonormal eigenvectors, and I = QQ^T, substitution into Eq. (14) gives

c(n+1) = Q(I - 2μΛ)Q^T c(n)                                   (15)

Pre-multiplying Eq. (15) by Q^T, we have

Q^T c(n+1) = (I - 2μΛ) Q^T c(n)                               (16)

With the notation v(n) = Q^T c(n), this becomes

v(n+1) = (I - 2μΛ) v(n)                                       (17)

with initial condition v(0) = Q^T c(0) = Q^T [w(0) - w_o]. Since Λ is diagonal, each element of v(n) evolves independently:

v_k(n) = (1 - 2μλ_k)^n v_k(0),   k = 0, 1, ..., N-1

Convergence (stability) therefore requires |1 - 2μλ_k| < 1 for every k, which gives the stability condition

0 < μ < 1/λ_max

where λ_max = max{λ_0, λ_1, ..., λ_{N-1}} is the largest eigenvalue of R. The left limit reflects the fact that the tap-weight correction must be in the opposite direction of the gradient vector; the right limit ensures that all the scalar tap-weight parameters in the recursive equation (17) decay exponentially as n increases.
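The stability bound can be checked numerically: with the update w(n+1) = w(n) - 2μ(Rw(n) - p), each mode decays as (1 - 2μλ_k)^n, so the iteration converges only for 0 < μ < 1/λ_max. A sketch with an assumed diagonal R (eigenvalues 2.0 and 0.5, so 1/λ_max = 0.5):

```python
import numpy as np

def steepest_descent(R, p, mu, iters=300):
    # Recursion of Eq. (12) from a zero initial guess
    w = np.zeros(len(p))
    for _ in range(iters):
        w = w - 2 * mu * (R @ w - p)
    return w

# Assumed toy values: R is diagonal so its eigenvalues are visible directly
R = np.array([[2.0, 0.0],
              [0.0, 0.5]])
p = np.array([1.0, 1.0])
# 1/lambda_max = 0.5 for this R

w_ok = steepest_descent(R, p, mu=0.4)    # inside the bound: converges to R^{-1} p
w_bad = steepest_descent(R, p, mu=0.6)   # outside: |1 - 2*mu*lambda_max| > 1, diverges

print(w_ok)                              # close to [0.5, 2.0]
print(np.abs(w_bad).max())               # very large: the iteration has blown up
```

With μ = 0.6 the fastest mode has factor 1 - 2(0.6)(2.0) = -1.4, whose powers grow without bound, which is exactly the divergence the text warns about.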
5. The Least Mean Squares (LMS) Algorithm

In any event, care has to be exercised in selecting the step-size parameter μ for the method of steepest descent to work. A further practical limitation of the method is that it requires knowledge of the correlation functions that define R and p. When the filter operates in an unknown environment, these correlations are not available, and we are forced to use estimates in their place. The least-mean-square (LMS) algorithm results from a simple yet effective way of providing these estimates: it is based on instantaneous estimates of the autocorrelation matrix R and the cross-correlation vector p, deduced directly from their defining equations as follows:

R = E[x(n) x^T(n)]  ⟹  R' = x(n) x^T(n)                       (18)
p = E[d(n) x(n)]    ⟹  p' = x(n) d(n)                         (19)

Substituting these estimates into Eq. (12):

w(n+1) = w(n) - 2μ(x(n) x^T(n) w(n) - x(n) d(n))
       = w(n) - 2μ x(n)(x^T(n) w(n) - d(n))

With e'(n) = x^T(n) w(n) - d(n), this becomes

w(n+1) = w(n) - 2μ x(n) e'(n)                                 (20)

Equation (20) describes the least-mean-square (LMS) algorithm.

Figure 5 Adaptive filter with LMS
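The recursion of Eq. (20) needs only the current input vector and desired sample, no statistics. A minimal sketch in a system-identification setting; the unknown 2-tap system (taps 0.7 and -0.2) and the step size are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([0.7, -0.2])        # assumed unknown system to be identified
N, taps = 20_000, 2
x = rng.standard_normal(N)            # white input
mu = 0.01                             # step size

w = np.zeros(taps)                    # adaptive tap weights
xbuf = np.zeros(taps)                 # x(n) = [x(n), x(n-1)]^T
for n in range(N):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = x[n]                    # shift the tap-delay line
    d = true_w @ xbuf                 # desired response from the unknown system
    e = d - w @ xbuf                  # estimation error e(n) = d(n) - y(n)
    w = w + 2 * mu * e * xbuf         # LMS weight update (Eq. 20 with e = -e')

print(w)                              # close to the assumed taps [0.7, -0.2]
```

Each iteration costs only O(N) operations per tap update, which is the low computational complexity the conclusion credits LMS with.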
Summary of the LMS algorithm

Input: tap-weight vector w(n), input vector x(n), and desired output d(n).
Output: filter output y(n) and updated tap-weight vector w(n+1).
1. Filtering: y(n) = w^T(n) x(n)
2. Error estimation: e(n) = d(n) - y(n)
3. Tap-weight adaptation: w(n+1) = w(n) + 2μ e(n) x(n)

where x(n) = [x(n) x(n-1) ... x(n-N+1)]^T. Since e(n) = -e'(n), Eq. (20) may be written as

w(n+1) = w(n) + 2μ e(n) x(n)                                  (21)

This is referred to as the LMS recursion. It suggests a simple procedure for recursive adaptation of the filter coefficients after the arrival of every new input sample x(n) and its corresponding desired output sample d(n). Equations (3), (4), and (21), in this order, specify the three steps required to complete each iteration of the LMS algorithm: Equation (3) is the filtering step, performed to obtain the filter output; Equation (4) calculates the estimation error; and Equation (21) is the tap-weight adaptation recursion.

5.1. Convergence in the Mean Sense

A detailed analysis of convergence of the LMS algorithm in the mean square is much more complicated than convergence analysis of the algorithm in the mean. It is also much more demanding in the assumptions made concerning the behavior of the weight vector w(n) computed by the LMS algorithm (Haykin, 1991). In this subsection we present a simplified result of the analysis. The LMS algorithm is convergent in the mean square if the step-size parameter μ satisfies the condition

0 < μ < 1/tr[R]                                               (22)
where tr[R] is the trace of the correlation matrix R. From matrix algebra, we know that

tr[R] = Σ_k λ_k ≥ λ_max                                       (23)

while the condition for convergence in the mean sense is the same bound obtained for steepest descent,

0 < μ < 1/λ_max                                               (24)

5.2. Convergence in the Mean Square Sense

For an LMS algorithm convergent in the mean square, the final value ξ(∞) of the mean-squared error ξ(n) is a positive constant, which represents the steady-state condition of the learning curve. In fact, ξ(∞) is always in excess of the minimum mean-squared error ξ_min realized by the corresponding Wiener filter for a stationary environment. The difference between ξ(∞) and ξ_min is called the excess mean-squared error:

ξ_ex = ξ(∞) - ξ_min                                           (25)

and, since tr[R] ≥ λ_max, convergence in the mean square sense requires the more conservative condition

0 < μ < 1/tr[R]                                               (26)

The ratio of ξ_ex to ξ_min is called the misadjustment:

M = ξ_ex / ξ_min                                              (27)

It is customary to express the misadjustment M as a percentage. Thus, for example, a misadjustment of 10 percent means that the LMS algorithm produces a mean-squared error (after completion of the learning process) that is 10 percent greater than the minimum mean-squared error ξ_min. Such a performance is ordinarily considered satisfactory. Another important characteristic of the LMS algorithm is the settling time; however, there is no unique definition for it. We may, for example, approximate the learning curve by a single exponential with average time constant τ and use τ as a rough measure of the settling time: the smaller the value of τ, the faster the settling. To a good degree of approximation, the misadjustment M of the LMS algorithm is directly proportional to the step-size parameter μ, whereas the average time constant τ is inversely proportional to μ.
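The excess MSE and misadjustment of Eqs. (25) and (27) can be measured empirically by running LMS past convergence and comparing the steady-state squared error with ξ_min. The plant, noise level, and step size below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([0.5, 0.3])          # assumed unknown 2-tap system
N = 100_000
x = rng.standard_normal(N)             # white unit-variance input, so tr[R] = 2
v = 0.1 * rng.standard_normal(N)       # measurement noise: xi_min = 0.01

mu, taps = 0.02, 2                     # step size inside 0 < mu < 1/tr[R]
w = np.zeros(taps)
xbuf = np.zeros(taps)
sq_err = np.empty(N)
for n in range(N):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = x[n]
    d = true_w @ xbuf + v[n]           # desired output plus noise
    e = d - w @ xbuf
    w = w + 2 * mu * e * xbuf          # LMS update, Eq. (21)
    sq_err[n] = e * e

xi_min = 0.01                          # noise variance = minimum MSE
xi_inf = sq_err[N // 2:].mean()        # steady-state MSE, estimated from the tail
M = (xi_inf - xi_min) / xi_min         # misadjustment, Eq. (27)
print(xi_inf, M)
```

The measured ξ(∞) comes out slightly above ξ_min, and M grows if μ is increased, matching the proportionality between M and μ stated above.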
We therefore have conflicting requirements: if the step-size parameter is reduced so as to reduce the misadjustment, then the settling time of the LMS algorithm increases; conversely, if the step-size parameter is increased so as to accelerate the learning process, then the misadjustment increases.

6. Simulation and Results

In this simulation, a signal is sent through a channel and received with additive noise. At the receiver we have a training (reference) signal, and we try to extract the desired signal using the LMS algorithm with a specific value of μ.

Figure 6 Adaptive filter (noise cancellation)

In the first case we assign a small value, μ = 0.0002; the result is shown in Figure 7 (small step size):
As we can see, the received signal is still noisy, so we try other values of μ. With a large step size (μ = 0.4), shown in Figure 8, the receiver cannot recover the signal at all. As Figure 9 shows, the signal is recovered when the step size is μ = 0.005. To recover the signal in this system, the step size must be chosen carefully: not too small (slow convergence) and not too large (instability). In this experiment it must lie in the range 0.0002 < μ < 0.4 for the system to be both stable and reasonably fast.

Figure 8 Large step size
Figure 9 Acceptable step size
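The noise-cancellation experiment above can be reproduced outside MATLAB. The following is a Python sketch of the same setup (signals and parameters taken from the annex): the adaptive filter shapes the phase-shifted reference so that subtracting it from the primary input leaves the desired sinusoid.

```python
import numpy as np

t = np.arange(1, 5.001, 0.025)
desired = 5 * np.sin(2 * 3 * t)               # desired low-frequency signal
noise = 5 * np.sin(2 * 50 * 3 * t)            # interfering sinusoid
refer = 5 * np.sin(2 * 50 * 3 * t + 3 / 20)   # phase-shifted reference of the noise
primary = desired + noise                     # received (corrupted) signal

order, mu = 2, 0.005                          # acceptable step size from Figure 9
w = np.zeros(order)
delayed = np.zeros(order)
cancelled = np.zeros(len(t))
for k in range(len(t)):
    delayed = np.roll(delayed, 1)
    delayed[0] = refer[k]                     # newest reference sample
    y = w @ delayed                           # filtered reference = noise estimate
    cancelled[k] = primary[k] - y             # error signal = recovered signal
    w = w + 2 * mu * cancelled[k] * delayed   # LMS update, Eq. (21)

# residual power between recovered and true desired signal, after convergence
half = len(t) // 2
residual = np.mean((cancelled[half:] - desired[half:]) ** 2)
print(residual)
```

With μ = 0.0002 the weights barely move over the 160-sample record, and with μ = 0.4 the loop diverges, reproducing the behavior of Figures 7 and 8.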
Conclusion

Adaptive filtering involves changing filter parameters (coefficients) over time to adapt to changing signal characteristics. Over the past three decades, digital signal processors have made great advances in speed and complexity while reducing power consumption. As a result, real-time adaptive filtering algorithms are quickly becoming practical and essential for the future of communications, both wired and wireless. The LMS algorithm is by far the most widely used algorithm in adaptive filtering, for several reasons: the main features that attracted its use are low computational complexity, proof of convergence in a stationary environment, unbiased convergence in the mean to the Wiener solution, and stable behavior when implemented with finite-precision arithmetic. By contrast, the method of steepest descent updates the weights iteratively using the exact statistics of the signals, continually seeking the bottom of the error surface of the filter, and is therefore not directly applicable when those statistics are unknown.

References

(1) P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, Third Edition, Springer, 2008.
(2) A. Zaknich, Principles of Adaptive Filters and Self-learning Systems, Springer-Verlag London, 2005.
(3) S. V. Vaseghi, Advanced Digital Signal Processing and Noise Reduction, Second Edition, John Wiley & Sons, 2000.
(4) A. D. Poularikas, Adaptive Filtering: Fundamentals of Least Mean Squares with MATLAB, CRC Press, 2015.
(5) S. W. Smith, The Scientist and Engineer's Guide to Digital Signal Processing.
Annex: Implementation of an Adaptive Filter Using LMS (MATLAB)

t = 1:0.025:5;
desired = 5*sin(2*3.*t);            % desired low-frequency signal
noise   = 5*sin(2*50*3.*t);         % interfering sinusoid
refer   = 5*sin(2*50*3.*t + 3/20);  % phase-shifted reference of the noise
primary = desired + noise;          % received (corrupted) signal

subplot(4,1,1); plot(t,desired); ylabel('desired');
subplot(4,1,2); plot(t,refer);   ylabel('refer');
subplot(4,1,3); plot(t,primary); ylabel('primary');

order = 2;                          % number of filter taps
mu = 0.005;                         % step size
n = length(primary);
delayed   = zeros(1,order);         % tap-delay line for the reference
adap      = zeros(1,order);         % adaptive filter weights
cancelled = zeros(1,n);             % error signal = recovered desired signal

for k = 1:n
    delayed(1) = refer(k);                    % newest reference sample
    y = delayed*adap';                        % filter output (noise estimate)
    cancelled(k) = primary(k) - y;            % error = primary - noise estimate
    adap = adap + 2*mu*cancelled(k).*delayed; % LMS weight update (Eq. 21)
    delayed(2:order) = delayed(1:order-1);    % shift the delay line
end

subplot(4,1,4); plot(t,cancelled); ylabel('cancelled');