2. Adaptive Quantization
• Linear quantization
• Instantaneous companding: SNR only weakly dependent on X_max/σ_x for large μ-law compression (μ ≈ 100–500); a sketch follows after this list
• Optimum SNR: minimize σ_e² when σ_x² is known; requires a non-uniform distribution of quantization levels.
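As a rough illustration of the companding point above, here is a minimal Python sketch that applies μ-law compression before a fixed uniform quantizer; μ = 255, the 7-bit quantizer, and the Laplacian test signal are illustrative assumptions, not values from the slides.

    import numpy as np

    def mu_law_compress(x, x_max=1.0, mu=255.0):
        # Instantaneous companding: approximately logarithmic for mu*|x| >> x_max
        return np.sign(x) * x_max * np.log1p(mu * np.abs(x) / x_max) / np.log1p(mu)

    def mu_law_expand(y, x_max=1.0, mu=255.0):
        # Exact inverse of mu_law_compress
        return np.sign(y) * (x_max / mu) * np.expm1(np.abs(y) * np.log1p(mu) / x_max)

    def uniform_quantize(v, n_bits=7, v_max=1.0):
        # Mid-rise uniform quantizer with 2^n_bits levels over [-v_max, v_max]
        delta = 2.0 * v_max / 2 ** n_bits
        return delta * (np.floor(v / delta) + 0.5)

    # SNR changes by only a few dB while sigma_x varies over a 40 dB range
    for sigma in (0.25, 0.025, 0.0025):
        x = np.clip(np.random.laplace(scale=sigma, size=100_000), -1.0, 1.0)
        x_hat = mu_law_expand(uniform_quantize(mu_law_compress(x)))
        snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean((x - x_hat) ** 2))
        print(f"sigma_x = {sigma:6.4f}: SNR = {snr_db:4.1f} dB")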
Quantization dilemma: we want to choose the quantization step size large enough to accommodate the maximum peak-to-peak range of x[n], yet at the same time small enough to minimize the quantization error.
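The tension is easy to see numerically; in this sketch (the 4-bit quantizer and Laplacian input are assumptions for illustration) a small step size clips the peaks while a large one inflates the granular noise, with the best SNR in between.

    import numpy as np

    def quantize(x, delta, n_bits=4):
        # Uniform quantizer with 2^n_bits levels; saturates at the end levels
        half = 2 ** (n_bits - 1)
        return np.clip(np.round(x / delta), -half, half - 1) * delta

    x = np.random.laplace(scale=0.25, size=100_000)   # peaky, speech-like
    for delta in (0.02, 0.12, 0.5):                   # too small / moderate / too large
        e = x - quantize(x, delta)
        print(f"Delta = {delta:4.2f}: SNR = "
              f"{10 * np.log10(np.mean(x ** 2) / np.mean(e ** 2)):5.1f} dB")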
3. Solutions to the Quantization Dilemma
Adaptive Quantization:
• Solution 1 – let the step size vary to match the variance of the input signal: Δ[n]
• Solution 2 – use a variable gain, G[n], followed by a fixed quantizer step size, Δ: keep the signal variance of y[n] = G[n]x[n] constant
Case 1: Δ[n] proportional to σ_x: quantization levels and ranges are linearly scaled to match σ_x²; need to reliably estimate σ_x²
Case 2: G[n] proportional to 1/σ_x, to give σ_y² = constant
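Both cases can be sketched in a few lines of Python; the 128-sample blocks and the constant 0.25 are illustrative assumptions about how σ_x might be estimated and how the step size might be set.

    import numpy as np

    def block_sigma(x, block=128):
        # Short-time estimate of sigma_x, held constant within each block
        # (assumes len(x) is a multiple of the block length)
        sig = x.reshape(-1, block).std(axis=1) + 1e-12
        return np.repeat(sig, block)

    x = np.random.laplace(size=4096) * np.linspace(0.05, 1.0, 4096)
    sigma = block_sigma(x)

    # Case 1: the step size (hence levels and ranges) scales linearly with sigma_x
    delta_n = 0.25 * sigma
    x_hat1 = np.round(x / delta_n) * delta_n

    # Case 2: gain G[n] ~ 1/sigma_x keeps the variance of y[n] = G[n]x[n]
    # constant, so a fixed step size Delta sees a steady input
    g_n = 1.0 / sigma
    delta = 0.25
    x_hat2 = np.round(g_n * x / delta) * delta / g_n   # decoder undoes the gain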
4. Types of Adaptive Quantization
• Feed-forward adaptive quantizers estimate σ_x² from x[n] itself.
• Feedback adaptive quantizers adapt the step size, Δ, on the basis of the quantized signal, or equivalently the codewords, c[n].
• Instantaneous: amplitude changes reflect sample-to-sample variations in x[n]: rapid adaptation.
• Syllabic: amplitude changes reflect syllable-to-syllable variations in x[n]: slow adaptation.
• Adaptive quantization with one-word memory.
• Switched quantization.
5. Feed Forward Adaptation
Variable Step Size
• Assume a uniform quantizer with step size Δ[n]
• x[n] is quantized using Δ[n]: both c[n] and Δ[n] need to be transmitted to the decoder (see the sketch below)
• If c'[n] = c[n] and Δ'[n] = Δ[n], there are no errors in the channel and x'[n] = x[n]
• The decoder does not have x[n] from which to estimate Δ[n], so Δ[n] must be transmitted
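A minimal sketch of such a coder, assuming block-wise adaptation (the 3-bit quantizer and 128-sample blocks are illustrative choices): the encoder derives Δ[n] from x[n] and sends both c[n] and Δ[n] across the channel.

    import numpy as np

    def ff_encode(x, n_bits=3, block=128):
        codes, deltas = [], []
        for start in range(0, len(x), block):
            seg = x[start:start + block]
            delta = 8.0 * (seg.std() + 1e-12) / 2 ** n_bits   # cover ~ +/-4 sigma
            half = 2 ** (n_bits - 1)
            c = np.clip(np.round(seg / delta), -half, half - 1).astype(int)
            codes.append(c)
            deltas.append(delta)
        return codes, deltas          # both streams cross the channel

    def ff_decode(codes, deltas):
        # With c'[n] = c[n] and Delta'[n] = Delta[n], this reproduces the
        # encoder's quantized signal exactly
        return np.concatenate([c * d for c, d in zip(codes, deltas)])

    x = np.random.laplace(size=1024) * np.linspace(0.1, 1.0, 1024)
    x_hat = ff_decode(*ff_encode(x))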
6. Feed Forward Quantizer
• The time-varying gain is G[n]; both c[n] and G[n] need to be transmitted to the decoder.
• Ideally c'[n] = c[n] and G'[n] = G[n].
• G[n] cannot be estimated at the decoder: it has to be transmitted.
• Feed-forward systems estimate σ_x², then make Δ or the quantization levels proportional to σ_x, or make the gain inversely proportional to σ_x (sketch below).
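The gain variant differs only in what is normalized and transmitted; in this sketch (same illustrative block length as above) the decoder divides by G[n], which it cannot recover from c[n] alone.

    import numpy as np

    def ff_gain_encode(x, delta=0.25, block=128):
        codes, gains = [], []
        for start in range(0, len(x), block):
            seg = x[start:start + block]
            g = 1.0 / (seg.std() + 1e-12)       # G[n] proportional to 1/sigma_x
            codes.append(np.round(g * seg / delta).astype(int))
            gains.append(g)
        return codes, gains                     # c[n] and G[n] are transmitted

    def ff_gain_decode(codes, gains, delta=0.25):
        # y[n] = G[n]x[n] has roughly unit variance, so a fixed Delta suffices;
        # the decoder reverses the gain
        return np.concatenate([c * delta / g for c, g in zip(codes, gains)])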
7. Feedback Adaptive Quantization
• There is no need to send side information.
• However, the sensitivity of the adaptation to changing signal statistics is degraded, since only the output of the quantization encoder, rather than the original input, is used in the statistical analysis.
8. Adaptive Quantization with a One-Word Memory
(JAYANT QUANTIZER)
In backward adaptive quantization we do not have access to the input values when adapting the quantizer, and it would seem that the quantizer output must be observed for a long time before the quantizer can be adapted.
Nuggehally S. Jayant at Bell Labs showed that we do not need to observe the quantizer output over a long period of time: the step size can be adjusted after observing a single output.
Jayant named this quantization approach "quantization with one word memory"; the quantizer is better known as the Jayant quantizer.
Mathematically, the adaptation process can be represented as
Δ[n] = M_{l(n−1)} · Δ[n−1],
where l(n−1) is the quantization interval at time (n−1) and M_{l(n−1)} is the step-size multiplier assigned to that interval.
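Here is a sketch of a 3-bit Jayant quantizer in Python; the multiplier table is an illustrative assumption (multipliers below 1 for inner intervals shrink Δ, multipliers above 1 for outer intervals grow it), not Jayant's published values. Because Δ[n] depends only on the previous codeword, the decoder can run the same recursion without any side information.

    import numpy as np

    MULT = [0.8, 0.9, 1.25, 1.75]    # one multiplier per magnitude interval

    def jayant_quantize(x, delta0=0.1, dmin=1e-4, dmax=10.0):
        delta, x_hat = delta0, []
        for sample in x:
            # 3-bit mid-rise quantizer: magnitude interval l(n) in {0, 1, 2, 3}
            l = min(int(abs(sample) / delta), 3)
            x_hat.append(np.sign(sample) * (l + 0.5) * delta)
            # One-word memory: Delta[n] = M_{l(n-1)} * Delta[n-1], clamped
            delta = min(max(delta * MULT[l], dmin), dmax)
        return np.array(x_hat)

    x_hat = jayant_quantize(np.random.laplace(scale=0.3, size=1000))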
10. Switched Quantization
• This scheme shows improved performance even when the number of quantizers in the bank, L, is two.
• As L → ∞, switched quantization converges to the adaptive quantizer.
Fig.: Switched Quantization
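A sketch of the idea with a bank of L = 4 fixed quantizers (the variance grid, block length, and bit depth are illustrative assumptions); the per-block index selecting a quantizer is the only side information.

    import numpy as np

    SIGMAS = np.array([0.05, 0.2, 0.8, 3.2])       # bank of L = 4 designs

    def switched_quantize(x, n_bits=4, block=128):
        out = np.empty_like(x)
        for start in range(0, len(x), block):
            seg = x[start:start + block]
            k = int(np.argmin(np.abs(SIGMAS - seg.std())))  # nearest design
            delta = 8.0 * SIGMAS[k] / 2 ** n_bits           # step for +/-4 sigma
            out[start:start + block] = np.round(seg / delta) * delta
        return out        # the index k per block is transmitted as side info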
15. References
Paper
• A. Gersho, "Quantization," IEEE Communications Magazine, September 1977.
Book
• K. Sayood, Introduction to Data Compression.