Question related to Lloyd-Max quantization
In signal detection, one often decides in favor of the signal point (codebook point) at the smallest Euclidean distance. What is the noise assumption that leads to that criterion, or is this, in general, the optimum choice? Explain.
Answer:
Under assumptions of high resolution and smooth densities, the quantization error behaves much like random "noise": it has little correlation with the signal and an approximately flat ("white") spectrum. This leads to an "additive-noise" model of quantizer error, in which the quantizer output is interpreted as the sum of the signal and white noise. If that additive noise is further taken to be Gaussian and independent of the signal, then choosing the codebook point at the smallest Euclidean distance is exactly the maximum-likelihood decision, and with equally likely signal points it minimizes the probability of error; this Gaussian, signal-independent, white-noise assumption is what justifies the minimum-distance criterion. The additive-noise model was later popularized, but it glosses over the fact that the "noise" is actually dependent on the signal and that the approximations are valid only under certain conditions, so minimum-distance detection is not the optimum choice in general. The equivalence of the two decision rules under the Gaussian assumption is illustrated below.
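The Python sketch below is only an illustration under assumed parameters (the 4-point constellation, noise level, and sample count are choices made for the example, not taken from the text): it decodes noisy observations both by picking the nearest codebook point in Euclidean distance and by maximizing the Gaussian log-likelihood, and confirms that the two rules make the same decisions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-point constellation (QPSK-like); points, noise level, and
# sample count are illustrative choices, not from the text.
codebook = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]]) / np.sqrt(2)

n, sigma = 10_000, 0.5
sent = rng.integers(0, len(codebook), size=n)                 # transmitted indices
received = codebook[sent] + sigma * rng.normal(size=(n, 2))   # AWGN channel

# Rule 1: pick the codebook point at the smallest Euclidean distance.
dists = np.linalg.norm(received[:, None, :] - codebook[None, :, :], axis=2)
nearest = dists.argmin(axis=1)

# Rule 2: maximum likelihood under additive white Gaussian noise,
# log p(y | x) = -||y - x||^2 / (2 * sigma^2) + const.
ml = (-dists**2 / (2 * sigma**2)).argmax(axis=1)

print("same decisions:", np.array_equal(nearest, ml))
print("symbol error rate:", (nearest != sent).mean())
```

Because the Gaussian log-likelihood is a monotone decreasing function of the Euclidean distance, the two rules always coincide; the equivalence breaks down once the noise is non-Gaussian or signal-dependent.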
Signal-independent quantization noise has generally been found to be perceptually desirable. This was the motivation for randomizing the action of the quantizer by adding a dither signal, a method introduced to make quantized images look better by replacing the artifacts produced by deterministic errors with random noise.
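As a rough illustration of why dither helps, the following sketch (step size, test signal, and dither distribution are assumed for the example, not from the text) quantizes a slowly varying signal with and without additive uniform dither and compares the lag-1 autocorrelation of the error: the plain error is a strongly correlated, deterministic artifact, while the dithered error behaves much more like white noise.

```python
import numpy as np

rng = np.random.default_rng(1)

delta = 0.25  # quantizer step size (illustrative)

def quantize(v):
    """Uniform mid-tread quantizer with step size delta."""
    return delta * np.round(v / delta)

t = np.arange(50_000)
x = np.sin(2 * np.pi * t / 2_000)   # slowly varying test signal (illustrative)

# Plain quantization: the error is a deterministic function of the signal,
# so for a slow input it is strongly correlated from sample to sample.
e_plain = quantize(x) - x

# Dithered quantization: uniform dither spanning one step is added before the
# quantizer, which makes the error far more noise-like.
d = rng.uniform(-delta / 2, delta / 2, size=x.shape)
e_dith = quantize(x + d) - x

def lag1_autocorr(e):
    """Normalized autocorrelation of the error at lag 1 (1 = highly colored, 0 = white-like)."""
    e = e - e.mean()
    return float(np.dot(e[:-1], e[1:]) / np.dot(e, e))

print("lag-1 autocorrelation, plain error:   ", round(lag1_autocorr(e_plain), 3))
print("lag-1 autocorrelation, dithered error:", round(lag1_autocorr(e_dith), 3))
```

Running this, the plain error's lag-1 autocorrelation is close to 1, while the dithered error's is close to 0, which is one way of seeing how dither trades structured, signal-dependent artifacts for unstructured random noise.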