Information Bottleneck

Let $X$ denote the signal (message) space with a fixed probability measure $p(x)$, and let $\tilde{X}$ denote its quantized codebook or compressed representation.

For each $x \in X$, we seek a (possibly stochastic) mapping to a representative, or codeword in a codebook, $\tilde{x} \in \tilde{X}$, characterized by a conditional p.d.f. $p(\tilde{x}|x)$. We would like to find a good compression $p(\tilde{x}|x)$.

1. Rate distortion theory

We are facing a tradeoff between the quality of the representation and the level of compression. In short, we want to use as few bits as possible to compress the original message while retaining as much information about it as possible.

In rate distortion theory such a constraint is provided through a distortion function,

$$d : X \times \tilde{X} \rightarrow \mathbb{R}^{+},$$

which should be small for good representations. The partitioning of $X$ induced by the mapping $p(\tilde{x}|x)$ has an expected distortion

$$\langle d(x,\tilde{x}) \rangle = \sum_{x,\tilde{x}} p(x)\, p(\tilde{x}|x)\, d(x,\tilde{x}).$$

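As a small worked example, the expected distortion is just an average of $d(x,\tilde{x})$ weighted by $p(x)\,p(\tilde{x}|x)$. The Python sketch below computes it for a toy source and codebook; the distributions and the distortion matrix are made-up illustrative values, not taken from the text.

```python
import numpy as np

# Toy example (illustrative values): three source symbols, two codewords.
p_x = np.array([0.5, 0.3, 0.2])              # p(x)
p_t_given_x = np.array([[0.9, 0.1],          # p(x_tilde | x), rows sum to 1
                        [0.2, 0.8],
                        [0.5, 0.5]])
d = np.array([[0.0, 1.0],                    # d(x, x_tilde)
              [1.0, 0.0],
              [0.25, 0.25]])

# <d(x, x_tilde)> = sum_{x, x_tilde} p(x) p(x_tilde|x) d(x, x_tilde)
expected_distortion = np.sum(p_x[:, None] * p_t_given_x * d)
print(expected_distortion)
```
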
There is a monotonic tradeoff between the rate of quantization and the expected distortion: the larger the rate, the smaller is the achievable distortion.

The celebrated rate distortion theorem of Shannon and Kolmogorov characterizes this tradeoff through the rate distortion function, $R(D)$, defined as the minimal achievable rate under a given constraint on the expected distortion:

$$R(D) = \min_{\{p(\tilde{x}|x)\,:\,\langle d(x,\tilde{x}) \rangle \leq D\}} I(X;\tilde{X}),$$

where the mutual information is defined as

$$I(X;\tilde{X}) = \sum_{x,\tilde{x}} p(x)\, p(\tilde{x}|x) \log \frac{p(\tilde{x}|x)}{p(\tilde{x})}.$$

Notice that the mutual information can be seen as the KL divergence between the joint distribution $p(x,\tilde{x})$ and the product of marginals $p(x)\,p(\tilde{x})$, which is non-negative and measures the dependency between the two variables. It reaches zero if and only if $X$ and $\tilde{X}$ are independent.
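
To make the KL-divergence view concrete, here is a minimal Python sketch (with toy distributions of my own choosing) that computes $I(X;\tilde{X})$ as $D_{\mathrm{KL}}\left[p(x,\tilde{x})\,\|\,p(x)p(\tilde{x})\right]$; it is positive when the variables are dependent and essentially zero when $p(\tilde{x}|x)$ does not depend on $x$.

```python
import numpy as np

def mutual_information(p_x, p_t_given_x):
    """I(X; X_tilde) computed as KL(p(x, x_tilde) || p(x) p(x_tilde)), in nats."""
    p_joint = p_x[:, None] * p_t_given_x           # p(x, x_tilde)
    p_t = p_joint.sum(axis=0)                      # marginal p(x_tilde)
    p_prod = p_x[:, None] * p_t[None, :]           # product of marginals
    mask = p_joint > 0                             # skip zero-probability cells
    return np.sum(p_joint[mask] * np.log(p_joint[mask] / p_prod[mask]))

# Toy example (illustrative values)
p_x = np.array([0.5, 0.3, 0.2])
p_t_given_x = np.array([[0.9, 0.1],
                        [0.2, 0.8],
                        [0.5, 0.5]])
print(mutual_information(p_x, p_t_given_x))                    # > 0: dependent
print(mutual_information(p_x, np.tile([0.6, 0.4], (3, 1))))    # ~ 0: independent
```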

Back to the optimization problem, it can be reformulated as a Lagrangian with Lagrange multiplier $\beta$. Here we would like to minimize the functional below:

$$\mathcal{F}\left[p(\tilde{x}|x)\right] = I(X;\tilde{X}) + \beta\, \langle d(x,\tilde{x}) \rangle.$$

This variational formulation has the following well known consequences:

Theorem

The solution of the variational problem,

$$\frac{\delta \mathcal{F}}{\delta p(\tilde{x}|x)} = 0,$$

for normalized distributions $p(\tilde{x}|x)$, is given by the exponential form

$$p(\tilde{x}|x) = \frac{p(\tilde{x})}{Z(x,\beta)} \exp\left(-\beta\, d(x,\tilde{x})\right),$$

where $Z(x,\beta)$ is a normalization (partition) function. Moreover, the Lagrange multiplier $\beta$, determined by the value of the expected distortion, $D$, is positive and satisfies

$$\frac{\delta R}{\delta D} = -\beta.$$
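
Because the exponential form involves the marginal $p(\tilde{x})$, which in turn depends on $p(\tilde{x}|x)$, the solution is typically found by alternating the two updates until they become self-consistent (a Blahut-Arimoto style iteration). Below is a minimal Python sketch under toy assumptions; $p(x)$, the distortion matrix, and the value of $\beta$ are illustrative choices, not from the text.

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    """Alternate p(x_tilde|x) and p(x_tilde) until self-consistent."""
    n_x, n_t = d.shape
    p_t = np.full(n_t, 1.0 / n_t)                  # initial marginal p(x_tilde)
    for _ in range(n_iter):
        # p(x_tilde|x) = p(x_tilde) exp(-beta d(x, x_tilde)) / Z(x, beta)
        w = p_t[None, :] * np.exp(-beta * d)
        p_t_given_x = w / w.sum(axis=1, keepdims=True)
        # marginal update: p(x_tilde) = sum_x p(x) p(x_tilde|x)
        p_t = p_x @ p_t_given_x
    return p_t_given_x, p_t

# Toy problem (illustrative values)
p_x = np.array([0.5, 0.3, 0.2])
d = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [0.25, 0.25]])
p_t_given_x, p_t = blahut_arimoto(p_x, d, beta=5.0)
print(np.round(p_t_given_x, 3))
```

Sweeping $\beta$ in such a sketch traces out the rate-distortion tradeoff: a large $\beta$ penalizes distortion heavily (higher rate, lower distortion), while a small $\beta$ collapses the codebook (lower rate, higher distortion).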

2. Information Bottleneck

Since the "right" distortion measure is rarely available, the problem of relevant quantization has to be addressed directly, by preserving the relevant information about another variable. The relevance variable, denoted here by $Y$, must not be independent of the original signal $X$; namely, they have positive mutual information $I(X;Y)$. It is assumed here that we have access to the joint distribution $p(x,y)$.

If $\tilde{X}$ is given by $X$ through a stochastic mapping $p(\tilde{x}|x)$, then

$$I(\tilde{X};Y) \leq I(X;Y).$$

Obviously, lossy compression cannot convey more information than the original data; this is the data processing inequality. There is also a tradeoff between compressing the representation and preserving meaningful information.

Here we want the compression $p(\tilde{x}|x)$ to keep a fixed amount of meaningful information about the relevant signal $Y$ while minimizing the number of bits from the original signal $X$. We still consider a constrained optimization problem:

$$\min_{p(\tilde{x}|x)} I(X;\tilde{X}) \quad \text{subject to} \quad I(\tilde{X};Y) \geq \text{const},$$

which can be reformulated as a Lagrangian

$$\mathcal{L}\left[p(\tilde{x}|x)\right] = I(X;\tilde{X}) - \beta\, I(\tilde{X};Y).$$

By varying the parameter $\beta$ one can explore the tradeoff between the preserved meaningful information and compression at various resolutions.

Theorem

The optimal assignment $p(\tilde{x}|x)$ satisfies the equation:

$$p(\tilde{x}|x) = \frac{p(\tilde{x})}{Z(x,\beta)} \exp\left(-\beta\, D_{\mathrm{KL}}\!\left[\,p(y|x)\,\big\|\,p(y|\tilde{x})\,\right]\right),$$

where the distribution $p(y|\tilde{x})$ in the exponent is given via Bayes' rule and the Markov chain condition $\tilde{X} \leftrightarrow X \leftrightarrow Y$, as

$$p(y|\tilde{x}) = \frac{1}{p(\tilde{x})} \sum_{x} p(y|x)\, p(\tilde{x}|x)\, p(x).$$

We must emphasize that this is only a formal solution, since $p(y|\tilde{x})$ in the exponential is defined implicitly in terms of the assignment mapping $p(\tilde{x}|x)$.

Remark: The Markov condition $\tilde{X} \leftrightarrow X \leftrightarrow Y$ actually states that $\tilde{X}$ and $Y$ are conditionally independent given $X$.
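
The formal solution, together with Bayes' rule for $p(y|\tilde{x})$ and the marginal update for $p(\tilde{x})$, suggests the same kind of self-consistent iteration as in rate distortion; this is the iterative IB algorithm of Tishby et al. Below is a minimal Python sketch; the joint distribution $p(x,y)$, the number of clusters, and $\beta$ are toy choices for illustration, not from the text.

```python
import numpy as np

def kl_rows(p, q):
    """KL divergence between every row of p and every row of q (in nats)."""
    eps = 1e-12
    return np.sum(p[:, None, :] * np.log((p[:, None, :] + eps) / (q[None, :, :] + eps)), axis=2)

def iterative_ib(p_xy, n_clusters, beta, n_iter=300, seed=0):
    """Self-consistent updates for p(x_tilde|x), p(x_tilde), and p(y|x_tilde)."""
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                         # p(x)
    p_y_given_x = p_xy / p_x[:, None]              # p(y|x)
    # random normalized initialization of p(x_tilde|x)
    p_t_given_x = rng.random((p_xy.shape[0], n_clusters))
    p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        p_t = p_x @ p_t_given_x                    # p(x_tilde)
        p_t = np.maximum(p_t, 1e-12)               # floor to avoid division by zero
        # Bayes' rule + Markov chain: p(y|x_tilde) = sum_x p(y|x) p(x_tilde|x) p(x) / p(x_tilde)
        p_y_given_t = (p_t_given_x * p_x[:, None]).T @ p_y_given_x / p_t[:, None]
        # p(x_tilde|x) proportional to p(x_tilde) exp(-beta KL[p(y|x) || p(y|x_tilde)])
        w = p_t[None, :] * np.exp(-beta * kl_rows(p_y_given_x, p_y_given_t))
        p_t_given_x = w / w.sum(axis=1, keepdims=True)
    return p_t_given_x, p_t, p_y_given_t

# Toy joint distribution p(x, y) (illustrative values only)
p_xy = np.array([[0.20, 0.05],
                 [0.15, 0.10],
                 [0.05, 0.20],
                 [0.05, 0.20]])
p_t_given_x, p_t, p_y_given_t = iterative_ib(p_xy, n_clusters=2, beta=10.0)
print(np.round(p_t_given_x, 3))
```

Varying $\beta$ here plays the same role as in Section 1: a small $\beta$ favors compression (small $I(X;\tilde{X})$), while a large $\beta$ favors preserving the meaningful information $I(\tilde{X};Y)$.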
