Michelle Effros, California Institute of Technology
Network Coding: A Unified Framework for Source Coding, Channel Coding, and Routing in Networks
Abstract
We generalize linear network coding techniques from noise-free
networks with uncorrelated sources to accommodate both correlation
between sources and additive noise across point-to-point and multiple
access channels over finite fields. In noise-free channels, network
coding of correlated sources is effectively a source coding problem;
we say that a network code accomplishes optimal source coding on a
noise-free network if network codes can be used to transmit any source
with joint entropy lower than the network capacity with asymptotically
negligible error probability. In additive noise channels with
uncorrelated sources, network coding is effectively a channel coding
problem; we say that a network code accomplishes optimal channel
coding on an additive noise channel if network codes can be used to
transmit any source with rate lower than the noisy channel capacity
with asymptotically negligible error probability. We demonstrate that
randomly designed linear network codes can achieve optimal source
coding and optimal channel coding for both point-to-point and multiple
access systems when the source, channel input, and channel output
alphabets are identical finite fields. In source coding applications,
our algorithm gives a random construction of the linear source codes
that Csiszar proved achievable using a sphere packing argument. In
channel coding applications, we demonstrate that randomly drawn linear
codes achieve capacity for both point-to-point and multiple access
networks. Following Csiszar, we also explore the relationship between
linear network source and channel codes, demonstrating codes that
achieve channel coding using a source code on the additive noise.
Finally, we treat the topic of joint source-channel coding for
networks. We begin by proving that separation holds under the assumed
system model, where channel input and output alphabets are identical
finite fields. We then note that in source and channel coding
applications where linear network codes achieve the optimal
performance, optimal joint source-channel coding design, like separate
design, can be accomplished with simple random code design, thereby
eliminating the need to use separation. The result is that network
codes not only unify source coding, channel coding, and routing into a
single conceptual framework, but also accomplish these three
previously disparate goals in a single code.
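The random linear code construction described above can be illustrated with a minimal sketch over GF(2), the smallest finite field: a node forwards random linear combinations of the source packets, and the receiver inverts the coefficient matrix once it has full rank. The packet sizes, the amount of redundancy, and the single-node topology below are our illustrative assumptions, not the authors' construction.

```python
import random

def gf2_solve(coeffs, packets):
    """Gauss-Jordan elimination over GF(2): recover the k source packets
    from m >= k received combinations; returns None if the coefficient
    matrix does not (yet) have full column rank."""
    m, k = len(coeffs), len(coeffs[0])
    rows = [coeffs[i][:] + packets[i][:] for i in range(m)]
    for col in range(k):
        pivot = next((r for r in range(col, m) if rows[r][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(m):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][k:] for i in range(k)]

random.seed(0)
k, length = 4, 8  # four source packets of eight bits each (arbitrary sizes)
sources = [[random.randint(0, 1) for _ in range(length)] for _ in range(k)]

# A node forwards random GF(2)-linear combinations of the sources, with the
# coefficient vector carried alongside each packet; the receiver keeps
# collecting combinations until it can invert the coefficient matrix.
coeffs, packets, decoded = [], [], None
while decoded is None:
    c = [random.randint(0, 1) for _ in range(k)]
    p = [0] * length
    for j in range(k):
        if c[j]:
            p = [a ^ b for a, b in zip(p, sources[j])]
    coeffs.append(c)
    packets.append(p)
    if len(coeffs) >= k:
        decoded = gf2_solve(coeffs, packets)

assert decoded == sources  # sources recovered exactly
```

Decoding succeeds as soon as the randomly drawn coefficient vectors span GF(2)^k, which is the same full-rank event that underlies the random-design arguments in the abstract.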
Joint work with Muriel Medard, Tracey Ho, David Karger, and Ralf Koetter.
Michael Gastpar, University of California, Berkeley
On Source-Channel Communication in Networks
The separation theorem provides a very general solution to the
point-to-point lossy source-channel communication problem by equating
the rate-distortion function of the source with the capacity-cost
function of the channel. It is well-known that this principle does
not extend to networks in general. We discuss this issue, and propose
a different approach that we call "measure-matching".
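In standard notation (our transcription, with D the target distortion and Gamma the channel cost), the point-to-point separation principle invoked above reads:

```latex
% Lossy transmission of the source over the channel at distortion D and
% cost \Gamma is possible if and only if
R(D) \;\le\; C(\Gamma),
% where R(D) is the source's rate-distortion function, C(\Gamma) is the
% channel's capacity-cost function, and one channel use is made per
% source symbol.
```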
For a particular Gaussian source-channel network example (inspired by
a sensor network situation), we show that the minimum distortion for
the optimal source code followed by the optimal channel code decreases
like 1/log M, where M is the number of nodes in the network. In
contrast, there is a measure-matching scheme whose distortion
decreases like 1/M, which we show to be the optimum.
Joint work with Martin Vetterli.
Sergio Servetto, Cornell University
The Reachback Channel in Wireless Sensor Networks
We consider the problem of reachback communication in wireless sensor
networks: in this problem, multiple sensors are deployed on a field,
and they collect local measurements of some random process which then
need to be encoded and reproduced at a remote location. In this talk
we present capacity results on one particular instance of this
problem, in which sensors cooperate in the form of eliminating
interference in channel access (e.g., by means of perfect time or
frequency division). This assumption leads to a model in which
correlated sources are to be encoded separately, and transmitted over
an array of independent channels. And for this setup, we obtain the
following results:
- First, we give an exact characterization of the joint source/channel
capacity region, i.e., we give conditions on the sources and the
channels under which reliable communication is possible in this
context. These conditions generalize in a very meaningful way the
condition that ${\cal H}(X) < C$ (entropy of a source less than
channel capacity) for point-to-point channels, and surprisingly,
give a separation theorem for this problem as well---we know of no
other network information theory problem for which source and
channel separation holds.
- Then, we show how the classical multiterminal source coding problem
arises as a natural rate/distortion generalization of our problem
above. And in this case, we develop a new coding strategy. A key
feature of our codes is that they work for a large family of
auxiliary random variables $(W,Z)$ satisfying only two short Markov
chain conditions ($W-U-V$ and $U-V-Z$), but not the more restrictive
long chain condition $W-U-V-Z$ typical of all previously known
codes.
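For two sources, the generalized condition in the first item has a Slepian-Wolf-type form: each conditional entropy and the joint entropy must fit within the corresponding channel capacities. The sketch below checks such conditions numerically; the joint distribution, the capacity values, and the exact form of the region are our illustrative assumptions, not the talk's statement.

```python
from math import log2

def H(pmf):
    """Entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

# Hypothetical joint pmf of two correlated binary sources U and V.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
pU = {u: sum(p for (a, _), p in joint.items() if a == u) for u in (0, 1)}
pV = {v: sum(p for (_, b), p in joint.items() if b == v) for v in (0, 1)}

H_UV = H(joint)                 # joint entropy H(U,V)
H_U_given_V = H_UV - H(pV)      # H(U|V) = H(U,V) - H(V)
H_V_given_U = H_UV - H(pU)      # H(V|U) = H(U,V) - H(U)

# Hypothetical capacities of the two independent channels (bits per use).
C1, C2 = 0.9, 0.9

# Slepian-Wolf-type conditions generalizing H(X) < C:
reachable = (H_U_given_V <= C1 and H_V_given_U <= C2 and H_UV <= C1 + C2)
print(reachable)  # -> True for these numbers
```

Here H(U,V) is about 1.72 bits and each conditional entropy about 0.72 bits, so the pair of 0.9-bit channels suffices; shrinking either capacity below 0.72 bits makes the check fail.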
If time permits, we will talk briefly about other aspects of the
design of a distributed transmitter currently under development at
Cornell.
Joint work with Joao Barros, An-swol Hu, Christina Peraki.
Emina Soljanin, Bell Laboratories, Lucent Technologies
Hybrid ARQ in Wireless Networks
In mobile packet data transmission, a special coding scheme,
known as type-II hybrid ARQ (HARQ), achieves higher
throughput efficiency than ordinary turbo codes by adapting
its error-correcting-code redundancy to the fluctuating channel
conditions characteristic of this application.
A type-II HARQ protocol implements so-called incremental
redundancy, which operates as follows. Initially, the
information bits are encoded by a "mother" code and a selected
number of parity bits are transmitted. If a retransmission is
requested, only additional selected parity bits are transmitted.
At the receiving end, the additional parity bits together with
the previously received parity bits form a codeword of a less
heavily punctured mother code, allowing for an increase in the
error-correction capability. This procedure is repeated after
each subsequent retransmission request until all the parity bits
of the mother code have been transmitted. We analyze the asymptotic performance
of an HARQ scheme based on randomly punctured turbo codes.
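The incremental-redundancy schedule described above amounts to simple bookkeeping: each retransmission adds parity bits, lowering the effective rate of the punctured code seen by the receiver. The block lengths and chunk size below are hypothetical, and decoding itself is not modelled.

```python
# Incremental-redundancy bookkeeping for a toy type-II HARQ session.
K = 1024                 # information bits (illustrative)
MOTHER_PARITY = 2048     # parity bits of the rate-1/3 mother code
CHUNK = 512              # parity bits sent per (re)transmission

def harq_rates(k, parity_pool, chunk):
    """Effective code rate after each transmission round, as the receiver
    accumulates parity bits of an ever less punctured mother code."""
    sent, rates = 0, []
    while sent < parity_pool:
        sent = min(sent + chunk, parity_pool)
        rates.append(k / (k + sent))  # rate of the punctured code so far
    return rates

rates = harq_rates(K, MOTHER_PARITY, CHUNK)
# Rate falls from 2/3 toward the mother-code rate 1/3 as redundancy grows.
print([round(r, 3) for r in rates])  # -> [0.667, 0.5, 0.4, 0.333]
```

Once all parity bits of the mother code have been sent, the pool is exhausted and the protocol can only repeat, which is where the asymptotic analysis in the abstract takes over.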
Leandros Tassiulas, University of Maryland, College Park
Fundamental Limits and Quality of Service Provisioning in Wireless Networks
In this talk we present a methodology for providing QoS in wireless
networks that deals effectively with the main peculiarity
distinguishing them from their wireline counterparts, namely the need
to coordinate transmissions of neighboring interfering links.
An end-to-end session may specify its bandwidth requirements in terms
of a constant bit rate, a guaranteed minimum rate with best effort
above that, a minimum necessary rate together with a maximum rate, or
a combination of the above. Those requirements can be mapped to
appropriate quantitative requirements on the rate of each session.
Allocating the rates according to the solution of the resulting maxmin
optimization problem then ensures that the requirements of the users
are satisfied, while any residual bandwidth is shared fairly among
those sessions requiring additional best-effort bandwidth. We provide
a distributed algorithm that
does virtual bandwidth allocation to the different flows based on a
token mechanism. Then scheduling of transmissions is done based on the
bandwidth that has been virtually allocated already. That algorithm
has been proven optimal, providing a maxmin fair bandwidth allocation.
In fact, despite the large amount of recent work on QoS provisioning
in wireless networks, it is the first provably fair
bandwidth-allocation algorithm for a wireless system that spans more
than one node. It
provides a general framework in which practical QoS algorithms can be
pursued.
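The maxmin fair objective underlying the rate allocation can be illustrated by the standard progressive-filling computation below. This is a generic centralized sketch with hypothetical numbers, not the paper's token-based distributed algorithm.

```python
def maxmin_fair(capacity, demands):
    """Progressive filling: share `capacity` among sessions so that no
    session can gain rate without reducing the rate of a session with an
    equal or smaller allocation; `demands` caps each session's rate."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)   # equal share for unsaturated sessions
        for i in sorted(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
        active = {i for i in active if demands[i] - alloc[i] > 1e-12}
    return alloc

# Illustrative numbers: 10 units of shared bandwidth, four sessions with
# maximum useful rates 2, 3, 6 and 8.
print(maxmin_fair(10.0, [2, 3, 6, 8]))
```

For these numbers the first session saturates at its cap of 2, and the remaining 8 units are split equally (8/3 each) among the other three sessions, which is the maxmin fair point.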
Emre Telatar, EPFL Lausanne, Switzerland
Job Scheduling and Multiple Access
Information theory views multiple access as a question of combating
noise and interference, assigning the issue of bursty packet arrival
to appropriate source coding and thereby forgoing any chance of
discussion of delay. Network theory, on the other hand, views
multiple access as a question of resource allocation and distributed
scheduling, taking advantage of the bursty arrival of packets, but
ignoring the issues of noise and interference. Neither view is
entirely satisfactory. We present here a toy problem, a Gaussian
multiple access channel with Poisson packet arrivals. A certain
analogy between this problem and multi-server job queues suggests a
certain transmission strategy for good delay performance. We also
provide a lower bound on packet delay for any transmission strategy
and any arrival process of a given rate. Surprisingly, the
performance of the suggested transmission strategy is essentially
identical to the lower bound at high arrival rates.
Joint work with David Tse and Sibi Raj.
Daniela Tuninetti, EPFL Lausanne, Switzerland
On Two-user Fading Gaussian Broadcast Channels with Perfect Channel
State Information at the Receivers
Various settings of the time-varying (faded) two-user broadcast
channel (BC), with channel state information provided to the receivers
only, are considered. In certain cases the capacity region is
characterized and computed.
For a two-user BC, the capacity region is known only if one channel is
"superior" to the other, i.e., if the overall channel is "degraded"
(R. Gallager; P. Bergmans), "less noisy" (K. Marton and J. Korner), or
"more capable" (A. El Gamal), or if it has "deterministic components"
(K. Marton; S. Gelfand and M. Pinsker). Moreover, the capacity region
is also known if one of the private messages has rate zero, the
so-called "degraded message set" case (J. Korner and K. Marton). For
all the other cases, the best known inner bound region is due to
K. Marton, and the tightest known outer bound region is due to
J. Korner and K. Marton.
In this work, we consider the two-user single-antenna BC where the
information carrying signal fades and is corrupted by additive
Gaussian noise. We assume that the instantaneous channel gains are
known at the receivers but not at the transmitter. Clearly this
channel, depending on the joint fading distribution, may or may not
fall into one of the above cited categories. Hence, in general, a
single letter expression for the capacity region is not available.
We first derive an inner bound region (as a special case of Marton's
inner bound region) which is valid for any fading distribution and has
the flavor of the "more capable" capacity region of A. El Gamal. By
specializing this inner bound to the case of Gaussian inputs, we show
that the receiver with the largest single-user capacity must decode
the two messages jointly, and that successive cancellation might incur
an inherent degradation. Then, by using the entropy power inequality,
we show that Gaussian inputs exhaust J. Korner and K. Marton's outer
bound region. We conclude by stating the conditions, expressed in
terms of the difference of the single-user capacities, under which our
inner bound meets the outer bound region, thus yielding the capacity
region.
Joint work with Shlomo Shamai (Shitz), Technion, Israel.
Frans Willems, Eindhoven University of Technology and
Philips Research Laboratories, Eindhoven, The Netherlands
Coding Theorems for Reversible Embedding
We consider embedding of messages (data-hiding) into i.i.d. source
(host) sequences. As in Fridrich et al. [2002] we focus on the case
where reconstruction of the source sequence is desired. First we
consider the rate-distortion function corresponding to this setup.
Then we generalize this result in two directions. (A) For the case
where only partial reconstruction is required, we investigate the
fundamental limits; here the rate of the composite sequence is also
taken into account. The optimal trade-off between embedding rate,
composite rate,
distortion between source sequence and composite sequence, and
distortion between source sequence and restoration sequence is given
by the admissible region. This admissible region is discussed here.
(B) So far we have considered reconstruction based on the composite
sequence, i.e., the lossless case. In the robust version of this setup
the host sequence is conveyed to the decoder via a discrete memoryless
channel. We also determine the rate-distortion function for this
setup. Although the coding theorems are interesting in their own
right,
there are some consequences of these results that will get some extra
attention.
Joint work with Ton Kalker, Eindhoven University of Technology and
Philips Research Laboratories, Eindhoven, The Netherlands.
Edmund Yeh, Yale University
Throughput and Delay Optimal Resource Allocation in Multiple Access Fading Channels
The central problem in multiple-access (many-to-one) communications is
the design of mechanisms for resource sharing. The development of
these mechanisms is influenced by the stochastic nature of data
traffic as well as the choice of coding and modulation schemes.
Traditionally, these two sets of issues have been analyzed in virtual
isolation from each other: some bodies of literature focus mainly on
network-layer throughput and delay, while others concentrate on
physical-layer channel modelling, coding, and detection.
We present an integrated, cross-layer view of multiple-access
communications over fading channels. This approach combines basic
communication limits with network quality-of-service issues such as
throughput and delay. We consider a multiple-access model with random
packet arrivals and queueing at the transmitters. Optimal coding is
assumed at the physical layer, so that all rates in the
information-theoretic capacity region are achievable. We then
consider resource allocation policies which assign power and rate
dynamically as a function of the joint fading state and the queue
state. Policies which optimize network throughput and delay are
characterized.
Joint work with Aaron Cohen, Brown University.
Wei Yu, University of Toronto, Canada
The Structure of the Worst Noise in Gaussian Vector Broadcast
Channels
The sum capacity of the Gaussian vector broadcast channel has recently
been shown to be the saddle point of a Gaussian mutual information
game, in which the transmitter maximizes the mutual information by
choosing the best transmit covariance matrix subject to a power
constraint, and the receiver minimizes the mutual information by
choosing the worst noise covariance matrix subject to a diagonal
constraint. This result can be established using either a
decision-feedback equalization approach or a duality
approach. However, a complete characterization of the saddle point is
not yet available to the best of the author's knowledge. This paper
gives such a characterization by combining ideas from both
approaches. The main results are as follows: first, the
decision-feedback equalization approach is generalized to vector
broadcast channels with singular worst noise, and a generalized
necessary and sufficient condition for the saddle point is
given. Second, it is shown that the uplink-downlink duality can be
derived directly from the saddle point condition, and the worst noise
is directly related to the dual variables associated with the saddle
point. Third, an efficient computational method for the worst noise
covariance matrix is proposed. The proposed algorithm is based on a
dual decomposition of the optimization problem. Fourth, it is shown
that the worst noise is not unique, and all possible worst-noise
covariance matrices are related to each other by a linear estimation
matrix. Finally, it is shown that the duality approach can be
generalized to Gaussian vector broadcast channels with arbitrary
linear input covariance constraints.
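In standard notation (our transcription, with H the channel matrix, S_x the transmit covariance, and S_z the noise covariance), the saddle point described above is:

```latex
% Sum capacity of the Gaussian vector broadcast channel as a mutual
% information game: the transmitter maximizes over covariances within
% the power budget P, and the fictitious adversary minimizes over noise
% covariances whose diagonal entries are fixed to the actual
% per-receiver noise variances.
C_{\mathrm{sum}}
  \;=\; \max_{\substack{S_x \succeq 0 \\ \operatorname{tr}(S_x) \le P}}
        \;\min_{\substack{S_z \succeq 0 \\ [S_z]_{ii} = \sigma_i^2}}
        \;\frac{1}{2}\,
        \log \frac{\det\!\left(H S_x H^{\top} + S_z\right)}{\det S_z}
```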
Ram Zamir, MIT/Tel Aviv
The Rate Loss in Writing on Dirty Paper
Costa's "writing on dirty paper" [Costa83] is commonly viewed as an
information theoretic dual of the quadratic-Gaussian case of
Wyner-Ziv's rate-distortion with side information at the decoder
[WZ76,Wyner78]. In both scenarios a similar detection/quantization
step is done with incomplete information: in the former it is the
decoder who is ignorant of the interference (which is available at, or
is generated by the transmitter), while in the latter it is the
encoder who is ignorant of the correlated source (available at the
decoder). And in both scenarios we observe a surprising fact: optimum
coding schemes do not lose anything in performance relative to the
complete information case. In the former scenario the capacity is the
same as if the interference was known also at the decoder (which is
equivalent to not having interference at all); in the latter scenario
the Wyner-Ziv rate-distortion function coincides with the conditional
rate-distortion function (which is equivalent to predictive coding
based on the correlated source). This dual nature is also reflected
in geometric interpretations of the information theoretic solutions
[CPR99,SEG00,BCW01]. Hence, it is not surprising that efficient
coding algorithms which were proposed for the two scenarios share
similar concepts: nested codes, modulo-lattice arithmetic, syndrome
coding, and coset codes; see [ZSE02] and the references therein.
Inspired by this striking similarity, and by Shannon's famous comment
about source-channel coding duality [Shannon1959], it is tempting to
ask whether the scope of this duality goes beyond the
quadratic-Gaussian case. Shannon's, Wyner-Ziv's, and
Gelfand-Pinsker's classic papers [Shannon58,WZ76,GP83] allow one to
analyze more general cases of source/channel coding with side
information at the decoder/encoder. Although the general solutions
are more involved, they exhibit a formal resemblance. This led
researchers to believe that duality indeed plays a significant role
even in more general cases [CC,BCW,CPR,CM,...].
A particular example which seems to support this belief is the
binary-symmetric Hamming version of the Costa and Wyner-Ziv problems.
In both scenarios it is an example of a *positive* rate loss: the WZ
rate-distortion curve is strictly above the conditional
rate-distortion function [WZ76] while the modified Costa capacity-cost
curve is strictly below the zero-interference capacity-cost curve
[BCW,CPR]. Furthermore, similar "algebraic binning" versions of the
coding schemes used in the quadratic-Gaussian case achieve optimum
performance in both scenarios [ZSE02]. Does this example confirm, and
hence complete, the story of duality?
In this paper we provide a *negative* indication about the scope of
duality, by demonstrating that the rate loss may actually behave very
differently in the two scenarios. The rate loss in the WZ problem was
investigated in [WZ96], where it was shown that for a difference
distortion measure this loss is bounded for all
source-side-information pairs and all distortion levels by a universal
constant. For a quadratic distortion measure, this constant is half a
bit. The "natural dual" of the quadratic Wyner-Ziv scenario is the
generalized quadratic Costa problem:
Y = X + S + Z
where the transmitter output X satisfies a power constraint P, S is an
arbitrary interference known at the transmitter, and Z is some noise
variable (not necessarily Gaussian). The rate loss in the Costa
problem is the extra bit rate we could transmit if S was known at the
receiver as well, or equivalently, if the channel was Y = X + Z. We
observe that if the power of Z is bounded by P then the rate loss is
at most half a bit. However, if Z is not restricted, we find Z's for
which the rate loss is *arbitrarily large*. Thus, from a rate-loss
viewpoint, duality breaks down.
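In our notation (not the talk's), the rate loss discussed above can be written compactly as:

```latex
% Rate loss of the generalized Costa problem Y = X + S + Z:
L(P, Z) \;=\;
  \underbrace{\max_{\mathbb{E}[X^2] \le P} I(X;\, X + Z)}_{%
    \text{capacity if } S \text{ were known at the receiver}}
  \;-\;
  \underbrace{C_{\mathrm{Costa}}(P, Z)}_{%
    \text{capacity with } S \text{ known only at the transmitter}}
```

The observations above then read: L(P, Z) is at most half a bit whenever E[Z^2] <= P, while the supremum of L(P, Z) over unrestricted Z is unbounded.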
Joint work with Aaron Cohen, Brown University.