DIMACS Center, Rutgers University, Piscataway, NJ

**Organizers:** **Piyush Gupta**, Bell Laboratories, pgupta@research.bell-labs.com; **Gerhard Kramer**, Bell Laboratories, gkr@research.bell-labs.com; **Adriaan J. van Wijngaarden**, Bell Laboratories, alw@research.bell-labs.com

We study the Gaussian multiaccess channel from the point of view of cooperative game theory and discuss several results relating to defining the value of the individual players in this game. Joint work with Richard J. La, University of Maryland, College Park.
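One standard way to define the value of individual players in a cooperative game is the Shapley value. The sketch below is purely illustrative (the characteristic function is a hypothetical stand-in giving each coalition the sum capacity of a unit-noise Gaussian MAC; it is not the talk's actual model):

```python
import math
from itertools import permutations

def shapley(players, v):
    """Shapley value: average each player's marginal contribution
    over all orderings in which the grand coalition can form."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = []
        for p in order:
            before = v(coalition)
            coalition.append(p)
            phi[p] += v(coalition) - before
    n_fact = math.factorial(len(players))
    return {p: total / n_fact for p, total in phi.items()}

# Hypothetical characteristic function: coalition sum capacity of a
# Gaussian MAC with unit noise, v(S) = (1/2) log2(1 + sum of powers in S).
powers = {"A": 1.0, "B": 3.0}
v = lambda S: 0.5 * math.log2(1.0 + sum(powers[p] for p in S))
values = shapley(list(powers), v)
print(values)
```

By construction the values sum to the grand-coalition capacity, so they split the achievable sum rate among the players.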

In this talk we examine an approach that uses multiuser information theory to design high-rate single-user codes for multiple-antenna (space-time) communications. Joint work with Naofal Al-Dhahir and A. R. Calderbank.

We generalize linear network coding techniques from noise-free networks with uncorrelated sources to accommodate both correlation between sources and additive noise across point-to-point and multiple access channels on finite fields. In noise-free channels, network coding of correlated sources is effectively a source coding problem; we say that a network code accomplishes optimal source coding on a noise-free network if network codes can be used to transmit any source with joint entropy lower than the network capacity with asymptotically negligible error probability. In additive noise channels with uncorrelated sources, network coding is effectively a channel coding problem; we say that a network code accomplishes optimal channel coding on an additive noise channel if network codes can be used to transmit any source with rate lower than the noisy channel capacity with asymptotically negligible error probability. We demonstrate that randomly designed linear network codes can achieve optimal source coding and optimal channel coding for both point-to-point and multiple access systems when the source, channel input, and channel output alphabets are identical finite fields. In source coding applications, our algorithm gives a random construction of the linear source codes that Csiszar proved achievable using a sphere packing argument. In channel coding applications, we demonstrate that randomly drawn linear codes achieve capacity for both point-to-point and multiple access networks. Following Csiszar, we also explore the relationship between linear network source and channel codes, demonstrating codes that achieve channel coding using a source code on the additive noise. Finally, we treat the topic of joint source-channel coding for networks. We begin by proving that separation holds under the assumed system model, where channel input and output alphabets are identical finite fields. 
We then note that in source and channel coding applications where linear network codes achieve the optimal performance, optimal joint source-channel code design, like separate design, can be accomplished with simple random code design, thereby eliminating the need to invoke separation. The result is that network codes not only unify source coding, channel coding, and routing into a single conceptual framework, but also accomplish these three previously disparate goals in a single code. Joint work with Muriel Medard, Tracey Ho, David Karger, and Ralf Koetter.
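A minimal sketch of random linear code design over the smallest finite field, GF(2) (a toy illustration of the idea, not the authors' construction): each coded packet is a random GF(2) combination of the source packets, and a receiver decodes by Gaussian elimination once the collected combinations have full rank.

```python
import random

def random_linear_encode(packets, num_coded, rng):
    """Each coded packet is a random GF(2) (XOR) combination of the sources."""
    k = len(packets)
    coeffs, coded = [], []
    for _ in range(num_coded):
        c = [rng.randrange(2) for _ in range(k)]
        p = [0] * len(packets[0])
        for j, cj in enumerate(c):
            if cj:
                p = [a ^ b for a, b in zip(p, packets[j])]
        coeffs.append(c)
        coded.append(p)
    return coeffs, coded

def gf2_solve(coeffs, coded, k):
    """Gaussian elimination over GF(2); returns the k source packets,
    or None if the random coefficient matrix has rank < k."""
    rows = [c[:] + p[:] for c, p in zip(coeffs, coded)]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][k:] for i in range(k)]
```

With a few excess coded packets the random matrix is full rank with high probability, which is the sense in which random design "just works" here.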

In this talk we introduce an instance of coupling between information theory and networking by exploiting the multiple description representation of signals when using redundant routes through a network. A distortion function is introduced that includes a measure of the usual signal distortion and a measure of network delay. A procedure is then proposed for deciding on the rates of the multiple description representations in conjunction with choosing which route to use. An intelligent switch is also used to drop packets that exceed their deadlines, and the notion of controlling the rates of the representations is introduced. Performance evaluation shows the advantage of joint control of networking and compression parameters. The network model is very simple, and this work is intended as a first step toward further exploiting the coupling between network transmission and signal representation, which was originally motivated by energy-efficiency considerations that will also be discussed.

The broadcast nature of wireless communications suggests that a source signal transmitted towards the destination can be overheard at neighboring nodes. Cooperative communications refers to processing this overheard information at the surrounding nodes and retransmitting it towards the destination to create spatial diversity, thereby obtaining higher throughput and reliability. We describe information-theoretic models suitable for cooperation, study achievable rate regions and outage probabilities, and describe channel coding techniques that allow us to exploit the diversity advantages of cooperation. Joint work with Behnaam Aazhang, Andrew Sendonaris and Andrej Stefanov.

The separation theorem provides a very general solution to the point-to-point lossy source-channel communication problem by equating the rate-distortion function of the source with the capacity-cost function of the channel. It is well-known that this principle does not extend to networks in general. We discuss this issue, and propose a different approach that we call "measure-matching". For a particular Gaussian source-channel network example (inspired by a sensor network situation), we show that the minimum distortion for the optimal source code followed by the optimal channel code decreases like 1/log M, where M is the number of nodes in the network. In contrast, there is a measure-matching scheme whose distortion decreases like 1/M, which we show to be the optimum. Joint work with Martin Vetterli.
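Purely as an arithmetic illustration of the scaling gap claimed above (the constants are chosen arbitrarily; only the dependence on M is meaningful), the ratio between the two distortion laws grows without bound in the number of nodes:

```python
import math

# D_sep ~ 1/log M (optimal source code followed by optimal channel code)
# versus D_match ~ 1/M (measure matching); constants arbitrary.
for M in (10, 100, 1000, 10000):
    d_sep, d_match = 1.0 / math.log(M), 1.0 / M
    print(M, d_sep, d_match, d_sep / d_match)
```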

We investigate the structure of network coding problems and their solutions. In particular, the connection between network coding and state-space realizations of behaviors of linear systems is addressed. This connection reveals interesting parallels between the two problems and lets us employ theorems from either area in both contexts.

How much information can wireless networks transport? How should they be operated? We determine the scaling laws for the traffic-carrying capacity of such networks, and the information-theoretically order-optimal mode of operation. Joint work with L.-L. Xie.

We address the problem of generating common randomness by an arbitrary number of terminals with correlated observations. Applications include coding in certain communication situations and encrypted communication in cryptography. In particular, we consider the problem of characterizing the secret key (SK) capacity for an arbitrary number of terminals, each of which observes a distinct component of a set of correlated signals, with unconstrained public communication allowed between these terminals. Our main result is the determination of the SK-capacity for an arbitrary subset of terminals, with the remaining terminals serving as "helpers," when an eavesdropper observes the communication between the terminals but does not have access to any other information. We also investigate the private key (PK) capacity when the eavesdropper additionally wiretaps some of the helper terminals, from which the key must then be concealed as well. Joint work with Imre Csiszar.

Duality in multi-user source and channel coding problems is considered. We provide a generalization of our earlier work on duality in source and channel coding with side information to the more general case of multi-terminal problems with limited collaboration among the terminals.

Concurrency is a fundamental concept in computer science, concerned with the study of systems involving multiple processes. The order of events in such a system is unpredictable because events occurring in the individual processes are independent. Trace theory is a successful model for the execution of concurrent processes which employs congruence classes of words over partially commutative alphabets. We consider a rate-distortion problem in which the objective is to reproduce a string that is equivalent to the original string.
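The equivalence underlying this reproduction criterion can be tested concretely. A toy sketch (my own illustration, with the independence relation supplied explicitly): two words are trace-equivalent iff their projections onto every pair of non-commuting letters coincide, and every letter occurs equally often in both.

```python
from itertools import combinations

def trace_equivalent(u, v, independent):
    """Mazurkiewicz trace equivalence via the projection lemma:
    u ~ v iff the projections of u and v onto every pair of
    NON-commuting letters agree, and all letter counts match.
    `independent` is a set of frozensets of commuting letter pairs."""
    alphabet = set(u) | set(v)
    for a, b in combinations(sorted(alphabet), 2):
        if frozenset((a, b)) in independent:
            continue  # these two letters commute; no constraint
        proj = lambda w: [c for c in w if c in (a, b)]
        if proj(u) != proj(v):
            return False
    return all(u.count(c) == v.count(c) for c in alphabet)
```

For example, with `a` and `b` commuting, "ab" and "ba" lie in the same congruence class, while "ac" and "ca" do not if `a` and `c` are dependent.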

We consider the problem of reachback communication in wireless sensor networks: multiple sensors are deployed on a field, and they collect local measurements of some random process which then need to be encoded and reproduced at a remote location. In this talk we present capacity results for one particular instance of this problem, in which sensors cooperate by eliminating interference in channel access (e.g., by means of perfect time or frequency division). This assumption leads to a model in which correlated sources are to be encoded separately and transmitted over an array of independent channels. For this setup, we obtain the following results:

- First, we give an exact characterization of the joint source/channel capacity region, i.e., we give conditions on the sources and the channels under which reliable communication is possible in this context. These conditions generalize in a very meaningful way the condition ${\cal H}(X) < C$ (entropy of a source less than channel capacity) for point-to-point channels and, surprisingly, give a separation theorem for this problem as well---we know of no other network information theory problem for which source and channel separation holds.
- Then, we show how the classical multiterminal source coding problem arises as a natural rate/distortion generalization of the problem above, and for this case we develop a new coding strategy. A key feature of our codes is that they work for a large family of auxiliary random variables $(W,Z)$ satisfying only two short Markov chain conditions ($W-U-V$ and $U-V-Z$), but not the more restrictive long chain condition $W-U-V-Z$ typical of all previously known codes.

If time permits, we will talk briefly about other aspects of the design of a distributed transmitter currently under development at Cornell. Joint work with Joao Barros, An-swol Hu, Christina Peraki.

In mobile packet data transmission, a special coding scheme, known as type-II hybrid ARQ (HARQ), achieves higher throughput efficiency than ordinary turbo codes by adapting its error-correcting code redundancy to the fluctuating channel conditions characteristic of this application. A type-II HARQ protocol implements so-called incremental redundancy, which operates as follows. Initially, the information bits are encoded by a "mother" code and a selected number of parity bits are transmitted. If a retransmission is requested, only additional selected parity bits are transmitted. At the receiving end, the additional parity bits together with the previously received parity bits form a codeword of a punctured mother code, allowing for an increase in the error-correction capability. This procedure is repeated after each subsequent retransmission request until all the parity bits of the mother code have been transmitted. We analyze the asymptotic performance of an HARQ scheme based on randomly punctured turbo codes.
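The incremental-redundancy loop described above can be sketched as follows. Here a simple repetition code stands in for the punctured turbo mother code, and an oracle success check stands in for a CRC; both are simplifications for illustration, not the scheme analyzed in the talk.

```python
def harq_transmit(info, channel, check, max_rounds=3):
    """Type-II HARQ sketch: each round delivers one more redundancy
    increment; the receiver decodes over everything received so far."""
    received = []
    for rnd in range(max_rounds):
        received.append(channel(info, rnd))   # one more increment arrives
        # Stand-in decoder: per-bit majority vote over accumulated copies.
        est = [1 if 2 * sum(copy[i] for copy in received) > len(received) else 0
               for i in range(len(info))]
        if check(est):                        # CRC stand-in
            return est, rnd + 1
    return None, max_rounds

# Toy channel: flips one (different) bit in each of the first two rounds.
def noisy_copy(info, rnd):
    copy = list(info)
    if rnd < 2:
        copy[rnd] ^= 1
    return copy

info = [1, 1, 0, 1]
est, rounds = harq_transmit(info, noisy_copy, check=lambda e: e == [1, 1, 0, 1])
```

In this toy run the first two decoding attempts fail and the third redundancy increment resolves both corrupted bits, mirroring how accumulated parity gradually lowers the effective code rate until decoding succeeds.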

We consider an exponential server queue accessed by many different flows. It is assumed that the source and destination information of each flow is available in the packet header. We compute upper and lower bounds on the sum timing capacity of this channel. We also discuss the implications of this result for a single flow, where packets can be "colored" to distinguish them, and the amount of covert information that can be transmitted over this channel in the presence of eavesdroppers who are allowed to observe the packet headers. We will also discuss the role of the scheduling discipline at the server on the amount of covert information that can be conveyed. Joint work with Xin Liu.

In this talk we present a methodology for providing QoS in wireless networks that deals effectively with the main peculiarity differentiating wireless networks from their wireline counterparts, namely the need to coordinate transmissions of neighboring interfering links. An end-to-end session may specify its bandwidth requirements as a constant bit rate, as a guaranteed minimum rate with best effort above that, as minimum necessary and maximum rates, or as a combination of the above. These requirements can be mapped to appropriate quantitative requirements on the rate of each session. Allocating rates according to the solution of the resulting maxmin optimization problem then ensures that the requirements of the users are satisfied, while any residual bandwidth is shared fairly among those sessions requesting best-effort additional bandwidth. We provide a distributed algorithm that performs virtual bandwidth allocation to the different flows based on a token mechanism. Scheduling of transmissions is then done based on the bandwidth that has already been virtually allocated. This algorithm is proven to be optimal, providing a maxmin fair bandwidth allocation. In fact, despite the large amount of recent work on QoS provisioning in wireless networks, it is the first provably fair bandwidth allocation algorithm for a wireless system that spans more than one node. It provides a general framework in which practical QoS algorithms can be pursued.
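The maxmin objective targeted by the allocation step can be sketched with the classical progressive-filling procedure. This is an illustrative sketch of the optimization target only (a single shared resource with per-session caps), not the distributed token-based algorithm of the talk:

```python
def maxmin_share(capacity, caps):
    """Progressive filling: repeatedly split the residual capacity equally
    among unsatisfied sessions, never exceeding a session's cap."""
    alloc = [0.0] * len(caps)
    active = set(range(len(caps)))
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)
        for i in list(active):
            give = min(share, caps[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if caps[i] - alloc[i] < 1e-12:
                active.discard(i)  # session satisfied; drop from sharing
    return alloc
```

For example, splitting capacity 10 among sessions capped at 2, 4, and 10 yields the maxmin fair allocation (2, 4, 4): capped sessions take what they can use and the residue is shared equally among the rest.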

In this talk we examine the class of Markov channels with feedback and side-information. We formulate the capacity optimization problem as a dynamic program and compute the capacity using tools from approximate dynamic programming.

Information theory views multiple access as a question of combating noise and interference, assigning the issue of bursty packet arrival to appropriate source coding and thereby forgoing any chance of discussion of delay. Network theory, on the other hand, views multiple access as a question of resource allocation and distributed scheduling, taking advantage of the bursty arrival of packets, but ignoring the issues of noise and interference. Neither view is entirely satisfactory. We present here a toy problem, a Gaussian multiple access channel with Poisson packet arrivals. A certain analogy between this problem and multi-server job queues suggests a certain transmission strategy for good delay performance. We also provide a lower bound on packet delay for any transmission strategy and any arrival process of a given rate. Surprisingly, the performance of the suggested transmission strategy is essentially identical to the lower bound at high arrival rates. Joint work with David Tse and Sibi Raj.

State Information at the Receivers

Various settings of the time-varying (fading) two-user broadcast channel (BC), with channel state information provided to the receivers only, are considered. In certain cases the capacity region is characterized and computed. For a two-user BC, the capacity region is known only if one channel is "superior" to the other, i.e., if the overall channel is "degraded" (R. Gallager; P. Bergmans), "less noisy" (K. Marton and J. Korner), or "more capable" (A. El Gamal), or if it has "deterministic components" (K. Marton; S. Gelfand and M. Pinsker). Moreover, the capacity region is also known if one of the private messages has rate zero, the so-called "degraded message set" case (J. Korner and K. Marton). For all other cases, the best known inner bound region is due to K. Marton, and the tightest known outer bound region is due to J. Korner and K. Marton. In this work, we consider the two-user single-antenna BC where the information-carrying signal fades and is corrupted by additive Gaussian noise. We assume that the instantaneous channel gains are known at the receivers but not at the transmitter. Clearly this channel, depending on the joint fading distribution, may or may not fall into one of the above-cited categories. Hence, in general, a single-letter expression for the capacity region is not available. We first derive an inner bound region (a special case of Marton's inner bound region) which is valid for any fading distribution and has the flavor of A. El Gamal's "more capable" capacity region. By specializing this inner bound to the case of Gaussian inputs we show that the receiver with the largest single-user capacity must decode the two messages jointly, and that successive cancellation might incur an inherent degradation. Then, by using the entropy power inequality, we show that Gaussian inputs exhaust J. Korner and K. Marton's outer bound region.
We conclude by stating the conditions, expressed in terms of the difference of the single-user capacities, under which our inner bound meets the outer bound region, thus yielding the capacity region. Joint work with Shlomo Shamai (Shitz), Technion, Israel.

We present an achievable region for the L-channel multiple description coding problem. This region generalizes previous results of El Gamal and Cover and of Zhang and Berger for L=2. It further generalizes three-channel results of Gray and Wyner and of Zhang and Berger. The region is shown to be the best possible for successive refinement on trees. We also present a new outer bound on the rate-distortion region for memoryless Gaussian sources with mean squared error distortion. The achievable region meets this outer bound for certain symmetric cases.

We characterize the sum rate of a class of Gaussian multiterminal source coding problems with quadratic distortion constraints. The key component of the proof is the identification of a multiple antenna broadcast channel that serves as a test channel.

We consider embedding of messages (data hiding) into i.i.d. source (host) sequences. As in Fridrich et al. [2002] we focus on the case where reconstruction of the source sequence is desired. First we consider the rate-distortion function corresponding to this setup. Then we generalize this result in two directions. (A) For the case where only partial reconstruction is required, we investigate the fundamental limits. The rate of the composite sequence is also taken into account. The optimal trade-off between embedding rate, composite rate, distortion between source sequence and composite sequence, and distortion between source sequence and restoration sequence is given by the admissible region, which we discuss here. (B) So far we have considered reconstruction based on the composite sequence, i.e., the lossless case. In the robust version of this setup the host sequence is conveyed to the decoder via a discrete memoryless channel. We determine the rate-distortion function for this setup as well. Although the coding theorems are interesting in their own right, some consequences of these results will receive extra attention. Joint work with Ton Kalker, Eindhoven University of Technology and Philips Research Laboratories, Eindhoven, The Netherlands.

Bit stuffing has been used in network protocols to prevent data from being interpreted as control information. This talk will explore various methods of bit stuffing. The rate loss of bit stuffing will be compared with that predicted by Shannon theory. Surprisingly, a modification of bit stuffing will be described which is 100% efficient, that is, whose rate equals that predicted by Shannon theory.
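As a concrete example of the basic technique, here is HDLC-style bit stuffing, which inserts a 0 after every run of five consecutive 1s so the data can never mimic a flag sequence. This is the textbook method whose rate loss exceeds the Shannon limit, not the 100%-efficient modification of the talk:

```python
def bit_stuff(bits, run=5):
    """Insert a 0 after every run of `run` consecutive 1s."""
    out, count = [], 0
    for b in bits:
        out.append(b)
        count = count + 1 if b == 1 else 0
        if count == run:
            out.append(0)   # stuffed zero breaks the run
            count = 0
    return out

def bit_unstuff(bits, run=5):
    """Remove the zeros inserted by bit_stuff."""
    out, count, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        count = count + 1 if b == 1 else 0
        if count == run:
            i += 1          # skip the stuffed zero
            count = 0
        i += 1
    return out
```

The overhead is the expected fraction of stuffed zeros; Shannon theory instead gives the capacity of the run-length constraint as the benchmark, and the gap between the two is the rate loss discussed in the talk.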

The central problem in multiple-access (many-to-one) communications is the design of mechanisms for resource sharing. The development of these mechanisms is influenced by the stochastic nature of data traffic as well as the choice of coding and modulation schemes. Traditionally, these two sets of issues have been analyzed in virtual isolation from each other: some bodies of literature focus mainly on network-layer throughput and delay, while others concentrate on physical-layer channel modelling, coding, and detection. We present an integrated, cross-layer view of multiple-access communications over fading channels. This approach combines basic communication limits with network quality-of-service issues such as throughput and delay. We consider a multiple-access model with random packet arrivals and queueing at the transmitters. Optimal coding is assumed at the physical layer, so that all rates in the information-theoretic capacity region are achievable. We then consider resource allocation policies which assign power and rate dynamically as a function of the joint fading state and the queue state. Policies which optimize network throughput and delay are characterized. Joint work with Aaron Cohen, Brown University.

The sum capacity of the Gaussian vector broadcast channel has recently been shown to be the saddle point of a Gaussian mutual information game, in which the transmitter maximizes the mutual information by choosing the best transmit covariance matrix subject to a power constraint, and the receiver minimizes the mutual information by choosing the worst noise covariance matrix subject to a diagonal constraint. This result can be established using either a decision-feedback equalization approach or a duality approach. However, a complete characterization of the saddle point is not yet available to the best of the author's knowledge. This paper gives such a characterization by combining ideas from both approaches. The main results are as follows: first, the decision-feedback equalization approach is generalized to vector broadcast channels with singular worst noise, and a generalized necessary and sufficient condition for the saddle point is given. Second, it is shown that the uplink-downlink duality can be derived directly from the saddle point condition, and the worst noise is directly related to the dual variables associated with the saddle point. Third, an efficient computational method for the worst noise covariance matrix is proposed. The proposed algorithm is based on a dual decomposition of the optimization problem. Fourth, it is shown that the worst noise is not unique, and all possible worst-noise covariance matrices are related to each other by a linear estimation matrix. Finally, it is shown that the duality approach can be generalized to Gaussian vector broadcast channels with arbitrary linear input covariance constraints.

Costa's "writing on dirty paper" [Costa83] is commonly viewed as an information-theoretic dual of the quadratic-Gaussian case of Wyner-Ziv rate-distortion with side information at the decoder [WZ76,Wyner78]. In both scenarios a similar detection/quantization step is done with incomplete information: in the former it is the decoder who is ignorant of the interference (which is available at, or is generated by, the transmitter), while in the latter it is the encoder who is ignorant of the correlated source (available at the decoder). And in both scenarios we observe a surprising fact: optimum coding schemes do not lose anything in performance relative to the complete-information case. In the former scenario the capacity is the same as if the interference were known also at the decoder (which is equivalent to not having interference at all); in the latter scenario the Wyner-Ziv rate-distortion function coincides with the conditional rate-distortion function (which is equivalent to predictive coding based on the correlated source). This dual nature is also reflected in geometric interpretations of the information-theoretic solutions [CPR99,SEG00,BCW01]. Hence, it is not surprising that efficient coding algorithms proposed for the two scenarios share similar concepts: nested codes, modulo-lattice arithmetic, syndrome coding, and coset codes; see [ZSE02] and the references therein. Inspired by this striking similarity, and by Shannon's famous comment about source-channel coding duality [Shannon1959], it is tempting to ask whether the scope of this duality goes beyond the quadratic-Gaussian case. Shannon's, Wyner-Ziv's, and Gelfand-Pinsker's classic papers [Shannon58,WZ76,GP83] allow one to analyze more general cases of source/channel coding with side information at the decoder/encoder. Although the general solutions are more involved, they exhibit a formal resemblance.
This led researchers to believe that duality indeed plays a significant role even in more general cases [CC,BCW,CPR,CM,...]. A particular example which seems to support this belief is the binary-symmetric Hamming version of the Costa and Wyner-Ziv problems. In both scenarios it is an example of a *positive* rate loss: the Wyner-Ziv rate-distortion curve is strictly above the conditional rate-distortion function [WZ76], while the modified Costa capacity-cost curve is strictly below the zero-interference capacity-cost curve [BCW,CPR]. Furthermore, similar "algebraic binning" versions of the coding schemes used in the quadratic-Gaussian case achieve optimum performance in both scenarios [ZSE02]. Does this example confirm, and hence complete, the story of duality? In this paper we provide a *negative* indication about the scope of duality, by demonstrating that the rate loss may actually behave very differently in the two scenarios. The rate loss in the Wyner-Ziv problem was investigated in [WZ96], where it was shown that for a difference distortion measure this loss is bounded for all source-side-information pairs and all distortion levels by a universal constant. For a quadratic distortion measure, this constant is half a bit. The "natural dual" of the quadratic Wyner-Ziv scenario is the generalized quadratic Costa problem: Y = X + S + Z, where the transmitter output X satisfies a power constraint P, S is an arbitrary interference known at the transmitter, and Z is some noise variable (not necessarily Gaussian). The rate loss in the Costa problem is the extra bit rate we could transmit if S were known at the receiver as well, or equivalently, if the channel were Y = X + Z. We observe that if the power of Z is bounded by P then the rate loss is at most half a bit. However, if Z is not restricted, we find Z's for which the rate loss is *arbitrarily large*. Thus, from a rate-loss viewpoint, duality breaks down. Joint work with Aaron Cohen, Brown University.
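The Costa rate loss discussed above can be written out explicitly. The following is a sketch consistent with the abstract's definitions, using the standard Gelfand-Pinsker capacity expression for the side-information term:

```latex
% Capacity with S known only at the transmitter (Gelfand--Pinsker form):
C_{\mathrm{GP}} \;=\; \max_{p(u,x\mid s):\,\mathbb{E}[X^2]\le P}
  \bigl[\, I(U;Y) - I(U;S) \,\bigr],
\qquad Y = X + S + Z .

% Rate loss relative to the interference-free channel Y = X + Z:
\Delta \;=\; \max_{p(x):\,\mathbb{E}[X^2]\le P} I(X;X+Z) \;-\; C_{\mathrm{GP}} .
```

In this notation the abstract's claims read: \(\Delta \le \tfrac{1}{2}\) bit whenever the power of \(Z\) is bounded by \(P\), while \(\sup_Z \Delta = \infty\) when \(Z\) is unrestricted.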


Document last modified on March 13, 2003.