DIMACS Workshop on Multimedia Streaming on the Internet
June 12 - 13, 2000
DIMACS Center, Rutgers University, Piscataway, NJ
- Organizers:
- Lixin Gao, Smith College, gao@cs.smith.edu
- Jennifer Rexford, AT&T Labs, jrex@research.att.com
Presented under the auspices of the Special Focus on Next Generation Networks Technologies and Applications and the Special Year on Networks.
Workshop Abstracts:
1.
The role of network proxies in supporting multimedia streaming
Antonio Ortega
University of Southern California
In this talk we present an overview of recent research in video
streaming, focusing on the potential benefits of introducing
video-specific functions at proxies. Examples of such functions include
caching, local retransmission, and the introduction/removal of error
protection. We will argue that in a
network which shows increasing heterogeneity in terms of bandwidth,
delay and reliability, proxies can serve as the preferred tool to
enable adaptation, without requiring an end-to-end involvement of the
application. Moreover, we will argue that application- and
media-specific functions should be preferred to generic ones. We will
briefly discuss three examples to illustrate the benefits of
media-specific, proxy-based adaptation. First, we consider video
caching where the cache has the ability to store only a selected set
of frames from a video sequence. We show how this increases the
robustness of delivery to the receiver, in cases where cache storage
is limited. Second, we consider an example of local error control via
ARQ, where the final link to the client is assumed to be lossy. We
show how ARQ combined with rate control at the proxy can minimize the
loss of video information at the receiver. Finally, we discuss the
application of multiple description coding (MDC) for error robustness,
and describe an MDC approach where redundancy can be easily
controlled. This allows a proxy to increase or decrease the level of
redundancy, depending on whether the link to the client is more or
less reliable than the rest of the network.
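As a rough illustration of the third example, the sketch below shows a
proxy choosing an MDC redundancy level from the loss rate it measures on
its last-hop link. The function name, the thresholds, and the notion of
a discrete redundancy share are our own assumptions, not details from
the talk.

    # Hypothetical sketch: thresholds and redundancy shares below are
    # illustrative assumptions, not the talk's actual scheme.
    def pick_redundancy_share(last_hop_loss_rate):
        """Return the fraction of the bitrate to spend on MDC redundancy.

        A lossier last hop calls for more redundancy; over a clean last
        hop the proxy can strip redundancy and spend the bits on quality.
        """
        if last_hop_loss_rate < 0.01:
            return 0.0       # near-lossless link: remove redundancy
        elif last_hop_loss_rate < 0.05:
            return 0.10
        elif last_hop_loss_rate < 0.15:
            return 0.25
        return 0.50          # very lossy link: maximum redundancy

    print(pick_redundancy_share(0.08))   # 0.25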
2.
Video Caching and Delivery Using Proxy Server for Reducing Bandwidth
Requirement over Wide Area Networks
Weihsiu Ma and David H.C. Du
Department of Computer Science and Engineering
University of Minnesota
Minneapolis, MN 55455
Due to the high bandwidth requirement and rate variability of compressed
video, delivering video across wide area networks (WANs) is a challenging
issue. Proxy servers have been used to cache web objects to alleviate the
load of the web servers and to improve client access time on the Internet.
We assume a central server is connected to a proxy server via a WAN, and
the proxy server can reach many clients via local area networks (LANs).
The proxy server allows partial video caching: a certain number of video
frames are stored in its storage so that the network bandwidth
requirement over the WAN can be reduced. Since there are two video
sources, video data has to be synchronized before playback at a client.
We identify two delivery models, the client-synchronization model and
the proxy-synchronization model, according to where the data is
synchronized. The two models differ in their complexity and in their
resource consumption at the proxy server and the client.
In this talk, we focus on a fundamental understanding of delivering a
single video under the two models, and we investigate the effectiveness
of partial video caching in reducing bandwidth. We study the tradeoffs
among client buffer, start-up delay, resource requirements at the proxy
server, and bandwidth requirements over the WAN for video transport.
Given a video delivery rate for the WAN, we propose several frame caching
selection algorithms to determine which frames of the video are to be
stored in the proxy server. In the client-synchronization model, a scheme
that partitions a video into segments (chunks of frames), with alternating
chunks stored in the proxy server, is shown to offer the best tradeoff.
In the proxy-synchronization model, caching the initial portion (prefix)
of a video is adopted, and this approach is proved to minimize the space
requirement in proxy storage.
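To make the alternating-chunk idea concrete, here is a minimal sketch,
assuming fixed-size chunks; the function and parameter names are ours,
and the paper's actual selection algorithms are more general.

    # Minimal sketch of alternating-chunk caching under our own
    # simplifying assumption of fixed-size chunks.
    def select_cached_frames(num_frames, chunk_size):
        """Return the indices of frames the proxy stores: every other chunk."""
        cached = []
        for start in range(0, num_frames, 2 * chunk_size):
            cached.extend(range(start, min(start + chunk_size, num_frames)))
        return cached

    # Example: for a 12-frame video with 2-frame chunks, the proxy stores
    # frames [0, 1, 4, 5, 8, 9] and the central server supplies the rest.
    print(select_cached_frames(12, 2))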
3.
Issues in Design and Evaluations of Multimedia Proxy Caching
Mechanisms for the Internet.
Reza Rejaie; AT&T Labs - Research
Haobo Yu; USC/ISI
Most of the existing Internet multimedia streaming applications are
based on the client-server architecture; a media server pipelines a
stream to a client through the network while the client plays back
the available portion of the stream. This client-server architecture
has two major limitations: 1) The quality of the delivered streams
is limited to the bottleneck bandwidth between the server and the
client. Thus a client with a high bandwidth local connectivity may
receive low quality streams due to a remote bottleneck. 2) Its
scalability is limited, in that it is difficult to support a large
number of concurrent high quality sessions due to network and server
load.
Multimedia proxy caching (MCaching) is a natural extension of the
client-server architecture that addresses both of the above limitations
simultaneously. The idea is to cache popular streams with maximum
deliverable quality at a proxy close to interested clients. Thus the
proxy can effectively maximize delivered quality and improve
scalability by significantly reducing the load on the network and the
server.
In this talk, we provide insight into the design and performance evaluation
of MCaching mechanisms. We argue that in addition to cache efficiency
(measured by hit ratio), MCaching introduces the notion of ``quality''
for delivered streams as a new dimension to Web caching. We first outline
the design of an efficient MCaching mechanism that consists of: 1) an
internal structure (e.g. layered) for cached streams, 2) a fine-grain
replacement mechanism based on fine-grain popularity and encoding-specific
information (e.g. utility), and 3) an online fine-grain prefetching
mechanism to further improve the delivered quality on-the-fly.
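As one illustration of what fine-grain replacement can mean, the
following sketch evicts a single layer of the least popular cached
stream rather than a whole object. The data layout and the
popularity-only victim choice are our simplifications; the mechanism
above also uses encoding-specific utility.

    # A sketch of fine-grain (layer-level) replacement under our own
    # simplifying assumptions; illustrates granularity, not the real policy.
    def evict_one_layer(cache):
        """cache: dict mapping stream id -> {'popularity': float, 'layers': int}.

        Drops one layer (not a whole stream) from the least popular
        stream that still has layers cached; returns (stream, layer dropped).
        """
        candidates = [(s, m) for s, m in cache.items() if m['layers'] > 0]
        victim, meta = min(candidates, key=lambda kv: kv[1]['popularity'])
        meta['layers'] -= 1
        return victim, meta['layers'] + 1

    cache = {'news':  {'popularity': 0.9, 'layers': 3},
             'movie': {'popularity': 0.2, 'layers': 2}}
    print(evict_one_layer(cache))   # ('movie', 2): top layer of the cold stream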
We then argue that the performance of MCaching mechanisms should be
collectively evaluated along two dimensions: caching efficiency and
delivered quality. Specifically, we identify a fundamental tradeoff
between these two dimensions. Our simulation results show that our
proposed MCaching design can adaptively exploit this tradeoff to improve
overall performance, whereas simple extensions of current Web caching
schemes enhance performance along only one dimension.
4.
Traffic Smoothing for Network Wide Streaming of Multimedia Traffic
JunBiao Zhang and Joseph Hui
Arizona State University
Traffic smoothing for video streams is considered
for network configurations such as single nodes, nodes in tandem, and
routes in parallel. We may place constraints on
the smoothed traffic, such as rate constraints or on-off
constraints. Optimal algorithms are derived for some of these cases,
minimizing delay, buffer requirements, or
maximum rates. Experimental results are also presented
for traffic smoothing. We also explore how these results
could affect practical implementations of video
streaming applications and their signaling requirements.
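For one of the simplest of these settings, a single node with no rate or
buffer constraint, the minimum constant transmission rate that never
starves the client is the maximum prefix average of the demand curve.
The sketch below computes it under our own assumptions (playback starts
immediately, unlimited client buffer, invented frame sizes); the optimal
algorithms in the talk cover far more general cases.

    # Peak-minimizing CBR rate, assuming immediate playback start and an
    # unlimited client buffer (our simplification).
    def min_constant_rate(frame_sizes):
        """Smallest constant rate that delivers every frame by its deadline."""
        cum, best = 0.0, 0.0
        for t, size in enumerate(frame_sizes, start=1):
            cum += size
            best = max(best, cum / t)  # rate needed for frames 1..t on time
        return best

    # A bursty 6-frame trace (bits per frame slot):
    print(min_constant_rate([100, 100, 100, 1200, 100, 100]))  # 375.0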
5.
Decentralized movie distribution in the Internet
Carsten Griwodz and Michael Zink
Darmstadt University of Technology, Germany
After the failures of initial video-on-demand developments, the rapid growth
of the Internet is generating renewed interest in VoD-like applications.
Intranet VoD is successfully following the track of centrally managed video
distribution systems. For public VoD, we consider it likely that such an
approach would fail as it did some years ago. Rather, decentralized
distribution systems that are organized in a similar way to the existing
video rental store infrastructure seem better suited to adaptable
growth. In this talk, we present one consistent distribution infrastructure
based on caching and the results that we have produced so far in
investigating the unsolved issues of the infrastructure. Specifically, this
concerns protocol support, support for copyright violator tracing, and the
efficiency of the distribution system. A protocol suite that allows low
overhead reliable transfer of data into cache servers is introduced. In the
absence of multicast-capable watermarking schemes, an alternative,
straightforward, personalized movie marking scheme
is offered for discussion. Finally, the efficiency gains that can be achieved
by an interaction of caching strategies and new distribution mechanisms are
presented.
6.
On Content Delivery Networks for Streaming Media
David Shur
AT&T Labs
We describe our ongoing project on a content
distribution network (CDN) for streaming media. CDNs support the delivery
of streaming media in an efficient and scalable manner. Multimedia sources
send a single stream to the CDN.
Based on the location and number of end-users, the CDN replicates the stream
(typically via a tree topology) as needed. Our architecture consists of an
overlay network of servers which exploit the functionality of IP Multicast
where available. Where IP Multicast is not available, the overlay servers
utilize unicast forwarding to reach end-systems. The servers also provide
packet recovery protocols which enable high perceived quality of service
through real-time recovery of lost packets. We compare our system with
emerging commercial CDNs, and discuss the challenges in this area.
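A toy sketch of the forwarding decision described above, with invented
addresses and names: an overlay server relays each packet via IP
multicast where the downstream network supports it, and falls back to
per-client unicast copies otherwise.

    # Illustrative only: addresses, ports, and the multicast_ok flag are
    # assumptions, not details of the system described in the talk.
    import socket

    def forward_packet(packet, sock, multicast_ok, group, clients):
        """Send one media packet downstream from an overlay server."""
        if multicast_ok:
            sock.sendto(packet, group)       # one copy reaches all receivers
        else:
            for addr in clients:             # replicate over unicast
                sock.sendto(packet, addr)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    forward_packet(b'frame-0', sock, multicast_ok=False,
                   group=('224.1.1.1', 5004),
                   clients=[('10.0.0.2', 5004), ('10.0.0.3', 5004)])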
7.
Selecting among Replicated Multicast Video Servers
Mostafa H. Ammar
College of Computing
Georgia Institute of Technology
Atlanta, GA
Server replication is often used to improve the scalability of a service.
One of the important factors in the efficient utilization of replicated
servers is the ability to direct clients to a server according to some
optimality criteria. Most server replication and selection work to date
has focused on traditional unicast services. We will first motivate the
need to replicate multicast video servers and argue that the rationale is
somewhat different from the one used to motivate unicast server replication.
We will then show some results from on-going work aimed at developing a
framework and a set of algorithms and protocols for multicast server
selection.
8.
Frame-Based Periodic Broadcast and Fundamental Resource Tradeoffs
Subhabrata Sen
University of Massachusetts, Amherst
The Internet is witnessing a rapidly increasing load of continuous
media traffic in the form of streaming audio and video. Video streams
typically have high transmission bandwidth requirements, and exhibit
burstiness on multiple time scales, making it expensive to deliver
multimedia content.
In this talk, we explore fundamental resource tradeoffs in periodic
broadcast, a technique for reducing network transmission bandwidth
requirements for streaming a popular video to multiple asynchronous
clients. We consider a class of frame-based periodic broadcast
schemes which assign a fixed transmission bandwidth to each frame in
the video, and continuously transmit each frame at this rate. The
model accommodates both CBR and VBR streams. We describe a novel
broadcast scheme that minimizes the transmission bandwidth overhead
under client buffer constraints. We also consider the problem of using
a single transmission scheme to satisfy clients with heterogeneous
resource constraints, and present a heuristic client reception scheme
to jointly minimize the client playback startup delay and client
buffer requirements. The talk concludes with results from
extensive evaluations that explore the tradeoffs among the network
transmission bandwidth requirement and client resources (buffer,
reception bandwidth, and playback startup delay).
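A hedged sketch of the frame-based model follows: assuming frame i is
rebroadcast cyclically at a fixed rate and must be fully received by its
playback instant w + i/F (startup delay w, frame rate F), the worst-case
reception time size/rate yields the per-frame rate below. Variable names
and the example trace are ours, not the paper's.

    # Per-frame rates for frame-based periodic broadcast, under the
    # assumption that cyclic transmission lets a client gather a whole
    # frame in exactly size/rate time from any starting point.
    def per_frame_rates(frame_sizes, startup_delay, fps):
        """Minimum fixed transmission rate for each frame (bits/s)."""
        rates = []
        for i, size in enumerate(frame_sizes):      # i = 0 is the first frame
            deadline = startup_delay + i / fps      # playback time of frame i
            rates.append(size / deadline)
        return rates

    sizes = [40_000] * 300                          # 10 s of CBR video at 30 fps
    rates = per_frame_rates(sizes, startup_delay=1.0, fps=30)
    # Server bandwidth as a multiple of the playback rate:
    print(sum(rates) / (sum(sizes) / (len(sizes) / 30)))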
9.
Algorithms for On-Demand Stream Merging
Amotz Bar-Noy
AT&T Shannon Labs
Richard E. Ladner
University of Washington and AT&T Shannon Labs
As the Internet grows so does the desire for on-demand streams
of many types: movies, songs, news stories, stock quotes,
and others. The popularity of a specific stream may be so high
that multicasting may be the only way to satisfy the demand.
In addition, clients requesting a stream will want service as quickly
as possible. This may require repeated multicasts of the same stream
to satisfy the delay guarantees. Stream merging has the potential to
help solve the bandwidth problems created by heavy, low-delay demand
for the same stream. In this talk we describe the stream merging
technique and how it can be used to reduce bandwidth requirements at
the stream server. We describe a new quadratic off-line algorithm for
minimizing bandwidth. We describe the optimal solution for the fully
loaded case, where streams are requested at unit time intervals.
We analyze the approximation ratio for oblivious off-line algorithms that
only know the number of requests and not their arrival times.
Finally, we briefly describe a new approach to on-line stream merging
based on our optimal oblivious off-line algorithm. It turns
out that the Fibonacci numbers play an important role in the analysis.
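The following toy cost model conveys the basic merging intuition under
our own simplifications: a client arriving t time units after the
nearest full stream began can receive both streams at once and merge
after t units, so its own stream costs only t. The algorithms in the
talk (including the quadratic off-line one) choose merge targets far
more carefully, and allow merged streams to be targets themselves.

    # Toy greedy merging cost model; not the authors' algorithm.
    def greedy_merge_cost(arrivals, L):
        """Total server transmission time: merge each request into the most
        recent full stream when that is cheaper than starting fresh."""
        total, last_full = 0, None
        for t in sorted(arrivals):
            if last_full is not None and t - last_full < L:
                total += t - last_full   # short catch-up stream, then merge
            else:
                total += L               # start (and keep) a full stream
                last_full = t
        return total

    arrivals = [0, 1, 2, 3, 50, 51]
    print(greedy_merge_cost(arrivals, L=40))   # 40+1+2+3 + 40+1 = 87
    print(len(arrivals) * 40)                  # 240 without merging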
10.
TCP-like flow control for multimedia streaming using TCP
emulation at receiver (TEAR)
Injong Rhee
North Carolina State University
Congestion and flow control is an integral part of any Internet data
transport protocol. It is widely accepted that the congestion avoidance
mechanisms of TCP have been one of the key contributors to the success
of the Internet. However, TCP is ill-suited to real-time multimedia
streaming applications. Its bursty transmission, and abrupt and frequent
wide rate fluctuations cause delay jitters and sudden
quality degradation of multimedia applications. For asymmetric networks
such as wireless networks, cable modems, ADSL, and satellite networks,
transmitting feedback for (almost) every packet received as it is done in
TCP causes congestion in the reverse path, causing feedback losses and
delays. In this environment, TCP may severely under-utilize the forward
path throughput. Use of multicast further complicates the problem;
TCP-like frequent feedback from each receiver to the sender in a
large-scale multicast session causes well-known scalability limitations,
such as acknowledgment implosion.
I have developed a new flow control approach for multimedia streaming,
called TCP emulation at receivers (TEAR). TEAR shifts most flow control
mechanisms to the receiver. In TEAR, a receiver does not send to the sender
the congestion signals detected in its forward path but rather processes
them immediately to determine its own appropriate receiving rate. TEAR
can determine this rate using congestion signals observed at the
receiver, such as packet arrivals, packet losses, and timeouts.
These signals are used to emulate the TCP sender's flow control functions
at receivers including slow start, fast recovery, and congestion
avoidance. The emulation allows receivers to estimate a TCP-friendly rate
for the congestion conditions observed in their forward paths.
TEAR also smoothes its estimates of steady-state TCP throughput by
filtering out noise, and the smoothed estimate is used to adjust the
receiving rate. Therefore, TEAR-based flow control can adjust receiving
rates to a TCP-friendly rate without actually modulating the rates to
probe for spare bandwidth, or to react to packet losses directly. Thus,
the perceived rate fluctuations at the application are much smoother
than in TCP.
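A heavily simplified sketch of the receiver-side idea, not the actual
TEAR protocol: the receiver updates an emulated TCP congestion window on
each packet event, converts it to a rate, and smooths the result before
(infrequently) reporting it to the sender. The constants are
illustrative, and slow start is omitted for brevity.

    # Receiver-side TCP emulation sketch; constants and names are ours.
    class ReceiverRateEstimator:
        def __init__(self, rtt, alpha=0.1):
            self.cwnd = 1.0           # emulated TCP window, in packets
            self.rtt = rtt            # assumed measured round-trip time
            self.alpha = alpha        # EWMA smoothing weight
            self.smoothed_rate = 0.0

        def on_packet(self, lost):
            if lost:
                self.cwnd = max(1.0, self.cwnd / 2)  # emulate fast recovery
            else:
                self.cwnd += 1.0 / self.cwnd         # congestion avoidance
            rate = self.cwnd / self.rtt              # packets per second
            self.smoothed_rate += self.alpha * (rate - self.smoothed_rate)
            return self.smoothed_rate                # reported infrequently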
A unicast version of TEAR has been implemented. In this talk, I will
describe the implementation of TEAR, examine its performance in NS
simulations and Internet experiments, and compare it with that of other
TCP-friendly flow control techniques. Our
preliminary tests indicate that TEAR shows superior fairness to TCP with
significantly lower rate fluctuations than TCP. TEAR's sensitivity
to feedback interval is very low, so that even under high feedback
latency, TEAR flows exhibit acceptable performance in terms of fairness,
TCP-friendliness, and rate fluctuations.
11.
The Design and Implementation of a Media Independent
Streaming Service for Stored Video Applications
Wu-chi Feng
The Ohio State University
In this talk, we describe our work on delivering high-quality video
content over best-effort networks (such as the Internet) for stored
video streams. The goal of this work is two-fold. First, we are interested
streams. The goal of this work is two-fold. First, we are interested
in using algorithms that adapt stored video sources efficiently to
the available network resources, avoiding high packet losses and
congestion collapse. Our approach advocates the use of TCP as the
transport mechanism for stored video delivery. In addition, our
delivery technique uses a priority-based algorithm that helps smooth
the frame rates delivered to the end user effectively. Second, we have
designed a media independent streaming service to show the viability of
our approach. By providing this media-independent interface, we hope
to catalyze future video-based applications such as scientific
visualization.
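One way to picture a priority-based ordering of this kind (our
illustration, not the authors' algorithm): send frames
coarsest-temporal-spacing first, so that whatever subset arrives in time
is evenly spread across the timeline and the delivered frame rate
degrades smoothly.

    # Illustrative priority ordering: lowest temporal resolution first.
    def priority_order(num_frames):
        """Yield frame indices from coarsest spacing to finest."""
        sent = set()
        step = num_frames
        while step >= 1:
            for i in range(0, num_frames, step):
                if i not in sent:
                    sent.add(i)
                    yield i
            step //= 2

    print(list(priority_order(8)))   # [0, 4, 2, 6, 1, 3, 5, 7]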
12.
Scaling Up Reliability for Broadband Video
Jorg Nonnenmacher
Castify Networks
The Internet is the number one broadcast medium of the future.
In a system where live and on-demand audio/video should reach
millions of people, thousands of network components are involved.
While the Internet was engineered around basic fault-tolerance
paradigms, the complexity of current video
networking technology widens the gap between
rich functionality and system reliability.
We show an increase of over 48,000% in mean time to failure for a
distributed video system compared with the standard solution.
13.
Adaptive Streaming of Stored Layered Video over Lossy Channels
Srihari Nelakuditi Zhi-Li Zhang Sandeep C. Rao
University of Minnesota, Minneapolis
In recent years, one of the most popular Internet applications has been
web-based audio and video playback, in which stored video is streamed
from a server to a client upon request. Rigid playback deadlines coupled
with resource constraints make video delivery a challenging task. Video
smoothing techniques reduce the bandwidth requirement by using the
client buffer for prefetching. However, when both network bandwidth and
client buffer are limited, it may not be possible to deliver
full-quality video. In such a situation, it is desirable to minimize the
degradation in video quality while operating within the resource
constraints. Layered encoding has been proposed to provide finer control
over video quality: the video signal is split into layers, and a prefix
of these layers is chosen such that the resource constraints are met.
However, it is not feasible to precompute the optimal number of layers,
since network conditions vary continually. The main concern is therefore
how to choose the ideal number of layers adaptively, based on the
prevailing network conditions.
In our work, we address this layer selection problem for the case of
stored layered video delivery over lossy networks such as wireless
networks. We propose layer selection schemes that use knowledge of the
bandwidth and buffer requirements of the stored video to deliver
smoother-quality video.
One of the problems in assessing the performance of a video delivery
scheme is the lack of a good metric that captures the user's perception
of video quality. In general, the more detail in the played video, the
better its quality. However, it is generally agreed that it is visually
more pleasing to watch a video with consistent, albeit lower, quality
than one with highly varying quality. Thus, a good metric should capture
both the amount of detail per frame and its uniformity across frames. In
layered video
delivery, ideally we would like to have a high mean and a low variance
in the number of layers per frame played. We devise a metric called
"mean layer run" which is defined as the number of layers a user is
expected to see continuously when the video is watched for a given
observation period starting at any arbitrary frame. This metric thus
accounts for both mean and variance and hence measures both detail and
uniformity of the video playback. In our work we use this metric to
measure the perceived quality of the delivered video for evaluating the
performance of various video delivery schemes.
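Below is a hedged implementation of the metric as we read the
definition: for every window of W frames, the number of layers seen
continuously is the minimum layer count within the window, and the
metric averages this over all starting frames. The paper's exact
formulation may differ.

    # "Mean layer run" under our reading of the definition above.
    def mean_layer_run(layers_per_frame, window):
        """layers_per_frame: list of per-frame layer counts actually played."""
        n = len(layers_per_frame)
        runs = [min(layers_per_frame[s:s + window])
                for s in range(n - window + 1)]
        return sum(runs) / len(runs)

    steady = [3, 3, 3, 3, 3, 3]        # constant 3 layers
    jumpy  = [5, 1, 5, 1, 5, 1]        # same mean, high variance
    print(mean_layer_run(steady, 3))   # 3.0
    print(mean_layer_run(jumpy, 3))    # 1.0: variability is penalized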
We propose a MINimal Layer Discard (MINLD) algorithm that maximizes
quality, under given network bandwidth and client buffer constraints, by
selectively discarding higher layers so as to minimize the likelihood
of future lower layers arriving late, thereby increasing the overall
quality of the delivered video. We develop an online variant of this
offline algorithm, called Finite Horizon based Minimal Layer Discard
(FHMLD), for adaptive layer selection. We apply this scheme to a
wireless setting where the channel bandwidth is fixed and known but the
loss rate varies, with retransmissions used for error recovery. Video
delivery under the FHMLD scheme can be summarized as follows:
periodically estimate the loss rate from observed channel conditions;
estimate the effective bandwidth and effective delay from the measured
loss rate; selectively discard layers based on the effective bandwidth
and delay; and discard packets that would in any case arrive late. Our
simulations show that FHMLD maintains consistent-quality playback better
than a simple greedy layer selection scheme. We are currently extending
our scheme to the case where both bandwidth and loss are unknown and
varying.
14.
Optimal Streaming of Layered Video
Despina Saparilla
Institut Eurecom
This paper presents a model and theory for streaming layered video. We
model the bandwidth available to the streaming application as a stochastic
process whose statistical characteristics are unknown a priori. The random
bandwidth models short term variations due to congestion control (such as
TCP-friendly conformance). We suppose that the video has been encoded
into a base and an enhancement layer, and that to decode the enhancement
layer the base layer has to be available to the client. We make the
natural assumption that the client has abundant local storage and attempts
to prefetch as much of the video as possible during playback. At any
instant of time, starvation or partial starvation can occur at the client
in either of the two layers. During periods of starvation, the client
applies video error concealment to hide the loss. We study the dynamic
allocation of the available bandwidth to the two layers in order to
minimize the impact of client starvation. For the case of an
infinitely-long video, we find that the optimal policy takes on a
surprisingly simple and static form. For finite-length videos, the optimal
policy is a simple static policy when the enhancement layer is deemed at
least as important as the base layer. When the base layer is more
important, we design a threshold-policy heuristic that switches between
two static policies. We provide numerical results comparing the
performance of the no-prefetching, static, and threshold policies.
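A sketch of a threshold policy of this kind, with invented parameters:
while the client's prefetched base-layer reserve is below a threshold,
all bandwidth goes to the base layer; above it, the bandwidth is split
by a fixed ratio.

    # Illustrative threshold policy; parameter names and values are ours.
    def allocate(bandwidth, base_reserve_secs, threshold_secs, static_base_share):
        """Split available bandwidth between base and enhancement layers."""
        if base_reserve_secs < threshold_secs:
            return bandwidth, 0.0              # protect the base layer
        base = static_base_share * bandwidth
        return base, bandwidth - base

    print(allocate(1000.0, base_reserve_secs=2.0,  threshold_secs=5.0,
                   static_base_share=0.6))     # (1000.0, 0.0)
    print(allocate(1000.0, base_reserve_secs=12.0, threshold_secs=5.0,
                   static_base_share=0.6))     # (600.0, 400.0)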
15.
Striping Doesn't Scale: How to Achieve Scalability for
Continuous Media Servers with Replication
Leana Golubchik
University of Maryland
Multimedia applications place high demands for QoS, performance, and
reliability on storage servers and communication networks. These, often
stringent, requirements make design of cost-effective and scalable
continuous media (CM) servers difficult. In particular, the choice of
data placement techniques can have a significant effect on the scalability
of the CM server and its ability to utilize resources efficiently.
In the recent past, a great deal of work has focused on ``wide'' data striping.
Another approach to dealing with load imbalance problems is replication.
The appropriate compromise between the degree of striping and the degree of
replication is key to the design of scalable CM servers. Thus, the main focus
of this work is a study of scalability characteristics of CM servers as a
function of tradeoffs between striping and replication.
16.
Semantic Transformation of Multimedia Streams inside an
Active Network
Maximilian Ott
C&C Research Laboratories, NEC USA, Inc.
Providing multimedia services to a large number of heterogeneous clients
raises the fundamental problem of incompatibility between the native
format of the media on the server, and the optimal format for each client.
This brings up the need for transforming a multimedia stream, not only to
reduce bit-rate, but also to enhance the acceptability of its play-out. We
discuss the implications of performing such media stream transformations
at different positions in the network, and propose a programmable network
architecture for providing this service. We also describe an experimental
testbed we built to determine the feasibility of our ideas.
17.
eSeminar Project Overview
Martin G. Kienzle
IBM Research
The research objective of the eSeminar project is to provide a test bed for
experiments with advanced multimedia technology in the operational setting
of a complete end-to-end system, and to provide an integration point for
many multimedia technologies. The operational goal of the system is to
improve communication in the IBM Research Division by recording video of
talks and meetings, as well as collateral information, and making it
available to all Research employees worldwide. In order to make this
system truly usable, we want to automate all aspects of recording and
distribution to radically reduce operational complexity and cost, and to
let the system completely "fade into the background". A second usability
goal is to facilitate focused video access through indexing and other meta
data, and to improve ease of access, to make the information available and
usable to a very large number of people.
One of the greatest inhibitors of the deployment of digital video
information systems has been that many technologies, such as capture,
encoding, application design, hosting, and distribution have been developed
as individual technologies, but have rarely been integrated in systems that
have sufficient functional breadth and operational stability to evaluate
the individual technologies in a broader context. eSeminar is an
integration point to evaluate advanced media technologies in an operational
setting that is to be used in the everyday lives of people. More
specifically, the eSeminar objectives are:
- Create a video library of events for everybody at any IBM Research
  site to watch on-demand.
- Support a large number of videos for on-demand viewing.
- Support a large number of simultaneous on-demand streams.
- Transmit videos of events live as appropriate.
- Make slides and other collateral data available where possible.
- Support sophisticated indexing tools and allow users focused access to
  the material: turn video from a sequential medium into a direct-access
  medium.
eSeminar is operational at the Watson, Zurich, Austin, Almaden, and Tokyo
sites, with work ongoing to bring the Beijing and Haifa labs into the
system. Each site has a VideoCharger server to stream the videos locally
on the LAN, and a web site to provide access to the videos and to
collateral information. The eSeminar system is comprised of three workflow
stages:
Production: The videos are recorded in MPEG-1 on portable video servers.
In a specially equipped conference room, we use an automated
camera management system for operator-less video recording. When
appropriate, speakers' slides are captured as images and are
synchronized with the video. Our next research target is to automate
the production and recording to the point that a user has only to fill
some information into a web form, click a button, and the recording
proceeds automatically without further human intervention.
Postproduction: The videos are transcoded into low bit rate videos
(MPEG-1, CIF, 10 fps, 200 kbps video, 64 kbps audio). This low bit
rate is required to avoid overloading the LANs at the various sites.
Next, we are using scene change recognition to create thumbnails of
video scenes for directly accessing scenes of the video. Then, we are
integrating thumbnails of recorded slides into the video story board.
Once an MPEG-1 file exists, the story board creation and the web page
generation are well automated.
We are now looking to integrate other indexing methods such as
audio-based speech recognition and keyword indexing. After indexing
off-line is complete we expect to move to on-line indexing. This will
allow users instantaneous access to meta data. For instance, somebody
who joins a talk late can use the capture data to catch up.
Distribution & hosting: The videos, the collateral material, and the
related web pages are transmitted to the other sites using ftp. The
content of the remote servers is managed from a central site. We have
prototyped a content distribution management system that uses usage
frequency as well as storage constraints to manage the media on
distributed media servers. We expect this system to be integrated into
eSeminar to automate content management.
This is a brief overview of the current function, and of our near-term
research objectives. In addition, we will present our experience in
operating the system, and we will discuss problems we are currently
addressing.