DIMACS/DyDAn Workshop on Internet Tomography

May 14 - 16, 2008
DIMACS/DyDAn Center, CoRE Building, Rutgers University

Organizers:
Jasleen Kaur, University of North Carolina, jasleen at cs.unc.edu
Don Towsley, University of Massachusetts, towsley at cs.umass.edu
Walter Willinger, AT&T Labs-Research, walter at research.att.com
Presented under the auspices of the DIMACS Special Focus on Algorithmic Foundations of the Internet, the DIMACS Special Focus on Communication Security and Information Privacy, and the Center for Dynamic Data Analysis (DyDAn).

Abstracts:


Brice Augustin, Paris LIP6

Title: Measuring load-balanced paths in the Internet

Tools to measure Internet properties usually assume the existence of a single path from a source to a destination. However, load-balancing capabilities, which create multiple active paths between two end-hosts, are available in most contemporary routers. Under load balancing, classic traceroute fails to find all paths. In this work, we show how to craft packet headers to control the paths packets take under load balancing, on both the forward and return paths. We also specify an adaptive, stochastic probing algorithm to report all paths between a source and a destination. We deploy our algorithm, probing from 15 sources towards 68,000 destinations. Our results show that the traditional concept of a single network path between hosts no longer holds. For instance, 39% of the source-destination pairs in our traces traverse a load balancer. However, our characterization of load-balanced paths indicates a low level of path diversity. Paths are often disjoint for only a few hops and have comparable delays.
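
To make the mechanism concrete, here is a small, self-contained Python sketch (not the authors' tool) of why header manipulation exposes load-balanced paths: we model a per-flow load balancer that picks a next hop by hashing the 5-tuple, so sweeping a single header field such as the source port enumerates the parallel next hops that classic traceroute would miss. The next-hop addresses and hash function are illustrative assumptions.

```python
# Sketch: how varying one header field can expose multiple load-balanced
# next hops. Hypothetical model: we assume each router picks a next hop by
# hashing the flow 5-tuple, as per-flow load balancers commonly do.
import hashlib

NEXT_HOPS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # assumed parallel links

def next_hop(src, dst, sport, dport, proto="tcp"):
    """Deterministic per-flow choice: same 5-tuple -> same path."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    h = int(hashlib.md5(key).hexdigest(), 16)
    return NEXT_HOPS[h % len(NEXT_HOPS)]

# Classic traceroute keeps the 5-tuple fixed and sees only one next hop;
# sweeping the source port (as a Paris-traceroute-style prober does)
# reveals all of them.
seen = {next_hop("1.2.3.4", "5.6.7.8", sport, 33434) for sport in range(33000, 33064)}
print(sorted(seen))  # all three next hops appear after a few dozen probes
```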


Paul Barford, University of Wisconsin

Title: Shedding New Light on Network Behavior via DNS Query Analysis

The Domain Name System (DNS) is one of the most important and widely used services in the Internet. In addition to translating hostnames into IP addresses for applications, DNS is now being used in a number of other ways, including content distribution, blacklisting services for spam checking, and covert channels for malicious traffic. In this talk, we consider the question of how DNS traffic monitoring can provide an important and useful perspective on network traffic in an enterprise. We will describe a new context-aware clustering methodology that is applied to DNS query-responses to generate the desired aggregates. Our method enables the analysis to be scaled to expose the desired level of detail of each traffic type, and to expose their time-varying characteristics.

We implement our method in a tool we call TreeTop, which can be used to analyze and visualize DNS traffic in real time. We demonstrate the capabilities and utility of TreeTop using a set of DNS traces that we collected over a four-month period on our campus network. Our evaluation highlights both the coarse and fine levels of detail on network behavior that can be exposed by our method.


Genevieve Bartlett, USC

Title: Understanding Passive and Active Service Discovery

Increasingly, network operators do not directly operate computers on their network, yet are responsible for assessing network vulnerabilities to ensure compliance with policies about information disclosure, and tracking services that affect provisioning. Thus, with decentralized network management, service discovery becomes an important part of maintaining and protecting computer networks.

We explore two approaches to service discovery: active probing and passive monitoring. Active probing finds all services currently on the network, except services temporarily unavailable or hidden by firewalls; however, it is often too invasive, especially if used across administrative boundaries. Passive monitoring can find transient services, but misses services that are idle. We compare the accuracy of passive and active approaches to service discovery and show that they are complementary, highlighting the need for multiple active scans coupled with long-duration passive monitoring. We find passive monitoring is well suited for quickly finding popular services, finding servers responsible for 99% of incoming connections within minutes. Active scanning is better suited to rapidly finding all servers, which is important for vulnerability detection--one scan finds 98% of services in two hours, missing only a handful. External scans are an unexpected ally to passive monitoring, speeding service discovery by the equivalent of 9-15 days of additional observation. Finally, we show how the use of static or dynamic addresses changes the effectiveness of service discovery, both due to address reuse and VPN effects.


Chen-Nee Chuah, University of California at Davis

Title: Sampling Internet Traffic for Anomaly Detection

In this talk, we explore the use of traffic measurements from high-speed backbone links for anomaly detection. To cope with increasing link speeds, packet sampling techniques are often deployed by ISPs at their routers to reduce measurement overhead. This raises an important question: does sampling have a (negative) impact on the accuracy and effectiveness of anomaly detection, and if so, how can this effect be mitigated? We survey four different sampling schemes and their impact on two categories of anomaly detection schemes: (a) target-specific schemes (e.g., for detecting portscans and volume anomalies), and (b) traffic profiling algorithms (e.g., entropy-based profiling). Our studies show that the sampling process introduces fundamental bias in traffic features that are critical for anomaly detection, resulting in more false positives and a lower detection rate. This clearly demonstrates a need for better measurement techniques, since anomaly detection operates on a drastically different information region, which is often overlooked by existing traffic accounting methods that target heavy hitters. We will discuss alternate approaches to measuring traffic that can support a wider range of monitoring applications.
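
As a toy illustration of the bias discussed above (not the talk's experiments), the following Python sketch applies uniform packet sampling to a synthetic mix of heavy-hitter and mouse flows and shows how the flow-size entropy, a feature many detectors rely on, shifts under sampling; all traffic values are invented.

```python
# Sketch of sampling bias: uniform packet sampling at rate p systematically
# thins small flows, distorting features (here, flow-size entropy) that
# anomaly detectors rely on. Toy traffic, not ISP data.
import math, random

random.seed(1)
flows = [1000, 800] + [random.randint(1, 5) for _ in range(5000)]  # elephants + mice

def entropy(sizes):
    total = sum(sizes)
    return -sum(s / total * math.log2(s / total) for s in sizes if s > 0)

p = 0.01  # 1-in-100 packet sampling
sampled = [sum(1 for _ in range(s) if random.random() < p) for s in flows]
sampled = [s for s in sampled if s > 0]  # flows with no sampled packet vanish

print(f"flows seen: {len(flows)} -> {len(sampled)}")
print(f"entropy:    {entropy(flows):.2f} -> {entropy(sampled):.2f}")
```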


Karl Deng, Northwestern University

Title: Pong: Diagnosing Spatio-Temporal Internet Congestion Properties

The ability to accurately detect congestion events in the Internet and reveal their spatial (where they happen) and temporal (how long they last) properties would significantly improve our understanding of how the Internet operates. We present Pong, a novel measurement tool capable of effectively diagnosing congestion events over short timescales (~50ms or longer), and locating congested points on a unidirectional end-to-end path at the granularity of a single link. Pong performs well in detecting multiple congested links on the forward path and in decoupling congested links on the backward path. Once a congested link is detected, Pong traces the congestion status of that link. Its light-weight nature also makes Pong a competent continuous network monitoring tool rather than only an on-demand probing tool.

Pong (i) uses queuing delay as an indicator of congestion, and (ii) strategically coordinates router-targeted probes with end-to-end probes. Moreover, it (iii) has small measurement overhead, (iv) achieves high sampling frequency by sending probes to all intermediate nodes, including uncongested ones, (v) dramatically improves spatial detection granularity (i.e., from path segments to individual links) by using short-term congestion history, (vi) considerably enhances measurement quality by adjusting the probing methodology (e.g., sending 4-, 3-, or 2-packet probes) based on the observed path topology, and (vii) deterministically detects moments of its own inaccuracy.
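
A minimal sketch of the delay signal that point (i) refers to, under the common assumption that the minimum observed RTT approximates the no-queuing baseline; the RTT series and threshold are invented for illustration.

```python
# Minimal sketch of the signal Pong-style probing works from: queuing delay
# inferred as RTT minus the per-hop baseline (minimum) RTT.
# rtts_ms is a toy series; a real tool would timestamp its own probes.
def queuing_delays(rtts_ms):
    base = min(rtts_ms)                 # propagation + transmission floor
    return [r - base for r in rtts_ms]

rtts = [20.1, 20.3, 20.2, 24.9, 31.0, 20.2, 20.4]
qd = queuing_delays(rtts)
CONGESTION_MS = 4.0                     # assumed detection threshold
events = [i for i, d in enumerate(qd) if d > CONGESTION_MS]
print(qd, "congested samples:", events)
```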


Amogh Dhamdhere, Georgia Tech

Title: NetDiagnoser - Troubleshooting network unreachabilities using end-to-end probes and routing data

The distributed nature of the Internet makes it difficult for a single service provider to troubleshoot the disruptions experienced by its customers. We propose NetDiagnoser, a troubleshooting algorithm to identify the location of failures in an internetwork environment. First, we adapt the well-known Boolean tomography technique to work in this environment. Then, we significantly extend this technique to improve the diagnosis accuracy in the presence of multiple link failures, logical failures (for instance, misconfigurations of route export filters), and incomplete topology inference. In particular, NetDiagnoser takes advantage of rerouted paths, routing messages collected at one provider's network, and Looking Glass servers. We evaluate each feature of NetDiagnoser separately using C-BGP simulations on realistic topologies. Our results show that NetDiagnoser can successfully identify a small set of links that almost always includes the actual failed or misconfigured links.
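
For readers unfamiliar with Boolean tomography, the following hedged Python sketch shows its core inference step in the simplest form: each failed end-to-end path must contain at least one failed link, so a greedy algorithm repeatedly picks the link that explains the most unexplained failures. The topology and paths are toy inputs; NetDiagnoser itself adds rerouted paths, routing messages, and the other extensions described above.

```python
# Hedged sketch of the Boolean-tomography core: find a small set of links
# that "covers" (explains) every failed path.
def boolean_tomography(failed_paths):
    """failed_paths: list of sets of links; returns a small explaining set."""
    unexplained = [set(p) for p in failed_paths]
    suspects = set()
    while unexplained:
        # link appearing in the most still-unexplained failed paths
        best = max({l for p in unexplained for l in p},
                   key=lambda l: sum(l in p for p in unexplained))
        suspects.add(best)
        unexplained = [p for p in unexplained if best not in p]
    return suspects

paths = [{"A-B", "B-C"}, {"B-C", "C-D"}, {"B-C", "C-E"}]
print(boolean_tomography(paths))  # {'B-C'} explains all three failures
```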


Qi (Hawk) Ding, Boston University

Title: Compressed PCA with Application to Anomaly Detection

Automated methods of anomaly detection are useful tools in Internet security. One successful approach to anomaly detection is based on the use of the subspace method, grounded in principal component analysis (PCA), to find traffic that deviates from normal activity. As the size of the traffic data space increases, however, this approach becomes less computationally feasible. For example, a decision to monitor traffic not at, say, the PoP level, but rather the IP level, will entail an enormous increase in the dimensionality of the data. Random sketching is one tool that can be used to reduce dimensionality. Natural questions are whether subspace methods of anomaly detection can still be applied in a meaningful fashion to the sketched data, and how the results compare with those obtained by applying the same technique to the unsketched data. We present a combination of numerical and theoretical results in answer to these questions, showing that the application of principal component analysis to randomly sketched data and unsketched data can in fact yield similar results, under appropriate conditions.
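
The following numerical sketch (Python/NumPy, with synthetic low-rank traffic rather than real measurements) illustrates the comparison the talk studies: PCA applied to randomly sketched data versus the original data, with agreement measured through the principal angles between the recovered subspaces.

```python
# Does PCA on randomly sketched traffic recover (approximately) the same
# dominant subspace as PCA on the full data? Synthetic low-rank-plus-noise
# "traffic", not real traces.
import numpy as np

rng = np.random.default_rng(0)
T, N, k, m = 500, 2000, 3, 100          # time bins, IPs, true rank, sketch dim
X = rng.normal(size=(T, k)) @ rng.normal(size=(k, N)) + 0.1 * rng.normal(size=(T, N))

S = rng.normal(size=(N, m)) / np.sqrt(m)   # random Gaussian sketching matrix
Y = X @ S                                  # sketched data: N dims -> m dims

def top_left_singvecs(A, k):
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :k]

U_full, U_sk = top_left_singvecs(X, k), top_left_singvecs(Y, k)
# cosines of principal angles between the two k-dim subspaces (1.0 = identical)
overlap = np.linalg.svd(U_full.T @ U_sk, compute_uv=False)
print(np.round(overlap, 3))  # near 1.0: the subspaces essentially agree
```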


Constantine Dovrolis, Georgia Tech

Title: Spectral Probing, Crosstalk and Frequency Multiplexing in Internet Paths

We present an end-to-end active probing methodology that creates frequency-domain signals in IP network paths. The signals are generated by periodic packet trains that cause short-lived queueing delay spikes. Different probers can be multiplexed in the frequency-domain on the same path. Further, a signal that is introduced by a "prober" in one path can cause a crosstalk effect, inducing a signal of the same frequency into another path (the "sampler") as long as the two paths share one or more bottleneck queues. Applications of the proposed methodology include the detection of shared store-and-forward devices among two or more paths, the creation of covert channels, and the modulation of voice or video periodic packet streams in less noisy frequencies. In this talk we will focus on the first application. Our goal is to detect shared bottleneck(s) between a "sampler" and one or more "prober" paths. We present a spectral probing methodology as well as the corresponding signal processing/detection process. The accuracy of the method has been evaluated with controlled and repeatable simulation experiments, and it has been tested on several Internet paths.
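
As a rough illustration of the detection side (a sketch under simplified assumptions, not the authors' signal-processing pipeline): if a prober induces periodic delay spikes at a known frequency on a shared bottleneck, the sampler's delay series should exhibit a spectral peak at that frequency.

```python
# Sketch: detect a prober's injected frequency f0 in the sampler's delay
# series via the FFT. Synthetic delays; f0 and sample rate are assumptions.
import numpy as np

fs, f0, n = 100.0, 5.0, 1024            # sample rate (Hz), probe freq, samples
t = np.arange(n) / fs
noise = 0.5 * np.random.default_rng(1).normal(size=n)
crosstalk = 1.0 * (np.sin(2 * np.pi * f0 * t) > 0.95)  # short periodic spikes
delays = noise + crosstalk

spectrum = np.abs(np.fft.rfft(delays - delays.mean()))
freqs = np.fft.rfftfreq(n, d=1 / fs)
peak = freqs[spectrum.argmax()]
print(f"dominant frequency: {peak:.2f} Hz")  # ~5 Hz => shared bottleneck likely
```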


Nick Duffield, AT&T Labs-Research

Title: Measuring One-Way Loss with GRE Encapsulated Multicast Probing

This talk describes some techniques for estimating one-way loss from a measurement host to network routers, which exploit commonly implemented features on commercial routers and do not require any new router capabilities. The work addresses the problem of scalably performing one-way loss measurements across specific network paths. This is joint work with Lee Breslau, Yu Gu and Shubho Sen.


Brian Eriksson, University of Wisconsin

Title: Network Discovery from Passive Measurements

Understanding the Internet's structure through empirical measurements is important in the development of new topology generators, new protocols, traffic engineering, and troubleshooting, among other things. While prior studies of Internet topology have been based on active (traceroute-like) measurements, passive measurements of packet traffic offer the possibility of a greatly expanded perspective of Internet structure with much lower impact and management overhead. In this talk we will describe a methodology for inferring network structure from passive measurements of IP packet traffic. We describe algorithms that enable 1) traffic sources that share network paths to be clustered accurately without relying on IP address or autonomous system information, 2) topological structure to be inferred accurately with only a small number of active measurements, 3) missing information to be recovered, which is a serious challenge in the use of passive packet measurements. We demonstrate our techniques using a series of simulated topologies and empirical data sets.


Nick Feamster, Georgia Tech

Title: Challenges in Making Tomography Practical

This talk presents a systematic investigation of the practical issues that arise when using network tomography to monitor failures. We design and implement Doppler, a network monitoring system that probes end-to-end paths using ping and traceroute. Doppler can detect failures among a set of monitors, but also between monitors and arbitrary ``destinations''. It incorporates a number of techniques to make tomography work in current IP networks: algorithms to minimize the number of active probes needed to detect a failure; and new mechanisms to reduce the impact of measurement noise, and to disambiguate losses on forward paths from those on reverse paths. We use VINI to study Doppler in a controlled environment, as well as to identify and analyze all factors that affect detection accuracy and speed. We also evaluate Doppler in two real-world networks. Remarkably, our evaluations show that reducing the number of measurements can improve the accuracy of fault diagnosis, if the smaller set of measurements is chosen carefully.

Joint work with Yiyi Huang, Renata Teixeira, and Christophe Diot.


John Heidemann, USC/ISI

Title: Studying the IPv4 Address Space through Census and Survey

This talk will evaluate methods to study the Internet (IPv4) address space. Unlike prior studies that focus on the routers and links that connect the Internet, we ping the whole address space and so identify mostly edge hosts. We apply statistical population sampling, using censuses to walk the entire Internet address space, and surveys to frequently probe a fraction of that space. While the number of Internet hosts is very large, and many are hidden behind firewalls or in private address space, there is much to be learned from examining the population of visible hosts, those with public unicast addresses that respond to messages. We evaluate the accuracy of our methodology, then use it to evaluate the utilization of public, visible addresses. Our approach can also characterize the size of router access control lists.


Alefiya Hussain, SPARTA, Inc.

Title: Spectral Analysis of Denial of Service Attacks

Network security is an arms race: attack tools constantly evolve to evade defenses. We believe insight into attack stream dynamics--the interaction of malicious packets with the network--will aid in the development of next-generation defense systems. We have applied statistical signal processing and pattern recognition techniques to study the fundamental nature of periodic behavior in attack traffic and have developed novel algorithms to analyze and interpret this behavior. In this talk, I will present two such algorithms.

First, we have developed an attack fingerprinting system that provides the ability to identify repeated attack scenarios--the combination of attack hosts and attack tools--by extracting unique spectral signatures from the attack stream. Such an attack fingerprinting system can aid in the investigation and criminal or civil prosecution of attackers.

Second, we have developed a wavelet-based attack detection system that allows detection of low-bandwidth attacks in aggregate network traffic. The main disadvantage of current detection techniques is that they analyze traffic on a per-packet or per-flow basis and compare it with predefined behavior definitions that are determined either statistically or by a rule set. Our detection system analyzes aggregate network traffic and identifies and isolates inherent attack features within it for rapid attack detection.

While packet header content is easy for an attacker to spoof, attack stream dynamics are hard to forge without reducing attack effectiveness, which makes the above algorithms more robust to attackers' evasion efforts. Although this talk focuses primarily on the analysis of DoS attacks, these algorithms can be applied to a large range of network analysis problems.
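
To illustrate the wavelet-based detection idea behind the second algorithm (an independent sketch with synthetic traffic, assuming the PyWavelets library, not the authors' system), the example below embeds a low-rate periodic attack in Poisson background counts and examines per-scale detail energy.

```python
# Sketch: a wavelet decomposition of an aggregate packet-count series
# concentrates a low-rate periodic attack's energy at a few scales.
# Requires PyWavelets (pip install pywavelets); the signal is synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(2)
counts = rng.poisson(100, size=1024).astype(float)   # benign background
counts[::16] += 40                                   # low-bandwidth periodic attack

coeffs = pywt.wavedec(counts - counts.mean(), "db4", level=6)
energy = [float(np.sum(c ** 2)) for c in coeffs[1:]]  # detail bands, coarse->fine
for lvl, e in enumerate(energy, start=1):
    print(f"band {lvl} (coarse->fine): energy {e:,.0f}")
# the bands carrying the 16-sample attack period and its harmonics gain
# energy relative to a noise-only trace
```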


Howard Karloff, AT&T Labs-Research

Title: A Double Hitting Set Approximation Algorithm For Network Monitoring

I will discuss a fast approximation algorithm for the network monitoring problem: given a graph, OSPF weights on its edges, and a set B of "branch nodes", choose a smallest set M of "monitoring nodes" such that for each branch node b in B, there are two distinct monitoring nodes m1, m2 in M for which the set of all nodes on all OSPF shortest paths from b to m1 is disjoint from the set of all nodes on all OSPF shortest paths from b to m2 (except for b itself). This problem appears in network tomography and content distribution. While the problem is provably hard to solve, even approximately (unless P=NP), we give a fast algorithm called Double Hitting Set which, on our testbed, usually finds the exact integral optimum. What's more, it shows how to find a good lower bound on the optimum, proving that Double Hitting Set's output is (usually) very close to the optimum.
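
The covering condition itself is easy to state in code. The sketch below (using networkx on a toy weighted graph; not part of the Double Hitting Set algorithm) checks whether a candidate pair of monitors is valid for a branch node by testing node-disjointness of the shortest-path node sets.

```python
# Sketch of the feasibility condition: monitors m1, m2 are a valid pair for
# branch node b if the nodes on all shortest paths b->m1 and b->m2 overlap
# only in b itself. Toy graph; edge weights play the role of OSPF weights.
import networkx as nx

def path_nodes(G, b, m):
    """All nodes on all (weighted) shortest paths from b to m."""
    return {v for p in nx.all_shortest_paths(G, b, m, weight="weight") for v in p}

def valid_pair(G, b, m1, m2):
    shared = path_nodes(G, b, m1) & path_nodes(G, b, m2)
    return shared == {b}

G = nx.Graph()
G.add_weighted_edges_from([("b", "x", 1), ("x", "m1", 1),
                           ("b", "y", 1), ("y", "m2", 1), ("x", "y", 5)])
print(valid_pair(G, "b", "m1", "m2"))  # True: the two paths meet only at b
```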

The talk assumes no knowledge of approximation algorithms. This is joint work with many people.


Eric Kolaczyk, Boston University

Title: Distributed Spatial Anomaly Detection

Detection of traffic anomalies is an important problem that has been the focus of considerable research. Recent work has shown the utility of spatial detection of anomalies via crosslink traffic comparisons. Three advances may be identified that are needed to make such methods more useful and practical for network operators. First, anomaly detection methods should avoid global communication and centralized decision making. Second, nonparametric anomaly detection methods are needed to augment current parametric approaches. And finally, such methods should not just identify possible anomalies, but should also annotate each detection with some probabilistic qualifier of its importance. In this talk, I will describe a framework that simultaneously advances the current state of the art on all three fronts, through a blending of concepts from distributed communication, generalized quantile estimation, and false discovery rates. Overall, our framework provides network operators with an anomaly detection methodology that is distributed, effective, and easily interpretable. This is joint work with Parminder Chhabra, Clayton Scott, and Mark Crovella.
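
On the third front, the standard Benjamini-Hochberg procedure gives the flavor of how detections can be annotated with a false-discovery-rate guarantee; this is a generic sketch with invented p-values, not the talk's distributed method.

```python
# Benjamini-Hochberg sketch: given per-link anomaly p-values, flag the
# largest set whose expected false discovery rate stays below alpha.
def benjamini_hochberg(pvals, alpha=0.05):
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m, cutoff = len(pvals), -1
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            cutoff = rank
    return sorted(order[:cutoff]) if cutoff > 0 else []

pvals = [0.001, 0.20, 0.004, 0.65, 0.03, 0.92]
print(benjamini_hochberg(pvals))  # indices of detections with FDR <= 0.05
```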


Ramana Rao Kompella, Purdue University

Title: Towards automated fault localization in backbone networks

Despite its tremendous success, the Internet remains fragile with frequent outages and service disruptions. Fault management remains an integral component of overall network management. Three basic steps exist in fault management--detection, localization and repair. While detection is relatively straightforward, localization involves indirect inference mechanisms (e.g., tomographic approaches) that can severely increase the downtime of failures. In this talk, I will describe two examples of such problems that I have worked on in the past--IP-over-optical failures and MPLS blackholes. I will talk about the architecture of the systems we have built that will allow automated fault localization in both these contexts. Further, I will present some ongoing work in devising the right router primitives to isolate faults directly without requiring any tomographic approaches.


Ritesh Kumar, UNC

Title: Practical Beacon Placement for Link Monitoring Using Network Tomography

Recent interest in using tomography for network monitoring has motivated the issue of whether it is possible to use only a small number of probing nodes (beacons) for monitoring all edges of a network in the presence of dynamic routing. Past work has shown that minimizing the number of beacons is NP-hard, and has provided approximate solutions that may be fairly suboptimal. We use a two-pronged approach to compute an efficient beacon set: (i) we formulate the need for, and design algorithms for, computing the set of edges that can be monitored by a beacon under all possible routing states; and (ii) we minimize the number of beacons used to monitor all network edges. We show that the latter problem is NP-complete and use various approximate placement algorithms that yield beacon sets of sizes within a factor of 1+ln(|E|) of the optimal solution, where E is the set of edges to be monitored. Beacon set computations for several Rocketfuel ISP topologies indicate that our algorithms may reduce the number of beacons yielded by past solutions by more than 50% and are, in most cases, close to optimal.
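
The 1+ln(|E|) factor is the classic guarantee of greedy set cover, which the following hedged Python sketch applies to beacon placement: given (hypothetical) sets of edges each candidate beacon can monitor under all routing states, repeatedly choose the beacon covering the most uncovered edges.

```python
# Greedy set cover sketch matching the 1+ln(|E|) guarantee cited above.
# The coverage sets are invented; computing them is step (i) of the talk.
import math

def greedy_beacons(coverage, edges):
    """coverage: beacon -> set of edges it can monitor under any routing."""
    uncovered, chosen = set(edges), []
    while uncovered:
        best = max(coverage, key=lambda b: len(coverage[b] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("some edges are unmonitorable")
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

edges = {"e1", "e2", "e3", "e4", "e5"}
coverage = {"b1": {"e1", "e2", "e3"}, "b2": {"e3", "e4"}, "b3": {"e4", "e5"}}
beacons = greedy_beacons(coverage, edges)
print(beacons, "bound:", 1 + math.log(len(edges)))  # within (1+ln|E|) * OPT
```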


Aleksandar Kuzmanovic, Northwestern University

Title: What Lies Beneath: Understanding Internet Congestion

Independent of where congestion may take place in the Internet, it can critically affect Internet applications such as voice-over-IP or peer-to-peer file sharing. However, while congestion at the Internet edge typically arises due to bottlenecks at a connection's last mile, congestion in the core is more complex. This is because it may depend upon internal network policies or complex inter-AS relationships. Consequently, excessive congestion in the core can reveal poorly-engineered network policies or non-cooperative inter-AS relationships.

In this talk, I will present the design and implementation of a large-scale triggered monitoring system that we recently deployed. The goal of the system is to quantify, locate, track, correlate, and analyze congestion events happening in the Internet core. Next, I will present the properties of concurrently monitored hot spots across the Internet that we obtained using this system. Contrary to common belief, we find that congestion events in the core can be highly correlated and such correlated congestion events can span across up to three neighboring ASes. We provide a root-cause analysis to explain this phenomenon and discuss implications of our findings.


Ashwin Lall, University of Rochester

Title: A Data Streaming Algorithm for Estimating Entropies of OD Flows

Entropy has recently gained considerable significance as an important metric for network measurement. Previous research has shown its utility in clustering traffic and detecting traffic anomalies. While measuring the entropy of the traffic observed at a single point is now well-studied, a far more interesting (and harder) problem is measuring the entropy of the traffic between every origin-destination pair. In this talk we will describe the first solution to this problem. Our sketch builds upon and extends the stable distribution sketch of Indyk, with significant additional innovations. We show that our data streaming algorithm can work with link speeds of up to 10 million packets per second using commodity CPU/memory at a reasonable cost. Our algorithm is shown to be very accurate in practice via simulations, using traffic traces collected at a tier-1 ISP backbone link.
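
For reference, the quantity being approximated is the empirical entropy of the packet counts within one OD pair's traffic, computed exactly below with per-flow state; the point of the sketch-based streaming algorithm is to avoid exactly this per-flow state at line rate. Flow IDs are invented.

```python
# The target quantity: empirical entropy of flow packet counts for one OD
# pair. Exact computation needs per-flow counters, which sketching avoids.
import math
from collections import Counter

def empirical_entropy(packets):
    """packets: iterable of flow ids; H = -sum (n_i/S) log2 (n_i/S)."""
    counts = Counter(packets)
    total = sum(counts.values())
    return -sum(n / total * math.log2(n / total) for n in counts.values())

od_pair_packets = ["f1"] * 900 + ["f2"] * 90 + [f"f{i}" for i in range(3, 13)]
print(f"{empirical_entropy(od_pair_packets):.3f} bits")
```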

This is joint work with Chuck Zhao, Mitsu Ogihara, Oliver Spatscheck, Jia Wang, and Jim Xu.


Haakon Larsen, Princeton University

Title: Sensitivity of PCA for Traffic Anomaly Detection

Detecting anomalous traffic is a crucial part of managing IP networks. In recent years, network-wide anomaly detection based on Principal Component Analysis (PCA) has emerged as a powerful method for detecting a wide variety of anomalies. We show that tuning PCA to operate effectively in practice is difficult and requires more robust techniques than have been presented thus far. We analyze a week of network-wide traffic measurements from two IP backbones (Abilene and Geant) across three different traffic aggregations (ingress routers, OD flows, and input links), and conduct a detailed inspection of the feature time series for each suspected anomaly. Our study identifies and evaluates four main challenges of using PCA to detect traffic anomalies: (i) the false positive rate is very sensitive to small differences in the number of principal components in the normal subspace, (ii) the effectiveness of PCA is sensitive to the level of aggregation of the traffic measurements, (iii) a large anomaly may inadvertently pollute the normal subspace, (iv) correctly identifying which flow triggered the anomaly detector is an inherently challenging problem.


Mauro Maggioni, Duke University

Title: Multiscale analysis on graphs via diffusion

We discuss a method for multiscale analysis of and on graphs, based on coarsening a random walk on a graph at increasingly large time scales. This construction is motivated by the analysis of the geometry of large data sets that can be modeled as graphs, the approximation and prediction of functions on graphs, and problems in visualization.
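
A tiny numerical sketch of the coarsening idea (illustrative only): raising the random-walk operator T = D^-1 A to dyadic powers T, T^2, T^4, ... smooths a function on the graph over larger and larger neighborhoods, which is the scale hierarchy the multiscale construction builds on.

```python
# Dyadic powers of the random-walk operator diffuse a spiky function on a
# toy 4-node graph toward the stationary distribution.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
T = A / A.sum(axis=1, keepdims=True)   # row-stochastic random-walk operator

f = np.array([1.0, 0.0, 0.0, 0.0])     # a spiky function on the graph
for k in range(4):
    print(f"scale 2^{k}:", np.round(f @ np.linalg.matrix_power(T, 2 ** k), 3))
```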


George Michailidis, University of Michigan

Title: Some recent advances in active network tomography

We discuss active network tomography, a statistical inverse problem that deals with reconstructing link-level information from end-to-end path-level measurements obtained by actively probing the network. The primary application in this case is estimation of quality-of-service parameters such as loss rates and delay distributions. The talk discusses the statistical issues involved, with emphasis on identifiability, fast estimation and inference of the parameters of interest, as well as design of the necessary probing experiments and applications to network monitoring.
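
A classic identifiability example conveys the flavor of the inverse problem (a hedged sketch, not the talk's methodology): in a two-receiver multicast tree with independent per-link losses, the shared link's pass rate is recoverable from end-to-end outcomes alone as p1*p2/p12.

```python
# Two-receiver loss tomography: p1 = A_s*A_1, p2 = A_s*A_2, p12 = A_s*A_1*A_2,
# so p1*p2/p12 = A_s. We verify by Monte Carlo with assumed true rates.
import random

random.seed(3)
A_shared, A_1, A_2 = 0.95, 0.90, 0.85   # true per-link pass rates (unknown)

n = 200_000
r1 = r2 = both = 0
for _ in range(n):
    s = random.random() < A_shared       # probe survives shared link?
    o1 = s and random.random() < A_1     # ...then branch to receiver 1
    o2 = s and random.random() < A_2
    r1 += o1; r2 += o2; both += (o1 and o2)

p1, p2, p12 = r1 / n, r2 / n, both / n
print(f"estimated shared-link pass rate: {p1 * p2 / p12:.3f} (true {A_shared})")
```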


Eugene Ng, Rice University

Title: Internet Delay Space

Understanding the characteristics of the Internet delay space (i.e., the all-pairs set of static round-trip propagation delays among edge networks in the Internet) is important for the design of global-scale distributed systems. For instance, algorithms used in overlay networks are often sensitive to violations of the triangle inequality and to the growth properties within the Internet delay space. Moreover, since designers of distributed systems often rely on simulation and emulation to study design alternatives, they also need a realistic model of the Internet delay space.

In this talk, I will present our recent work on analyzing and modeling Internet delay space properties, and on leveraging better understanding of Internet delay space properties to improve the design of distributed systems. In particular, first, I will present a delay space synthesizer tool called DS2 that can be used to synthesize realistic delay space data for simulations and emulations at a scale where direct measurement and storage are impractical. Second, I will present a triangle inequality violation (TIV) alert mechanism that can inform distributed systems to avoid the pitfalls caused by TIVs. Finally, I will also present an interesting connection between Internet delay space properties and network locations, and show that this connection can be exploited to mitigate the Sybil attack problem in peer-to-peer systems.
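
For concreteness, a triangle inequality violation in a delay matrix is easy to test for; the sketch below scans a toy 4-node RTT matrix for triples where a detour is shorter than the direct path, the kind of violation the TIV alert mechanism is designed to flag.

```python
# TIV check: for delay matrix d, the triple (i, j, k) is a violation when
# the detour through k beats the direct path. Toy symmetric RTTs in ms.
from itertools import permutations

def tivs(d):
    n = len(d)
    return [(i, j, k) for i, j, k in permutations(range(n), 3)
            if i < j and d[i][k] + d[k][j] < d[i][j]]

delay = [[0, 100, 20, 60],
         [100, 0, 30, 50],
         [20, 30, 0, 40],
         [60, 50, 40, 0]]
print(tivs(delay))  # [(0, 1, 2)]: 20 + 30 < 100, a large violation
```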


Michael Rabbat, McGill University

Title: Compressed Network Monitoring

This talk considers the problem of efficient end-to-end monitoring of path-level performance metrics in communication networks. Our goal is to minimize the number of measurements or monitoring stations required to maintain an acceptable estimation accuracy. We present a framework based on diffusion wavelets and non-linear estimation. Our procedure involves the development of a diffusion wavelet basis that is adapted to the monitoring problem. This basis exploits spatial and temporal correlations in the measured phenomena to provide a compressible representation of network-wide performance parameters. We then adopt nonlinear estimation techniques to generate predictions for the performance on unmeasured paths, exploiting the compressibility of the aforementioned representation. We demonstrate how our estimation framework can improve the efficiency of end-to-end delay estimation in IP networks and reduce the number of hardware monitors required to track bit-error rates in all-optical networks.


Mauricio G. C. Resende, AT&T Labs-Research

Title: A random-keys genetic algorithm for node placement in path-disjoint network monitoring

We are given a network represented by a node set and a directed edge set and two subsets of nodes, one called the branch nodes, the other the monitoring nodes. For each branch-node / monitoring-node pair, we are given a set of one or more paths from the monitoring node to the branch node. We want to find a smallest cardinality subset of the monitoring nodes such that, for each branch node, this subset has at least two monitoring nodes whose paths to the branch node are node disjoint (except for the branch node). We first present a greedy algorithm for this problem. Then, a random-keys genetic algorithm that uses the greedy algorithm is described. Computational results on randomly generated test instances that resemble real-life communication networks show that the genetic algorithm finds optimal solutions for over 80% of the instances and near-optimal solutions for the remaining ones.
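
The defining trick of random-keys genetic algorithms is the decoder, sketched below under a toy model of the problem: a chromosome of keys in [0,1) is sorted to yield a monitor priority order, and a greedy decoder adds monitors until every branch node has a node-disjoint pair, so crossover and mutation always yield feasible solutions. The instance and decoder details are hypothetical.

```python
# Random-keys decoding sketch: sort monitors by key, then greedily add them
# until each branch node is covered by a node-disjoint pair (given here).
import random

random.seed(4)
monitors = ["m1", "m2", "m3", "m4"]
# branch node -> monitor pairs whose paths to it are node-disjoint (assumed)
disjoint_pairs = {"b1": {("m1", "m2"), ("m3", "m4")},
                  "b2": {("m1", "m3"), ("m2", "m4")}}

def decode(keys):
    """Add monitors in key order until every branch node has a valid pair."""
    order = [m for _, m in sorted(zip(keys, monitors))]
    chosen = set()
    for m in order:
        chosen.add(m)
        if all(any(set(p) <= chosen for p in pairs)
               for pairs in disjoint_pairs.values()):
            return chosen
    return chosen

chromosome = [random.random() for _ in monitors]
print(decode(chromosome))  # a 3-monitor set covering both branch nodes
```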


Irina Rish, IBM

Title: Blind Source Separation in Network Tomography

We consider the problem of diagnosing performance problems in distributed systems and networks given end-to-end performance measurements provided by test transactions, or probes. Common techniques for problem diagnosis, such as the codebook approach and network tomography, usually assume a known dependency (e.g., routing) matrix that describes how each probe depends on the system's components. However, collecting full information about routing and/or probe dependencies on all system components can be very costly, if not impossible, in large-scale, dynamic networks and distributed systems. We propose an approach to problem diagnosis and dependency discovery from end-to-end performance measurements in cases when the dependency/routing information is unknown or partially known. Our method is based on the Blind Source Separation (BSS) approach, which aims to reconstruct unobserved input signals and the mixing-weights matrix from observed mixtures of signals. In particular, we apply sparse non-negative matrix factorization techniques, which appear particularly well suited to the problem of recovering network bottlenecks and the dependency (routing) matrix, and show promising experimental results on several realistic network topologies.
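
The following hedged sketch (synthetic data, scikit-learn's generic NMF rather than the authors' sparse variant) shows the shape of the formulation: end-to-end measurements Y are factored as Y ≈ WH with nonnegative factors, where W plays the role of the unknown dependency/routing structure and H the unobserved component activity.

```python
# BSS-as-NMF sketch: probes observe Y ~ R @ H; factoring Y with nonnegativity
# constraints can recover routing-like structure. Synthetic data; real
# recovery needs sparsity constraints and care.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
n_probes, n_links, T = 20, 4, 300
R = (rng.random((n_probes, n_links)) < 0.3).astype(float)   # hidden routing matrix
H_true = rng.exponential(1.0, size=(n_links, T))            # link delay over time
Y = R @ H_true + 0.01 * rng.random((n_probes, T))           # end-to-end probes

model = NMF(n_components=n_links, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(Y)          # estimated probe-to-component dependencies
H = model.components_               # estimated component activity over time
print("reconstruction error:", round(model.reconstruction_err_, 3))
```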


David Rincon Rivera, UPC, Spain

Title: Multiresolution analysis of traffic matrices and network topologies

The development of the Diffusion Wavelet (DW) approach has opened the way to a multiresolution analysis (MRA) of any function defined on a graph. We are studying how this MRA can provide information about topological properties of certain types of connectivity structures (e.g., scale-free, small-world or HOT-inspired). We have also extended the original 1-dimensional DW to the case of 2-dimensional functions (such as the case of the origin-destination traffic matrices) and have studied the multiresolution signature of traffic matrix data from real networks.


Kamil Sarac, University of Texas at Dallas

Title: Challenges in Building Internet Topology Maps from Traceroute-collected Path Traces

Understanding the topological characteristics of the Internet has been an important issue. In general, Internet topology measurement studies involve three phases: (1) topology collection, (2) topology construction, and (3) topology analysis. Topology construction refers to the task of converting the raw data obtained in the topology collection phase into a corresponding topology map. Compared to the studies in topology collection and topology analysis, the amount of work in topology construction is fairly limited. In this talk, we present our ongoing work on topology construction in the context of router-level Internet topology measurement studies.


Vyas Sekar, CMU

Title: cSamp - A System for Network-Wide Flow Monitoring

Critical network management applications increasingly demand fine-grained flow level measurements. However, current flow monitoring solutions are inadequate for many of these applications. In this talk, I will present the design, implementation, and evaluation of cSamp, a system-wide approach for flow monitoring. The design of cSamp derives from three key ideas: flow sampling as a router primitive instead of uniform packet sampling; hash-based packet selection to achieve coordination without explicit communication; and a framework for distributing responsibilities across routers to achieve network-wide monitoring goals while respecting router resource constraints. We show that cSamp achieves much greater monitoring coverage, better use of router resources, and enhanced ability to satisfy network-wide flow monitoring goals compared to existing solutions.
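
The hash-based coordination idea can be shown in a few lines (an illustrative sketch, not cSamp's implementation): all routers apply the same hash to the flow key, and each samples only flows whose hash lands in its assigned subrange, so responsibilities are disjoint without any explicit communication. The ranges below are invented.

```python
# cSamp-style coordination sketch: a shared hash maps each flow into [0,1);
# disjoint per-router subranges make sampling responsibilities disjoint.
import hashlib

def flow_hash(flow_key: str) -> float:
    h = int(hashlib.sha1(flow_key.encode()).hexdigest()[:12], 16)
    return h / float(16 ** 12)          # roughly uniform in [0, 1)

# hypothetical hash ranges assigned to routers along one OD pair's path
assignment = {"r1": (0.0, 0.4), "r2": (0.4, 0.7), "r3": (0.7, 1.0)}

def sampled_by(flow_key):
    x = flow_hash(flow_key)
    return [r for r, (lo, hi) in assignment.items() if lo <= x < hi]

for fk in ["10.0.0.1:1234->10.0.1.9:80", "10.0.0.2:5555->10.0.1.9:443"]:
    print(fk, "->", sampled_by(fk))     # exactly one router samples each flow
```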


Harsh Singhal, University of Michigan

Title: Optimal Experiment Design for State Space Models with Application to Sampled Network Data

We introduce a linear state space model for the estimation of network traffic flow volumes from sampled data. Further, we formulate the problem of obtaining optimal sampling rates under router resource constraints as an experiment design problem. Theoretically, it corresponds to the problem of optimal design for estimation of conditional means in state space models. We present the associated convex programs for a simple approach to this problem and propose several exciting open problems in this area. The usefulness of the approach in the context of network monitoring is illustrated through an extensive numerical study.
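
A scalar Kalman filter conveys the state-space estimation setting (a hedged sketch with invented parameters, not the talk's model): the flow volume evolves as a random walk and is observed only through a sampled count whose gain is the sampling rate.

```python
# State-space sketch: x_t (flow volume) follows a random walk; we observe
# y_t = p * x_t + noise, where p is the packet sampling rate. A scalar
# Kalman filter tracks x_t. All parameters are illustrative.
import random

random.seed(5)
p = 0.05                     # packet sampling rate (observation gain)
q, r = 100.0, 2000.0         # process / observation noise variances (assumed)

x_true, x_est, P = 1000.0, 800.0, 1e4
for t in range(5):
    x_true += random.gauss(0, q ** 0.5)           # true flow volume drifts
    y = p * x_true + random.gauss(0, r ** 0.5)    # sampled measurement
    P += q                                        # predict
    K = P * p / (p * P * p + r)                   # Kalman gain
    x_est += K * (y - p * x_est)                  # update estimate
    P *= (1 - K * p)                              # update error variance
    print(f"t={t} true={x_true:7.1f} est={x_est:7.1f}")
```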


Joel Sommers, Colgate University

Title: Accurate and Efficient SLA Compliance Monitoring

Service level agreements (SLAs) define performance guarantees made by service providers, e.g., in terms of packet loss, delay, delay variation, and network availability. In this talk, we describe a new active measurement methodology, implemented in a tool called SLAm, to accurately monitor whether measured network path characteristics are in compliance with performance targets specified in SLAs. Specifically, (1) we describe a new methodology for estimating packet loss rate that significantly improves accuracy over existing approaches; (2) we introduce a new methodology for measuring mean delay along a path that improves accuracy over existing methodologies, and propose a method for obtaining confidence intervals on quantiles of the empirical delay distribution without making any assumption about the true distribution of delay; (3) we introduce a new methodology for measuring delay variation that is more robust than prior techniques; and (4) we extend existing work in network performance tomography to infer lower bounds on the quantiles of a distribution of performance measures along an unmeasured path, given measurements from a subset of paths. We demonstrate the accuracy and convergence properties of SLAm in a controlled laboratory environment using a range of background traffic scenarios and in one- and two-hop settings, and examine its accuracy improvements over existing standard techniques.


Patrick Thiran, EPFL, Switzerland

Title: Methods for network monitor placement and location of congested links

In this talk, I will review some recent results (i) on network-wide monitor placement for passive monitoring (joint work with Gion-Reto Cantieni, Chadi Barakat, Gianluca Iannaccone and Christophe Diot) and (ii) on congested link location using active monitoring techniques (joint work with Hung X. Nguyen).

(i) Network operators performing traffic measurements as part of their network management activities need monitoring tools that give accurate measurements while limiting their consumption of computing resources, which raises the following monitor placement problem: given a network where all links can be (passively) monitored, which monitors should be activated, and which sampling rate should be set on these monitors, in order to achieve a given measurement task with high accuracy and low resource consumption? We provide an optimal algorithm to solve this problem, and we study its performance on a backbone network.

(ii) End users who cannot directly access the network links must use end-to-end measurements in order to infer the variable(s) of interest of a given set of links, which requires solving a system of equations that relates these measurement outcomes to these variables. As this system of equations usually does not have a unique solution, current methods use the unrealistic assumption that all links have the same prior probability of being congested. We find that this assumption is not needed, because these probabilities can be uniquely identified from a small set of measurements by using properties of Boolean algebra. We can then use the learnt probabilities as priors to rapidly find the congested links at any time, with an order of magnitude gain in accuracy over existing algorithms.


Bo Zhang, Rice University

Title: Reachability Analysis and Verification in Enterprise Networks

Enforcing correct reachability is crucial for an enterprise network to achieve access control, privacy, security and so on. Many sophisticated mechanisms, such as ACLs, firewalls and NATs, have been developed to enforce the desired reachability. However, with the increasing complexity of current enterprise networks, it is becoming more and more challenging to configure the reachability correctly. Therefore, the ability to analyze and verify network reachability becomes very valuable for many tasks such as verifying the original intent of the network administrator and troubleshooting reachability problems. In this talk, I will first review the state of the art on network reachability analysis and verification, then I will introduce our idea of statistical reachability verification using active measurements.


Zhi-Li Zhang, University of Minnesota

Title: Internet Traffic Behavior Profiling and IP Gray Space Analysis for Detecting Suspicious Network Activities

Recent spates of cyber attacks and the frequent emergence of disruptive applications in the Internet have made it imperative to design techniques that can automatically extract, and make sense of, significant behavioral patterns from massive volumes of traffic data. In the first part of this talk, I will present a methodology for building comprehensive behavioral profiles of Internet traffic in terms of the communication patterns of end-hosts. This methodology relies on information-theoretic and data-mining techniques, and consists of significant cluster extraction, automatic behavior classification, and structural modeling for interpretive analyses. Using traffic data from the Internet core, I will demonstrate how this methodology enables us to identify canonical profiles as well as unusual or suspicious activities. I will also present some simple heuristics that can identify the most harmful sources of unwanted traffic based on behavior profiling.

In the second part of the talk, we study the scanning activities directed at a large campus network using a month-long NetFlow traffic trace. Based on the novel notion of IP gray space (namely, the collection of IP addresses within our campus network that are not assigned to any "active" host during a certain period of time), we identify and extract potential outside scanners and their associated activities. We then apply data mining and machine learning techniques to analyze the scanning patterns of these scanners and classify them into a few groups (e.g., focused hitters, random address scanners, blockwise random scanners). We apply a heuristic algorithm to extract the IP gray space in our campus network. Subsequently, we analyze the behavioral patterns, such as dominant activities and target randomness, of the gray space traffic for individual outside hosts. By correlating and contrasting the traffic towards IP gray addresses and live end hosts, we find that gray space traffic provides unique insight for uncovering the behavior, intentions, and threats of anomalous traffic towards live end hosts. Finally, we demonstrate the applications of gray space traffic for identifying email spam behavior and detecting malicious scanning and worm activities that successfully compromise end hosts.
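
As a minimal illustration of the gray-space extraction step (invented flow tuples, not the talk's trace or heuristics): campus addresses never seen sourcing outbound traffic in the window are treated as gray, and outside hosts touching several gray addresses become scanning suspects.

```python
# Gray-space sketch: campus addresses that never source traffic during the
# window are "gray"; outside hosts hitting many gray addresses look like
# scanners. Flow records here are invented (src, dst) tuples.
from collections import defaultdict

campus = {f"10.1.0.{i}" for i in range(1, 11)}          # assumed campus block
flows = [("10.1.0.1", "198.51.100.7"), ("10.1.0.2", "203.0.113.5"),
         ("192.0.2.9", "10.1.0.8"), ("192.0.2.9", "10.1.0.9"),
         ("192.0.2.9", "10.1.0.3"), ("203.0.113.5", "10.1.0.2")]

active = {src for src, dst in flows if src in campus}    # hosts that talk out
gray = campus - active

touches = defaultdict(set)
for src, dst in flows:
    if src not in campus and dst in gray:
        touches[src].add(dst)
suspects = {h for h, targets in touches.items() if len(targets) >= 2}
print("gray addresses:", sorted(gray)[:3], "... suspected scanners:", suspects)
```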

