Nina Amenta, University of Texas at Austin
Title: Blocked Randomized Incremental Constructions
The randomized incremental construction has been widely exploited in computational geometry to produce simple and optimal algorithms. But it has terrible memory access patterns, so that large computations suffer from thrashing. We propose modified insertion orders which alleviate thrashing, without sacrificing optimality. Joint presentation by Nina Amenta and Sunghee Choi.
Herve Bronnimann, Polytechnic University
Title: Towards Generic Software Components?
There are major obstacles to implementing geometric algorithms. One is the difficulty of implementing the geometric primitives, which get more complicated as the algorithms get more complex. This problem is compounded by the lack of robustness of implementations (the failure of floating point computations to produce consistent results). In the past decade, there has been increasing willingness within the community to cope with these problems. The CGAL library, among others, addresses both the framework and robustness problems. It proposes an implementation of many common algorithms, generic insofar as they are templated by a geometric class. There is, however, no single set of requirements for geometric classes (each algorithm defines its own).
In order to go to the next step, I propose that geometric classes be organized into a hierarchy of concepts, as a set of software requirements. The methodology of separating the development of concepts from their implementation is inspired by the generic programming paradigm. This approach has met with success in other fields of algorithm engineering (most notably the C++ Standard Template Library -- STL, in our field the CGAL library, and more recently the Boost Graph Library). It would enable one to say that algorithm A requires an AffineOrientedGeometry, without ambiguity as to what is meant by the affine oriented geometry concept. Adaptors could be provided for various libraries that offer the adequate functionality, allowing interoperability.
My talk will focus on three techniques that have the potential to realize this level of genericity: concept checking (e.g. affine, metric geometry), geometry class adaptors (duality, liftings), and algorithmic visitors (an object passed to an algorithm that decouples the core of an algorithm---e.g. sweeping a set of intersecting segments, from the output issues---e.g. counting intersections, or building a planar map). I will introduce a few interesting uses of visitors (e.g. graphical debugging) which I haven't seen before.
Sunghee Choi, University of Texas at Austin
Title: Blocked Randomized Incremental Constructions
The randomized incremental construction has been widely exploited in computational geometry to produce simple and optimal algorithms. But it has terrible memory access patterns, so that large computations suffer from thrashing. We propose modified insertion orders which alleviate thrashing, without sacrificing optimality. Joint presentation by Nina Amenta and Sunghee Choi.
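The round structure of such a blocked order can be sketched as follows (a BRIO-style sketch; the function name, the promotion probability, and the grid-sort locality heuristic are illustrative choices, not necessarily the authors' exact scheme):

```python
import random

def blocked_insertion_order(points, seed=0, grid=16):
    """Produce a blocked insertion order for 2D points in [0,1)^2.

    Each point is promoted to an earlier round with probability 1/2
    per level, so round sizes roughly double from round to round.
    Within a round, points are sorted by a coarse grid cell so that
    consecutive insertions touch nearby parts of the growing structure.
    """
    rng = random.Random(seed)

    def round_of():
        r = 0
        while rng.random() < 0.5:
            r += 1
        return r

    tagged = [(round_of(), p) for p in points]
    order = []
    for r in range(max(t for t, _ in tagged), -1, -1):  # smallest round first
        block = [p for rr, p in tagged if rr == r]
        # Locality heuristic: group each round's points by grid cell.
        block.sort(key=lambda p: (int(p[0] * grid), int(p[1] * grid)))
        order.extend(block)
    return order
```

The rounds keep enough randomness for the expected-case analysis, while the within-round sort keeps consecutive insertions spatially close, which is the source of the improved memory behavior.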
Ken Clarkson, Bell Labs
Title: A Code for Nearest Neighbor Searching in Metric Spaces
Given a set of sites, the nearest neighbor searching problem is to build a data structure so that the site nearest a query point can be found quickly. I'll describe a kd-tree-like approach to this problem in the setting of metric spaces, and discuss an implementation, some experiments, and a test harness. The goal here is a library routine that is "pretty ok": it uses moderate storage, is fast in low dimension, and is never worse than brute force.
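For illustration, one standard kd-tree analogue for metric spaces is the vantage-point tree, which prunes subtrees using nothing but the triangle inequality (a generic textbook structure, not the data structure of the talk):

```python
import random

class VPTree:
    """Vantage-point tree: pick a pivot, split the remaining items by
    the median distance to it, and prune subtrees at query time with
    the triangle inequality alone -- no coordinates required."""

    def __init__(self, items, dist, seed=0):
        self.dist = dist
        self.root = self._build(list(items), random.Random(seed))

    def _build(self, items, rng):
        if not items:
            return None
        vp = items.pop(rng.randrange(len(items)))
        if not items:
            return (vp, 0.0, None, None)
        ds = [self.dist(vp, x) for x in items]
        mu = sorted(ds)[len(ds) // 2]  # median distance to the pivot
        inner = [x for x, d in zip(items, ds) if d < mu]
        outer = [x for x, d in zip(items, ds) if d >= mu]
        return (vp, mu, self._build(inner, rng), self._build(outer, rng))

    def nearest(self, q):
        best = [None, float("inf")]

        def visit(node):
            if node is None:
                return
            vp, mu, inner, outer = node
            d = self.dist(vp, q)
            if d < best[1]:
                best[0], best[1] = vp, d
            if d < mu:            # q lies in the inner ball: search it first
                visit(inner)
                if d + best[1] >= mu:   # query ball may cross into outer
                    visit(outer)
            else:
                visit(outer)
                if d - best[1] <= mu:   # query ball may reach the inner ball
                    visit(inner)

        visit(self.root)
        return best[0]
```

The pruning is conservative, so the answer is always exact; only the amount of pruning (and hence the speed) depends on the metric and the dimension.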
Ioannis Emiris, University of Athens, Greece
Title: Root-comparison techniques and applications to the Voronoi diagram of disks
This talk reviews algebraic approaches for comparing quadratic algebraic numbers, thus yielding methods for deciding key predicates in various geometric constructions. In particular, our main motivation and application is the dynamic algorithm by Karavelas and Yvinec for computing the Voronoi diagram of disks in the plane, also known as additively-weighted Voronoi diagram or the Apollonius diagram.
We focus on methods based on polynomial Sturm sequences, which yield an efficient, exact, and complete algorithm; completeness here refers to the handling of all degenerate cases. Our approach minimizes the algebraic degree of the computed quantities as well as the total number of arithmetic operations. This answers the twofold goal of minimizing precision as well as bit complexity. Interestingly, the bottleneck of our algorithm concerns the same operations as with other approaches. Ancillary tools include geometric invariants, multivariate factorization, and multivariate resultants. An implementation of these methods demonstrates their practical performance.
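A minimal Sturm-sequence root counter over exact rationals illustrates the basic tool (a generic textbook version; the talk's predicates minimize algebraic degree, which this sketch does not attempt):

```python
from fractions import Fraction

def sign(x):
    return (x > 0) - (x < 0)

def polyrem(a, b):
    """Remainder of polynomial a modulo b (coefficients highest-first)."""
    a = a[:]
    while len(a) >= len(b):
        if a[0] == 0:
            a = a[1:]
            continue
        f = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= f * b[i]
        a = a[1:]                 # leading coefficient is now exactly zero
    while a and a[0] == 0:
        a = a[1:]
    return a

def sturm(p):
    """Sturm sequence: p, p', then negated polynomial remainders."""
    seq = [p, [c * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]]
    while len(seq[-1]) > 1:
        r = polyrem(seq[-2], seq[-1])
        if not r:
            break
        seq.append([-c for c in r])
    return seq

def eval_poly(p, x):
    v = Fraction(0)
    for c in p:                   # Horner evaluation, exact rationals
        v = v * x + c
    return v

def variations(seq, x):
    signs = [s for s in (sign(eval_poly(q, x)) for q in seq) if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def roots_in(p, lo, hi):
    """Distinct real roots of p in (lo, hi], by Sturm's theorem."""
    return variations(sturm(p), lo) - variations(sturm(p), hi)
```

For example, whether the positive root of x^2 - 2 exceeds 3/2 is decided by counting roots in (3/2, 2): the count is zero, so sqrt(2) < 3/2, with every intermediate quantity rational.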
This work, to be presented at SODA 2003, is joint with Menelaos Karavelas.
Andreas Fabri, Geometry Factory
Title: From a Library to Geometric Software Components
The GeometryFactory is the commercial spin-off of the CGAL project. In contrast to the academic project, whose product is a library, the offering of the GeometryFactory will be geometric software components. This marries the "small is beautiful" aspect of gems with the promise of interoperability and coherence of a library. In the talk I will speak about the motivation of this re-packaging, and about implications for the CGAL project.
Steven Fortune, Bell Labs
Title: Iterative conditioning as an algorithm design schema for robust geometric predicates
The evaluation of geometric predicates can often be expressed using concepts that are familiar from numerical linear algebra. However, specific instances of predicates can be extremely ill-conditioned, even though coordinate data is known exactly. Iterative conditioning is a general algorithm design schema that attempts to use the result of a floating-point calculation to improve the conditioning of a specific instance. With a sufficient number of iterations, conditioning is improved enough so that a floating-point calculation reliably decides the predicate.
Clarkson's determinant algorithm is one example of this approach; another is an iterative algorithm to compute the inertia of a symmetric integer matrix. The bulk of this talk will describe an iterative-conditioning algorithm for approximating the roots of a univariate polynomial. On extremely ill-conditioned polynomials, the algorithm is an order of magnitude faster than the best previous algorithm.
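The classical iterative-refinement schema from numerical linear algebra illustrates the flavor of using a floating-point result to improve a specific instance (a textbook sketch for a 2x2 linear system, not one of the talk's algorithms; the helper names are ours):

```python
from fractions import Fraction

def solve2_float(A, b):
    """Plain floating-point solve of a 2x2 system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def refine(A, b, iters=3):
    """Iterative refinement: the floating-point solution is improved
    by solving (again in floating point) for a correction against a
    residual that is computed exactly.  A and b have exact entries
    (ints or Fractions)."""
    Af = [[float(v) for v in row] for row in A]
    x = [Fraction(v) for v in solve2_float(Af, [float(v) for v in b])]
    for _ in range(iters):
        # exact residual r = b - A x ...
        r = [Fraction(b[i]) - sum(Fraction(A[i][j]) * x[j] for j in range(2))
             for i in range(2)]
        # ... then a cheap float solve for the correction
        d = solve2_float(Af, [float(v) for v in r])
        x = [x[i] + Fraction(d[i]) for i in range(2)]
    return x
```

Each pass shrinks the error by roughly the factor by which the instance is ill-conditioned times machine epsilon, so a few iterations suffice even for badly conditioned integer data.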
W. Randolph Franklin, Rensselaer Polytechnic Institute
Title: Geometric Operations on Hundreds of Millions of Objects
As computers get faster and more capacious, efficiency becomes ever more important. Processing very large datasets requires special techniques. We describe some implementations and the techniques used. Our first example is a connected component program for a 1000x1000x1000 binarized cellular universe. Two voxels are in the same component if they are adjacent (either orthogonally, or optionally diagonally). A typical execution time to find all 4,539,562 components in one universe of 1,212,153,856 voxels, 608,589,768 of which were empty, was 96 CPU seconds on a 1600 MHz Pentium. Our second implementation will attempt to find the area and perimeter of the union of 100M squares, containing 1G edge intersections, in time linear in the number of output vertices. A third implementation will attempt to locate many points in a large planar subdivision in expected constant time per point.
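The connected-component computation can be sketched with a union-find over occupied voxels (a toy version that keeps everything in memory; the actual code must be far more space-conscious for a universe of 10^9 voxels):

```python
def connected_components(grid):
    """Count 6-connected (orthogonal-adjacency) components of the
    nonzero voxels in a small binarized grid grid[z][y][x]."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not grid[z][y][x]:
                    continue
                v = (z, y, x)
                parent.setdefault(v, v)
                # only look "back" at already-visited orthogonal neighbors
                for dz, dy, dx in ((-1, 0, 0), (0, -1, 0), (0, 0, -1)):
                    w = (z + dz, y + dy, x + dx)
                    if min(w) >= 0 and grid[w[0]][w[1]][w[2]]:
                        parent.setdefault(w, w)
                        union(v, w)
    return len({find(v) for v in parent})
```

The one-pass, look-backward structure is what makes a streaming, slice-at-a-time implementation possible at the billion-voxel scale.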
A careful consideration of what topology to represent explicitly is one major design strategy. E.g., if a Triangulated Irregular Network is being computed, then the only explicit data type is the triangle; edges are implicit. Eschewing pointers whenever possible, in favor of more compact structures, is another policy. Third, local topological data structures are strongly favored. E.g., finding the area of a general polygon (possibly including many separate or nested components with holes) requires knowing only the locations and neighborhoods of the vertices, without any nonlocal info, even edges. This technique also reduces the number of special cases and degenerate conditions. Fourth, a uniform grid is superimposed on the data to quickly cull out pairs of edges that cannot intersect. For reasonable data distributions, this finds the number of intersections in time linear in their number. Fifth, in cases such as the area of the union of the squares, many edge intersections are inside other squares and so do not cause output vertices. The edge segments causing these unnecessary intersections can often be identified, with the uniform grid, and deleted, without intersecting them pair by pair. Thus, even a quadratic number of such useless intersections does not affect the algorithm's linear (in the number of output vertices) expected running time.
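The uniform-grid culling idea, in miniature (hypothetical helper names; a coarse bounding-box cell cover and a strict crossing test, with nonnegative coordinates assumed):

```python
from itertools import combinations

def orient(a, b, c):
    """Sign of the signed area of triangle abc."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def crosses(s, t):
    """Strict proper-crossing test (ignores touching/collinear cases)."""
    a, b = s
    c, d = t
    return orient(a, b, c) != orient(a, b, d) and orient(c, d, a) != orient(c, d, b)

def cells(seg, g):
    """Grid cells covered by the segment's bounding box -- a coarse but
    safe cover, enough to illustrate the culling."""
    (x1, y1), (x2, y2) = seg
    return [(i, j)
            for i in range(int(min(x1, x2) * g), int(max(x1, x2) * g) + 1)
            for j in range(int(min(y1, y2) * g), int(max(y1, y2) * g) + 1)]

def count_crossings(segs, g=2):
    """Count crossing pairs, testing only pairs that share a grid cell."""
    buckets = {}
    for k, s in enumerate(segs):
        for cell in cells(s, g):
            buckets.setdefault(cell, []).append(k)
    tested, count = set(), 0
    for members in buckets.values():
        for i, j in combinations(sorted(set(members)), 2):
            if (i, j) in tested:
                continue
            tested.add((i, j))
            if crosses(segs[i], segs[j]):
                count += 1
    return count
```

For reasonably distributed input, the buckets stay small, so the number of pair tests, and hence the running time, is linear in the number of actual intersections.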
An adversary can select very unevenly distributed input to make all these algorithms execute very slowly. However, this has never been observed on real data. Indeed, some of the data structures are much more robust against uneven data than is apparent at first glance. All of the algorithms described have expected execution times linear in the size of the input plus the output. There are no log factors. These techniques also parallelize well.
Komei Fukuda, School of Computer Science, McGill University
Title: A Parallel Implementation of an Arrangement Construction Algorithm
In this talk, we present a parallel C-implementation of a reverse search algorithm for constructing an arrangement of hyperplanes in $R^d$, based on both the parallel exhaustive search library ZRAM and the linear programming and polyhedral computation library CDDLIB. It runs in both floating-point arithmetic and GMP rational arithmetic. We report preliminary computational experiences with the program. Finally, we report a plan to extend the code to perform the Minkowski addition of convex polytopes, using a natural extension of the arrangement construction algorithm.
Marina Gavrilova, University of Calgary
Title: Algorithm library development for complex biological and mechanical systems: functionality, interoperability and numerical stability.
There are numerous challenges faced by scientists when developing algorithm libraries for scientific applications. Issues of functionality, stability, performance, and flexibility, depending on the specific application areas, are the focus of researchers working in both theoretical and applied areas. This talk discusses some of the approaches to the development and implementation of two- and three-dimensional data structures for applications in molecular biology, mechanical engineering, and GIS. Issues of algorithm efficiency, memory requirements for large data sets, model interactions, and numerical stability will be addressed and discussed on practical examples from the above areas. Challenges in the development of ECL (Exact Computational Library), based on the ESEA algorithm, will also be addressed.
Dan Halperin, Tel Aviv University
Title: Controlled Perturbation for Arrangements of Circles
Given a collection C of circles in the plane, we wish to construct the arrangement A(C) (namely the subdivision of the plane into vertices, edges and faces induced by C) using floating point arithmetic. We present an efficient scheme, controlled perturbation, that perturbs the circles slightly to form a collection C', so that all the predicates that arise in the construction of A(C') are computed accurately and A(C') is degeneracy free.
We presented controlled perturbation several years ago, and have already applied it to several types of arrangements. The major contribution of the current work is the derivation of a good (small) resolution bound, that is, a bound on the minimum separation of features of the arrangement required to guarantee that the predicates involved in the construction can be safely computed with the given (limited) precision arithmetic. A smaller resolution bound leads to a smaller perturbation of the original input.
We present the scheme, review related work, describe how the resolution bound is determined and how it affects the perturbation magnitude. We report on experimental results and discuss the advantages and shortcomings of the method.
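In miniature, the scheme looks like this (toy thresholds: eps stands in for the resolution bound and delta for the perturbation magnitude discussed in the talk):

```python
import random

def controlled_perturbation(circles, eps=1e-3, delta=1e-2, seed=0):
    """Toy controlled perturbation for circles (x, y, r).

    Walk through the circles; whenever a new circle is too close to a
    degeneracy with an earlier one (here: near-tangency, tested against
    the resolution bound eps), re-draw a small random perturbation of
    its center until the configuration is safe.  eps and delta are
    illustrative constants, not the bounds derived in the talk."""
    rng = random.Random(seed)
    out = []
    for (x, y, r) in circles:
        while True:
            ok = True
            for (px, py, pr) in out:
                d2 = (x - px) ** 2 + (y - py) ** 2
                # outer and inner tangency are the degeneracies checked here
                for s in ((r + pr) ** 2, (r - pr) ** 2):
                    if abs(d2 - s) < eps:
                        ok = False
            if ok:
                break
            x += rng.uniform(-delta, delta)
            y += rng.uniform(-delta, delta)
        out.append((x, y, r))
    return out
```

After the loop, every predicate of this (simplified) kind is bounded away from zero by eps, so a fixed-precision evaluation cannot get its sign wrong; circles that were already safe are returned unchanged.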
Joint work with Eran Leiserowitz.
Martin Held, University of Salzburg
Title: Experiences With Designing and Implementing "Industrial-Strength" Codes for Handling "Industrial-Quality" Data in Real-World Applications
We report on the lessons learned during several years of application-oriented work in the common intersection of computational geometry, CAD/CAM, and computer graphics. Like most other researchers who do implementation work, the author is to blame for far more codes that are not ultimately reliable than for codes that have demonstrated a perfect reliability score over some reasonable period of time. The most notable exceptions among the author's codes are FIST, which triangulates polygonal regions in 2D and 3D, and VRONI, which computes the Voronoi diagram of points and line segments in 2D. (VRONI also supports VD-based offsetting and similar tasks.)
We will discuss the main approach to the design of FIST and VRONI, and show how the underlying algorithmic principles can be combined to provide a framework for the implementation of geometric algorithms such that the resulting code runs reliably and efficiently on standard floating-point hardware. While there exists no formal argument that our approach is viable, there is compelling evidence that it works excellently in practice, within the inherent limitations of fp-calculations: neither FIST nor VRONI has been broken in more than two years of intense use in industry and in academia.
We conclude this presentation with a discussion of recent experiences with interfacing FIST with the CORE-library for exact geometric computation.
Michael Hemmer, Max-Planck-Institut fur Informatik
Title: A Computational Basis for Conic Arcs and Boolean Operations on Conic Polygons
We give an exact geometry kernel for conic arcs, algorithms for exact computation with low-degree algebraic numbers, and an algorithm for computing the arrangement of conic arcs that immediately leads to a realization of regularized boolean operations on conic polygons. A conic polygon, or polygon for short, is anything that can be obtained from linear or conic halfspaces (= the set of points where a linear or quadratic function is non-negative) by regularized boolean operations. The algorithm and its implementation are complete (they can handle all cases), exact (they give the mathematically correct result), and efficient (they can handle inputs with several hundred primitives).
Michael Joswig, TU Berlin
Title: Algorithms in polytope theory and polymake
polymake is a software package for the investigation of geometric and combinatorial properties of convex polytopes in arbitrary dimension. In addition to being useful as a tool for research in discrete geometry, polymake also serves as a workbench to compare algorithms. We report on computational results in the area of convex hulls, face lattice enumeration, and related topics.
Lutz Kettner, MPI Informatik, Germany
Title: Boolean Operations on 3D Surfaces Using Nef Polyhedra
We describe our efforts in CGAL to implement a data structure for three-dimensional polyhedra. Our efforts are based on W. Nef's foundational work on polyhedra [Nef78], which start from halfspaces, are closed under boolean operations, and can represent all degeneracies and possible non-manifold situations. The focus lies on the modeling space and the correctness of the data structure. We present a first running prototype.
John Keyser, Texas A&M University
Title: Robust Operations on Curved Solids
Robustness problems due to numerical error and the presence of degeneracies are well-known in geometric computing. Solid modeling provides a domain in which such problems tend to be manifested often and obviously. Although robustness problems with linear solids are quite significant, the robustness problems for curved solids are even more numerous and more difficult to deal with.
In this talk, I will describe some of the methods that I have explored to increase the robustness of solid modeling operations. A major emphasis will be on the use of exact computation to eliminate numerical error, and how this can be achieved with reasonable efficiency even on low-degree curved solids. In particular, I will describe speedup methods that have proven useful for specific problems that arise in solid modeling applications. While speedup methods applied individually can be used effectively, often the combination of speedup methods does not achieve the overall increase in efficiency that is desired. I will address ways of combining such methods while yielding effective results. In addition to discussing the elimination of numerical error, methods for detecting and dealing with degenerate data will also be discussed briefly.
I will attempt to summarize previous work in this area as implemented in an exact boundary evaluation system. I will also describe ongoing work aimed toward extending exact computation from low-degree to moderate-degree curved surfaces, and extending the usefulness of such exact computation methods to other geometric problems. Finally, I will present what seem to be some of the key research directions needed for future progress in this direction.
Shankar Krishnan, AT&T Labs - Research
Title: Implementing Geometric Algorithms using Graphics Hardware
Fast graphics hardware, including dedicated vertex processing, 3D rasterization, texturing, and pixel processing, is becoming as ubiquitous as floating-point hardware. Moreover, the performance of graphics processor units (GPUs) appears to be progressing at a rate faster than Moore's law. Along with multi-pass capabilities, programmability and fast readback bandwidth, the GPUs are being viewed as stream processing units useful for diverse applications that are beyond the conventional domain of image synthesis.
In this talk, I will give a brief overview of some of the problems that can be solved using the graphics hardware. The problem domains include computational geometry, solid modeling, statistics and visualization. I will present some specific examples of problems that we have implemented like geometric optimization problems, map simplification and depth contours. The main issues to consider in such implementations are limited programming capabilities, precision and temporary storage. I will also provide a preliminary cost model for computations performed in the GPUs.
Ming Lin, University of North Carolina at Chapel Hill
Title: Fast Penetration Depth Estimation: Algorithms, Implementation & Applications
Penetration Depth (PD) is defined as the minimum translational distance to separate two intersecting rigid polyhedral models. We present two classes of PD algorithms, one for convex models based on local estimations and one for non-convex models using a global approach:
(1) DEEP: Dual-space Expansion for Estimating Penetration depth is an incremental algorithm for estimating PD between 3D convex polytopes. It incrementally seeks a "locally optimal solution" by walking on the surface of the Minkowski sums.
(2) Our general PD algorithm for non-convex models uses a combination of object-space and image-space techniques. It computes the pairwise Minkowski sums of decomposed convex pieces, performs closest-point queries using rasterization hardware, and refines the estimates by a combination of hierarchical culling, model simplification, and object-space walking.
We describe issues involved in implementing these algorithms and highlight their performance on complex models. We also demonstrate their applications to physically-based animation, tolerance verification, and haptic rendering.
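The definition itself is easiest to see in the simplest special case, two overlapping axis-aligned boxes, where the minimum translational distance is the smallest per-axis overlap (a sketch of the definition only, far simpler than the convex and non-convex algorithms above):

```python
def aabb_penetration_depth(a, b):
    """Penetration depth for two overlapping axis-aligned boxes.

    Boxes are ((minx, miny, minz), (maxx, maxy, maxz)).  Any translation
    separating the boxes must move them apart along some axis by at
    least that axis's interval overlap, so the minimum translational
    distance is the smallest of the three overlaps.  Returns None if
    the boxes do not properly intersect."""
    (alo, ahi), (blo, bhi) = a, b
    overlaps = []
    for i in range(3):
        o = min(ahi[i], bhi[i]) - max(alo[i], blo[i])
        if o <= 0:
            return None       # disjoint (or merely touching)
        overlaps.append(o)
    return min(overlaps)
```

For general polyhedral models the same quantity lives on the boundary of the Minkowski sum, which is what the algorithms above walk on or rasterize.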
Victor Milenkovic, University of Miami
Title: Geometric Conditioning
A number of schemes have been proposed to address numerical issues in Computational Geometry. Exact techniques use sufficient precision to determine geometric primitives correctly but may require much computational effort. Numerical approaches use insufficient precision and add additional computation to handle the resulting inconsistencies for ill-conditioned calculations. Geometric conditioning is a third approach: modify the input coordinates so that every necessary numerical calculation is well-conditioned. Nearest-pair rounding accomplishes geometric conditioning by moving nearby vertex/vertex pairs or nearby vertex/edge pairs to a midpoint. More generally, geometric conditioning can be thought of as an optimization problem: find the minimum (least-squares) perturbation of the coordinates that leads to a well-conditioned output. This talk examines geometric conditioning for curved and three-dimensional domains.
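The vertex/vertex case of nearest-pair rounding can be sketched as follows (an illustrative quadratic-time pass; the talk treats rounding more generally, including vertex/edge pairs and the least-squares formulation):

```python
def nearest_pair_rounding(points, tol):
    """Repeatedly replace any two points closer than tol by their
    midpoint, until no pair is closer than tol.  After this pass,
    every surviving point pair is well separated, i.e. the
    vertex/vertex predicates are well-conditioned at scale tol."""
    pts = [list(p) for p in points]
    merged = True
    while merged:
        merged = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                if pts[i] is None or pts[j] is None:
                    continue
                dx = pts[i][0] - pts[j][0]
                dy = pts[i][1] - pts[j][1]
                if dx * dx + dy * dy < tol * tol:
                    pts[i] = [(pts[i][0] + pts[j][0]) / 2,
                              (pts[i][1] + pts[j][1]) / 2]
                    pts[j] = None       # j is absorbed into i
                    merged = True
        pts = [p for p in pts if p is not None]
    return [tuple(p) for p in pts]
```

Viewed as conditioning, the midpoint is the least-squares answer for a single pair: it is the smallest total perturbation that removes the near-degeneracy.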
Evanthia Papadopoulou, IBM T.J. Watson Research Center
Title: Voronoi diagrams for VLSI manufacturing: Robustness and implementation issues
In this talk I will present applications of Voronoi diagrams in Computer-Aided Design for VLSI manufacturing in particular yield prediction. I will discuss issues of VLSI data and show that the use of the L-infinity metric can greatly improve robustness problems. I will use variations of the L-infinity Voronoi diagram of polygons to address the problem of computing the "critical area" of a VLSI layout. Critical area is a measure reflecting the sensitivity of a VLSI layout to random defects during the manufacturing process and it is widely used to predict the yield of a VLSI chip i.e., the percentage of functional chips among all chips manufactured. Yield prediction and optimization are essential in today's VLSI manufacturing due to the growing need to control cost. I will discuss implementation issues of our Voronoi-based tool to extract critical area for shorts, opens, via-blocks and combination faults. The tool is based on plane sweep, assumes shapes in constant orientations, and operates on large hierarchical VLSI designs.
Sylvain Pion, MPI Saarbruecken
Title: Efficient Exact Geometric Predicates for Delaunay Triangulations
A time-efficient implementation of the exact computation paradigm relies on arithmetic filters, which are used to speed up the exact computation of easy instances of the geometric predicates. Depending on what is considered an ``easy instance'', we usually classify filters as static or dynamic, with some in-between categories often called semi-static.
In this talk, we propose, in the context of three-dimensional Delaunay triangulations:
- semi-automatic tools for the writing of static and semi-static filters,
- a new semi-static level of filtering called translation filter,
- detailed benchmarks of the success rates of these filters and comparison with rounded arithmetic, long integer arithmetic and filters provided in Shewchuk's predicates.
Our method is general and can be applied to all geometric predicates on points that can be expressed as signs of polynomial expressions. This work is applied in the CGAL library.
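A semi-static filter in miniature, for the planar orientation predicate (the error constant below is a deliberately loose but safe choice, not a sharp bound like those produced by the tools of the talk):

```python
from fractions import Fraction

def orient2d(ax, ay, bx, by, cx, cy):
    """Sign of the orientation determinant of points a, b, c.

    A floating-point evaluation is accepted when it clears a
    semi-static error bound scaled by the magnitude of the operands;
    otherwise we fall back to exact rational arithmetic.  The filter
    can only fail toward the exact path, never toward a wrong sign."""
    det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    # Semi-static bound: a generous multiple of machine epsilon times
    # the size of the two products being subtracted.
    bound = 1e-12 * (abs((bx - ax) * (cy - ay)) + abs((by - ay) * (cx - ax)))
    if det > bound:
        return 1
    if det < -bound:
        return -1
    # Filter inconclusive: redo the computation exactly.
    det = (Fraction(bx) - Fraction(ax)) * (Fraction(cy) - Fraction(ay)) \
        - (Fraction(by) - Fraction(ay)) * (Fraction(cx) - Fraction(ax))
    return (det > 0) - (det < 0)
```

On random input the float branch almost always decides, which is where the speedup comes from; the success rates measured in the talk quantify exactly how often each filtering level suffices.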
This is joint work with Olivier Devillers.
Stefan Schirra, University of Magdeburg
Title: Progress on the number type leda::real
We report on geometric computing with real algebraic numbers using the number type leda::real. This number type, like the CORE library from NYU, provides a very convenient approach to exact geometric computation for a subset of the real algebraic numbers, and is available as part of the LEDA software library. We report on recent modifications of leda::real, in particular the incorporation of new results on constructive separation bounds, experimental evaluation, and future plans.
Jonathan Shewchuk, University of California, Berkeley
Title: Pyramid: Data Structures and Design of a 3D Mesh Generator
I discuss the algorithmic design, data structures, and robustness issues of implementing Pyramid, a three-dimensional constrained Delaunay tetrahedralizer and mesh generator.
Ayellet Tal, Princeton University and Technion
Title: Polyhedral Surface Decomposition and Applications
The problem of decomposing a polyhedron into (convex) pieces has been exhaustively researched in computational geometry. Despite the large number of potential applications, little of this research has been used in practice. There are a couple of possible explanations. First, programming these algorithms can be challenging and might involve solutions to various numerical problems. Second, the size of the decomposition might be quadratic, which is often prohibitive in real applications.
In various applications, however, it suffices to decompose only the polyhedral surface. In this case not only the implementation becomes simpler, but also the size of the decomposition is linear. Numerical problems, however, often cause over-segmentation.
We will present a novel algorithm which avoids over-segmentation. The algorithm decomposes any given surface into a handful of patches regardless of the description size of the surface. In fact, the user can determine in advance the desired size of the decomposition. This is an important feature since certain applications require only a few patches.
Our algorithm has an added benefit. Rather than focusing solely on the shape of the patches, as is customary, the shape of the cuts is also taken into account.
The research presented here is of an experimental nature, and we will thus discuss various possible applications. Moreover, we will demonstrate some results obtained for two specific applications: metamorphosis of polyhedral surfaces and shape-based retrieval of polyhedral surfaces.
Roberto Tamassia, Brown University
Title: JDSL: the Data Structures Library in Java
We overview the principles, design and implementation of JDSL, the Data Structures Library in Java (http://jdsl.org). JDSL is a collection of Java interfaces and classes that implement fundamental data structures and algorithms, such as: sequences, trees, priority queues, search trees, hash tables, sorting and searching algorithms, graphs, graph traversals, shortest path, and minimum spanning tree algorithms. We also discuss work in progress on JDSL packages for geometric computing.
Monique Teillaud, INRIA
Title: Towards a CGAL Geometric Kernel with Circular Arcs
We propose a framework for a geometric kernel containing curved objects. Circular arcs and some predicates on them are currently implemented in this framework.
David Warme, L-3 Communication Analytics Corp.
Title: Geometric and Numeric Stability Issues in GeoSteiner
GeoSteiner is a software package for obtaining exact solutions to several NP-hard optimization problems, including the classic Euclidean and rectilinear Steiner tree problems, and also the more recent "uniform directions" Steiner tree problem. One portion of the code -- the Euclidean full Steiner tree (EFST) generator -- is particularly rich with geometric processing, including numerous combinatorial decisions based on the results of geometric predicates.
As this code matured we encountered several problems caused primarily by geometric or numeric issues. Some of these bugs were triggered by problem instances as small as four points, while others were only discovered during the solution of extremely large problems such as usa13509 from the TSPLIB.
We present brief descriptions of the EFST generation problem and the algorithm GeoSteiner uses to generate EFSTs. The bulk of the talk is devoted to several of the bugs we encountered and the methods by which these bugs were corrected. Finally, we present empirical data showing that these methods achieve exceptional stability with only a very modest cost in execution time.
Nicola Wolpert, Max-Planck-Institute for Computer Science
Title: An Exact and Efficient Approach for Computing a Cell in an Arrangement of Quadrics
In the talk we will present an approach for the exact and efficient computation of a cell in an arrangement of quadric surfaces. All calculations are based on exact rational algebraic methods and provide the correct mathematical results in all, even degenerate, cases. By projection, the spatial problem can be reduced to the one of computing planar arrangements of algebraic curves. We show how to locate all event points in these arrangements, including tangential intersections and singular points. By introducing an additional curve, which we call the Jacobi curve, non-singular tangential intersections can be found. By a generalization of the Jacobi curve we are able to determine non-singular tangential intersections in arbitrary planar arrangements. We show that the coordinates of the singular points in our special projected planar arrangements are roots of quadratic polynomials. The coefficients of these polynomials are usually rational and contain at most a single square root.
Chee Yap, New York University
Title: Progress in Constructive Root Bounds
Constructive root bounds are a basic tool for implementing general libraries such as the Core Library and LEDA that support the Exact Geometric Computation paradigm. We review some recent work and describe a new technique that exploits the presence of k-ary input numbers. As k-ary numbers include binary floating point and decimal rational numbers, this has the potential to speed up many real-world computations. Experimental results based on the Core Library implementation will be reported. Joint work with Sylvain Pion.
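The role of a root bound can be sketched as follows: approximate an expression with growing precision until its sign is certified, or until the error drops below the separation bound, which proves the expression is exactly zero (the bound here is a placeholder constant, not one computed constructively from the expression as in the talk, and the error estimate is deliberately crude):

```python
from decimal import Decimal, getcontext

def sign_sqrt_sum(a, b, c, root_bound=Decimal("1e-40")):
    """Sign of sqrt(a) + sqrt(b) - sqrt(c) for small nonnegative ints.

    Evaluate with increasing decimal precision.  If the approximation
    clearly dominates the error estimate, its sign is the answer; if
    the error estimate falls below the separation (root) bound while
    the value stays indistinguishable from zero, the expression must
    be exactly zero."""
    prec = 20
    while True:
        getcontext().prec = prec
        v = Decimal(a).sqrt() + Decimal(b).sqrt() - Decimal(c).sqrt()
        err = Decimal(10) ** (8 - prec)   # crude, generous error estimate
        if v > err:
            return 1
        if v < -err:
            return -1
        if err < root_bound:
            return 0    # |E| below the separation bound implies E = 0
        prec *= 2       # escalate precision and try again
```

Sharper constructive bounds let the zero branch fire at much lower precision, which is exactly where the speedups for k-ary inputs come from.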