DIMACS Workshop on Constraint Programming and Large Scale Discrete Optimization

September 14-17, 1998
DIMACS Center, CoRE Building, Busch Campus, Rutgers University, Piscataway, NJ

Eugene C. Freuder, University of New Hampshire
Richard J. Wallace, University of New Hampshire
Presented under the auspices of the DIMACS Special Year on Large Scale Discrete Optimization


Creating and Evaluating Hybrid Algorithms for
Inventory Management Problems

Philippe Baptiste, Yves Caseau, Tibor Kökény, Claude Le Pape
Bouygues, Direction des Technologies Nouvelles
1, avenue Eugène Freyssinet
78061 Saint-Quentin-en-Yvelines, France

Robert Rodošek
IC-Parc, Imperial College
London SW7 2AZ, England

Many industrial optimization problems are mixed problems. For example,
industrial scheduling problems include resource allocation sub-problems,
industrial vehicle routing problems include scheduling and packing
sub-problems, etc. In such cases, it is a real challenge to design and
implement robust problem-solving algorithms that efficiently account for
the different aspects of the overall problem. Indeed, different
optimization techniques, e.g., linear programming, constraint
programming, genetic algorithms, local search, etc., tend to perform
well on some - but not all - aspects of the complete problem. As a
result, one of the most promising strategies consists in developing an
algorithm that combines different optimization techniques, i.e.,
a hybrid algorithm.
The aim of this paper is to present (1) an industrial inventory
management problem, which consists of two main sub-problems, i.e., a
resource allocation sub-problem and a scheduling sub-problem, and (2)
the algorithms that have been developed so far for this problem, in the
context of the European project CHIC-2 (ESPRIT Project 22165,
http://www-icparc.doc.ic.ac.uk/chic2). Section 1 summarizes the
inventory management problem in an informal way. A precise definition of
this problem will appear in [Caseau & Kökény, 1998]. Benchmark instances
of this problem are available at the CHIC-2 web site (above) as well as
at Ecole Normale Supérieure
(http://www.ens.fr/~laburthe/inventory.html). Section 2 presents three
distinct algorithms for this problem. Section 3 summarizes the
experimental results available so far and our first (preliminary)
conclusions.

Using CSP for Course Of Action Analysis

Robert Calder, Robert Alexander, and Russell Richardson
SAIC, Arlington, VA, USA

Dell Lunceford

Initial prototypes of collaborative planning aids automate the
present command and control planning process but do not provide the
commander with the ability to quickly understand his situation, assess his
options, and analyze his courses of action.  The DARPA Course of Action
Analysis program is exploring breakthrough technologies that promise
improvements in the speed and effectiveness of the planning and analysis
process. Technologies being studied include constraint-satisfaction
problem-solving, exploratory modeling and new information-presentation
techniques. The new technology will improve the analysis process in two
primary areas: increasing the quality of the analysis and decreasing the
time the warfighter spends performing the analysis. Another focus is to
evaluate the impact of technology insertion on the present doctrinal
planning process to gain an understanding of how the current process must
be modified to take advantage of the new technology.

This paper describes the DARPA Course of Action Analysis program,
and how it is applying and extending constraint satisfaction problem-solving
algorithms to the analysis of courses of action. Through the development
of a prototype and a set of soldier-in-the-loop controlled experiments,
the technique's utility and benefits will be measured.

A system for train crew scheduling

Phillipe Charlier and Helmut Simonis
COSYTEC SA, 4 rue Jean Rostand, F-91893 Orsay Cedex, France

In this report we describe a train crew scheduling system developed for
North Western Trains (NWT) in the UK. The system has been built by PA
Consulting and COSYTEC using the CHIP constraint language. The aim of
the system is the generation of work schedules for around 1500 train
drivers and conductors handling 18000 scheduled services per week.
The drivers and conductors are allocated to a number of different
depots where they start and end their work. One of the main challenges
of the system lies in the handling of work rules and safety regulations,
which control which allowances must be allocated to drivers/conductors
before and after actual driving activities. This crew scheduling system
is closely related to staff allocation in the airline field, but also
has many specific constraints not found in other applications.
Particular rules like time allowances to walk inside/between stations,
possible taxi use or other passive transport must be handled.

The system has been developed using the Prolog version of the CHIP
environment and is integrated with a graphical user environment and a
database system to allow multiple users. The time table information
is obtained from the existing manual unit diagramming system, which
matches particular units (trains) to each service. Finished schedules
are distributed electronically to the depots, where rostering is performed.

In the paper we describe the model of the constraint part of the system
in some detail. The problem is modelled basically using two of CHIP's
global constraints. The cycle constraint expresses the problem as a
tour planning problem in a directed graph, while the diffn constraint sees
the problem as a resource allocation problem. We also indicate different
search strategies and evaluation methods that have been tested. Some results
are explained using the constraint visualisation tools of CHIP together with
a search tree analysis. The system is partially fielded and shows significant
improvements over the previous, manual solution.

A Constraint-Based Optimization Approach to Satellite Scheduling

Flavius Galiber, III  and  Joseph C. Pemberton
Pacific Sierra Research Corporation, 1400 Key Boulevard, Suite 700
Arlington, VA  22209

Satellite scheduling, like all scheduling, is the problem of mapping tasks
(observation, communication, downlink, control maneuvers, etc.) to
resources (sensor satellites, relay satellites, ground stations, etc.).
Through our work on satellite scheduling problems, we have encountered many
different constraints that are particular to the satellite-scheduling
domain.  In this paper, we will introduce a general satellite-scheduling
problem, describing the problem constraints that are particular to
satellite scheduling, and then present the constraint-based techniques that
we have used to address these problems.

Metalogic constraint programming tool for scheduling of broadcast commercials

Boris Galitsky
DIMACS, Rutgers University, New Brunswick, NJ  08903, USA

The study addresses the problem of constructing an adequate language
and reasoning environment for constraint programming tasks. We develop
the reasoning tool of metalanguage support (MS) with the embedded mechanism
of the construction of predicates and metapredicates to link inconsistent
constraints in a multiagent domain. This tool, based on metalanguage
representation with weakened soundness, gains the inference flexibility to
represent the formal scenarios.

The concept of formal scenario is introduced as an alternative to the
traditional axiomatic method (logic program) for practical applications.

Such MS capabilities as reasoning about action, change and belief, temporal
reasoning, are implemented in the scheduling system for the broadcasting
industry. Automatic, incremental, interactive and multiagent modes of the
scheduler demonstrate the scenario frameworks of the increasing complexity.

LSCO methodology - the CHIC2 experience

Carmen Gervet
IC-Parc, Imperial College, William Penny Laboratory, London SW7 2AZ, England

The industrial and commercial worlds are increasingly competitive, requiring
companies to maximize their productivity. As a consequence, the application
domain for Large Scale Combinatorial Optimization problems (LSCO) is booming
(e.g., production scheduling, transport, finance, management). At the same
time, the knowledge and maturity in optimization techniques has increased
considerably in the past ten years. Born in the 50s within the Operations
Research community, optimization techniques comprise today new paradigms like
constraint programming and stochastic search techniques. The increasing
range of optimization algorithms, and the new potential to integrate them,
have required a greater level of expertise to tackle LSCOs. Some
form of guidance is provided in the literature by means of 1) case studies
that map powerful algorithms to instances of LSCOs, and 2) numerous tools
that ease the modelling and solving of LSCOs. However, there is little
guidance to design models and solutions based on the hybridization of
solvers, and more generally to develop commercial LSCO software. This paper
presents the CHIC2 methodology which aims at filling a gap in this direction
by providing a generic process and guidance for managing, scoping, modelling
and solving LSCO problems.

Speeding up search by exploiting randomization

Carla Gomes and Bart Selman
Department of Computer Science, Cornell University, Ithaca, NY, 14853, USA

Henry Kautz
AT&T Labs, 180 Park Avenue, Florham Park, NJ 07932-0971, USA

Randomized algorithms are among the best current methods for
solving computationally hard problems. However, their run time
performance can vary dramatically as a consequence of their random
nature. We study the run time profiles of combinatorial search
procedures. Our study reveals some intriguing properties of such cost
profiles. The distributions are often characterized by very long tails
or ``heavy tails''. We will show that these distributions are best
characterized by a general class of distributions that have no moments
(i.e., an infinite mean, variance, etc.). Such non-standard
distributions have recently been observed in areas as diverse as
economics, statistical physics, and geophysics to capture
phenomena subject to extreme variation. We also discuss how 
one can boost the performance of search procedures by exploiting the 
heavy tail phenomena.
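
The restart strategy alluded to can be illustrated with a small simulation
(not taken from the paper itself): when per-run cost follows a heavy-tailed
distribution with no finite mean, aborting each run at a fixed cutoff and
restarting keeps the expected total work bounded. The particular
distribution, cutoff and seed below are arbitrary choices for illustration.

```python
import random

def run_time():
    """Draw a synthetic solver run time from a Pareto-type heavy-tailed
    distribution (tail index 2/3 < 1, so the mean is infinite) -- an
    arbitrary stand-in for the cost profiles discussed above."""
    return (1.0 - random.random()) ** -1.5

def restart_time(cutoff):
    """Total work when every run is aborted after `cutoff` units and
    restarted from scratch with fresh random choices."""
    total = 0.0
    while True:
        t = run_time()
        if t <= cutoff:
            return total + t
        total += cutoff

random.seed(0)
n = 10_000
plain = sum(run_time() for _ in range(n)) / n
restarted = sum(restart_time(4.0) for _ in range(n)) / n
print(f"mean cost without restarts: {plain:.1f}, with restarts: {restarted:.1f}")
```

With restarts the empirical mean settles near a small constant, while
without them it is dominated by a few enormous runs; that gap is the
exploitable consequence of heavy tails.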

Search Algorithms for Quantum Computers

Tad Hogg
Xerox Palo Alto Research Center, Palo Alto, CA 94304, USA

Theoretically, quantum computers operate simultaneously with an
exponentially large number of computational states in the time required
for a single operation with conventional computers. While many technical
challenges remain to realizing this capability, recent implementations of
small quantum computers and proposals for error correction are encouraging.

This "quantum parallelism" can significantly reduce the cost of some
combinatorial search problems. Examples include factoring numbers in
polynomial time and a reduction in general searches to the square root of
the classical running time.

In this talk, I'll describe the potential capabilities of quantum
computers and how they can be used to design search algorithms. For
example, quantum algorithms can exploit aggregate properties of search
spaces that are of no use classically. How much these properties can
improve search is an important open issue for quantum algorithms and the
nature of combinatorial search.

On the average case behavior of bin packing algorithms

David S. Johnson
AT&T Labs, 180 Park Avenue, Florham Park, NJ 07932-0971, USA

Much research in the area of constraint programming has
dealt with the average-case behavior of algorithms and
heuristics, with special interest in detecting threshold
phenomena.  An example of the latter arises in the study
of random instances of 3-SAT.  Here there appears to be a
constant t ~ 4.2 such that asymptotically a random instance
whose ratio of clauses to variables is less than t can be
expected to be satisfiable, whereas random instances with
ratios greater than t are expected to be unsatisfiable.
Significantly more complicated behavior is possible
in other domains, especially if one turns to questions about
the performance of approximation algorithms.

In this talk I shall illustrate this point, using the
one-dimensional bin packing problem as an example.  In this
NP-hard problem one is given a sequence of items with sizes
in [0,1], and wishes to pack them into a minimum number of
unit-capacity bins.  I will cover both theoretical and
experimental results.  Bin packing is an example of a domain in
which results for small instances can be extremely misleading,
and even billion-item random lists may not be large enough
to reveal key phenomena.  Thus the study of average-case
behavior of necessity becomes one of large-scale optimization.
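
As a concrete point of reference (a standard heuristic, not drawn from the
talk itself), the classic First Fit Decreasing rule for one-dimensional bin
packing can be sketched in a few lines:

```python
def first_fit_decreasing(sizes):
    """Pack items with sizes in [0,1] into unit-capacity bins using
    First Fit Decreasing: sort the items by decreasing size, then place
    each item in the first open bin that still has room, opening a new
    bin only when none fits."""
    bins = []                          # remaining capacity of each open bin
    for s in sorted(sizes, reverse=True):
        for i, free in enumerate(bins):
            if s <= free + 1e-12:      # small tolerance for float error
                bins[i] = free - s
                break
        else:
            bins.append(1.0 - s)       # open a new bin
    return len(bins)

# Sizes sum to 3.1, so 4 bins is optimal here and FFD finds it:
print(first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]))  # -> 4
```

Average-case experiments of the kind discussed above amount to running such
heuristics on random size lists and studying the wasted space as the number
of items grows.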

Personnel assignment as constraint satisfaction

Harald Meyer auf'm Hofe
German Research Center for Artificial Intelligence, Postfach 2080,
D-67608 Kaiserslautern, Germany

The commercial ORBIS-Dienstplan system solves complex nurse scheduling
problems which are represented as partial constraint satisfaction
problems (PCSP) of 250 to 1200 constraint variables. This system is
extended in two ways: fuzzy constraints are integrated in order to represent
certain optimisation tasks more accurately. Additionally, a method is
proposed to infer search control knowledge from an abstraction of the
original PCSP.

Using global constraints for local search

Alexander Nareyek
German National Research Center for Information Technology,
Research Institute for Computer Architecture and Software Technology,
Rudower Chaussee 5, D-12489, Berlin, Germany

The usual approaches to local search can hardly be generalized. Gains in
efficiency are the primary goal, whereas generality is often disregarded.
This manifests itself in very domain-specific problem encodings and
specialized satisfaction methods, e.g. solving a TSP by edge-exchange
techniques based on a graph representation.

Other work takes the general constraint programming framework as its
starting point and tries to introduce local search methods for satisfaction,
e.g. Minton's min-conflicts heuristic. These methods often fail, as they
have a very limited view of the unknown search space structure.

This work tries to overcome the drawbacks of these two approaches by using
global constraints. The use of global constraints for local search makes it
possible to revise a current state on a more global level with
domain-specific knowledge, while preserving features like declarativeness,
reusability, and maintainability. The proposed strategy is demonstrated on a
dynamic job-shop scheduling problem.

Minimization of the Number of Breaks in Sports Scheduling Problems using
Constraint Programming.

Jean-Charles Regin
ILOG, Les Taissounieres HB2, 06560 Valbonne, France

This paper aims to show the interest of constraint programming for
minimizing the number of breaks in sports scheduling problems. We consider
single round-robin problems with an even number of teams. In such a problem
we are given n teams and n-1 periods, and in each period each team has to
play either a home or an away game against another team, such that every
team plays every other team exactly once over all the periods. A break for a
team is defined to be two consecutive home matches or two consecutive away
matches. For the considered problem, it has been proven by Schreuder that
the minimal number of breaks is n-2. We propose a model using constraint
programming that has the capability to efficiently prove this result. For
20 teams this takes a mere 0.61s and for 60 teams it still takes less than
1 minute. This model dramatically outperforms previous models found in the
literature. The main reason for this is the use of several global constraints
with which powerful filtering algorithms are associated. 

Moreover, this model is well adapted to solve some variations of the
initial problem in which new constraints are added such as:
for each team the number of away and home matches has to be balanced, it is
forbidden to have two consecutive breaks, etc. We are also able to find and
prove the minimal number of breaks for some given timetables of teams.
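
The break definition above is easy to state in code. The sketch below (with
a hand-built, hypothetical 4-team schedule) simply counts breaks and
illustrates Schreuder's minimum of n-2 for n = 4:

```python
def breaks(venue_rows):
    """Count breaks in a round-robin schedule, where venue_rows[t] is
    team t's sequence of venues ('H' or 'A') over the periods.  A break
    is two consecutive home games or two consecutive away games."""
    return sum(
        sum(1 for a, b in zip(row, row[1:]) if a == b)
        for row in venue_rows
    )

# A 4-team, 3-period schedule attaining Schreuder's minimum n-2 = 2:
schedule = ["HAH",   # team 1: no break
            "AHA",   # team 2: no break
            "HHA",   # team 3: one break (periods 1-2)
            "AAH"]   # team 4: one break (periods 1-2)
print(breaks(schedule))   # -> 2
```

The constraint programming model in the paper works on exactly this kind of
home/away structure, with global constraints pruning venue patterns that
would force extra breaks.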

Valued CSP: a framework for representing, analyzing and solving
overconstrained problems.

Thomas Schiex, INRA, Chemin de Borde Rouge, Auzeville, BP26, 31326
Castanet Tolosan Cedex, France

Overconstrained problems often occur when casting real problems into the
constraint satisfaction problem framework. In many synthesis problems, this
is often caused by the fact that properties of the solution that are only
desired, but do not have to be absolutely met, are expressed as (hard)
constraints. Although such properties could naturally be handled as a
general criterion, it is natural and more uniform to effectively use
constraints to express such desired properties. Moreover, the specific
nature of the criteria can be exploited in algorithm design.

This has led to the definition of several extensions of the classical CSP
framework such as fuzzy CSP, possibilistic CSP, probabilistic CSP, weighted
CSP, partial CSP... Each of these frameworks apparently differs significantly
from the others, but they also share some characteristics. In each case, the
usual properties, theorems and algorithms of classical CSP can (or cannot)
be extended.

Faced with this situation, the valued CSP framework has been designed with
two antagonistic aims:

   - generality: to contain a large number of existing extensions of the
     CSP framework designed to deal with overconstrained problems. It will
     then be possible to analyze the differences and similarities between
     the frameworks.

   - specificity: the framework should have enough properties to enable the
     design of useful generic proofs and algorithms in order to avoid the
     repeated development done in each specific framework.

The algebraic approach followed is not new; it is inspired by related work
on the generalization of shortest path algorithms or, more generally, of
dynamic programming algorithms.

In this talk, I will show how the Valued CSP framework achieves these goals.
A related framework, the Semiring-CSP framework, will be compared to the
Valued CSP framework and we will finally show how some real problems can be
cast as Valued CSP and also how they can be solved.
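
The spirit of the framework, a single solver parameterized by an abstract
valuation structure, can be sketched as follows. The `combine`/`identity`
interface and the toy constraints are illustrative assumptions, not the
paper's notation:

```python
from itertools import product

def best_assignment(domains, constraints, combine, identity):
    """Exhaustively solve a tiny valued CSP.  Each constraint maps a
    full assignment to a valuation; valuations are aggregated with the
    `combine` operator of the chosen valuation structure: sum yields a
    weighted CSP, max a possibilistic/fuzzy-style CSP."""
    best, best_val = None, None
    for values in product(*domains):
        val = identity
        for c in constraints:
            val = combine(val, c(values))
        if best_val is None or val < best_val:
            best, best_val = values, val
    return best, best_val

# Two variables over {0, 1, 2} with three illustrative soft constraints,
# each returning a violation cost:
domains = [range(3), range(3)]
constraints = [
    lambda v: 3 if v[0] == v[1] else 0,   # prefer different values
    lambda v: abs(v[0] - 2),              # prefer v0 near 2
    lambda v: 2 - v[1],                   # prefer v1 near 2
]
print(best_assignment(domains, constraints, lambda a, b: a + b, 0))
print(best_assignment(domains, constraints, max, 0))
```

Swapping only the aggregation operator switches between extensions of the
CSP framework, which is exactly the kind of genericity the valued CSP
framework is after.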

CSPLib: a benchmark library for constraints?

Bart Selman
Department of Computer Science, Cornell University, Ithaca, NY 14853, USA

Toby Walsh
Department of Computer Science, University of Strathclyde,
Glasgow G1 1XH, Scotland

At present, constraint satisfaction algorithms are most often
benchmarked using random problems made up of simple binary constraints. 
There is, however, an increasing dissatisfaction with this
approach. Random problems usually lack the structures found in real
world problems, and it is often these structures that allow us to solve 
problems of otherwise formidable size. There is also an increasing amount of 
research on non-binary constraints. Such constraints allow us to represent 
the structures found in real world problems, and these structures can be 
lost when a problem is flattened out into binary constraints. There is 
thus a very great need for a library of real world non-binary constraint 
satisfaction problems.

A benchmark library could have many benefits for the constraints community. 
It will, for instance, make it easier and quicker to benchmark new
algorithms, and will tackle many of the criticisms made of random problems. 
A benchmark library may also have other, less direct benefits. For instance, 
it will promote a standard format in which to specify problems. This might
help researchers collaborate and exchange results. It also allows systems
to be built more easily as the problem format can act as an interlingua 
between, say, one person's preprocessing procedure and another person's 
local search method. Finally, a benchmark library is an excellent setting
in which to study issues concerning modelling and reformulation. The library 
can include multiple representations of the same problem, as well as tools to 
change between different representations. 

In this section of the workshop, we therefore would like to identify what 
people would want from such a library, to discuss possible formats for 
representing problems, and to identify ways of collecting a reasonable size
corpus of problems.

A constraint programming pre-processor for a bus driver scheduling system

Barbara M. Smith, Colin J. Layfield and Anthony Wren
School of Computer Studies, University of Leeds, Leeds LS2 9JT, UK

In planning urban bus operations, scheduling the drivers is the next
stage after allocating buses to the timetabled journeys to produce a
set of bus workings. Driver scheduling involves constructing a set of
shifts such that every bus is always assigned a driver; the number of
shifts is minimized; and each shift obeys the agreed rules on such matters
as maximum driving time, minimum meal-break time, and so on. The bus
driver scheduling problem can be modelled, using integer linear programming,
as a set covering problem. TRACS II is a bus and train driver scheduling
system using this approach which has been developed at the University of
Leeds; earlier systems from which it has been developed have been in use
in the U.K. bus industry since 1984. A very large number of possible shifts
(currently up to 100,000) is generated, and a subset covering all the bus
work is selected to form the driver schedule. The set covering approach has
proved to be easily adaptable to widely different scheduling conditions
and produces very good results.
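
The set covering cast of driver scheduling can be sketched as follows. Note
that TRACS II selects shifts by integer linear programming; the greedy rule
below, and the piece and shift names, are only illustrative stand-ins for
the shape of the model:

```python
def greedy_cover(pieces, shifts):
    """Greedy heuristic for set covering: `pieces` is the set of work
    pieces to cover, `shifts` maps a shift name to the set of pieces it
    covers.  Repeatedly pick the shift covering the most still-uncovered
    work until everything is covered."""
    uncovered = set(pieces)
    chosen = []
    while uncovered:
        name, covers = max(shifts.items(),
                           key=lambda kv: len(kv[1] & uncovered))
        if not covers & uncovered:
            raise ValueError("remaining work cannot be covered")
        chosen.append(name)
        uncovered -= covers
    return chosen

# Hypothetical pieces of bus work delimited by relief opportunities:
pieces = {"p1", "p2", "p3", "p4", "p5"}
shifts = {
    "early":  {"p1", "p2"},
    "middle": {"p2", "p3", "p4"},
    "late":   {"p4", "p5"},
    "split":  {"p1", "p5"},
}
print(greedy_cover(pieces, shifts))
```

In the real system each shift also carries a cost reflecting the work
rules, and the number of candidate shifts (up to 100,000) is what makes
reducing the relief opportunities so valuable.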

The complexity of the set covering problem depends to a large extent on the
number of relief opportunities in the bus schedule. A relief opportunity
occurs when a bus is scheduled to be at a point where the driver can hand
over the bus to another driver. The more relief opportunities, the more
pieces of work there are to be covered, and the more possible shifts can be
generated; hence the more constraints and variables there are in the set
covering problem.

The aim of the work described in this paper is to use constraint programming
as a pre-processing step for TRACS II. A subset of the relief opportunities
is selected, and the shifts generated in TRACS II can use only these
selected opportunities to change drivers. A good selection of relief
opportunities will drastically reduce the size of the subsequent set covering
problem while preserving the quality of its solution.

The selection is done by using constraint programming to find several
possible good ways of covering the morning and evening parts of the bus
workings separately; a random element ensures that different ways of covering
the work are explored on each run. The relief opportunities used in the
partial schedules are passed to TRACS II, which runs in the normal way.
Reducing the number of relief opportunities in this way reduces the total
time taken to produce a schedule by more than 75% in some cases. The
schedules produced have the same number of shifts as those generated
from the full set of relief opportunities, and the cost of the schedules
is increased only slightly, if at all.

Guided Local Search Joins the Elite in Discrete Optimisation

Chris Voudouris
Intelligent Systems Research Group, BT Laboratories, British
Telecommunications plc, UK

Edward Tsang
Department of Computer Science, University of Essex, UK

Developed from constraint satisfaction as well as operations research ideas,
Guided Local Search (GLS) and Fast Local Search are novel meta-heuristic search
methods for constraint satisfaction and optimisation. GLS sits on top of other
local-search algorithms. The basic principle of GLS is to penalise features 
exhibited by the candidate solution when a local-search algorithm settles in
a local optimum. Using penalties is an idea used in operations research before.
The novelty in GLS is in the way that features are selected and penalised.
FLS is a way of reducing the size of the neighbourhood. GLS and FLS together
have been applied to a non-trivial number of satisfiability and optimisation
problems and have achieved remarkable results. One of their most outstanding
achievements is in the well-studied travelling salesman problem, in which they
obtained results as good as, if not better than, those of state-of-the-art
algorithms.
In this paper, we shall outline these algorithms and describe some of their
discrete optimisation applications.
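
The penalty mechanism described above can be sketched as follows. The
feature names, costs and the lambda parameter are illustrative assumptions,
though the utility rule c_f / (1 + p_f) is the standard GLS selection rule:

```python
def augmented_cost(cost, lam, penalties, features):
    """GLS augmented objective: the true cost plus lambda times the
    penalties of the features the candidate solution exhibits."""
    return cost + lam * sum(penalties[f] for f in features)

def penalize(costs, penalties, present):
    """At a local optimum, increment the penalty of the exhibited
    feature(s) of highest utility c_f / (1 + p_f) -- the GLS rule for
    deciding which features to discourage next."""
    utils = {f: costs[f] / (1 + penalties[f]) for f in present}
    top = max(utils.values())
    for f, u in utils.items():
        if u == top:
            penalties[f] += 1

# Hypothetical TSP-style features: "edge used", with cost = edge length.
costs = {"ab": 5.0, "bc": 2.0, "ca": 4.0}
penalties = {f: 0 for f in costs}
penalize(costs, penalties, present={"ab", "bc"})
print(penalties)   # the long edge "ab" is now penalised
```

The underlying local search then minimizes the augmented cost rather than
the true cost, so penalised features are gradually driven out of the
solutions it settles in.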

Solution stability as an optimization problem

Richard J. Wallace and Eugene C. Freuder
Department of Computer Science, Kingsbury Hall,
University of New Hampshire, Durham, NH 03824, USA

An important extension of constraint technology involves problems
that undergo changes that may invalidate the current solution.
An approach that we have pursued is to search for solutions
that are likely to remain valid in the face of change.
We call these {\it stable} solutions.
When changes to a problem are temporary and recurrent we want to
collect information related to relative likelihood of change
and incorporate it into our search procedure, so that the latter
not only gives us a solution to the current problem but also
ensures that the solution will be relatively stable in the future.

In this talk we review the various issues associated with
the solution stability problem.
These include representational issues, issues related to acquiring
information for choosing assignments for a CSP, as well as questions
regarding the usefulness of
complete and incomplete problem solving strategies.

All this bears on the question of how one characterizes optimization
in this domain and how one goes about finding optimal or quasi-optimal
solutions.

Multithreaded Constraint Programming: A Hybrid Approach

Fabian Zabatta, Logic Based Systems Lab, Brooklyn College and CUNY
Graduate School, Computer Science Department, 2900 Bedford Avenue,
Brooklyn, NY 11210, USA

Although powerful and robust, constraint programming often reduces to a
backtracking constrained search. Consequently, the search time for many
complex problems can be great. One solution is multithreaded parallelization.
Multithreaded parallelization can greatly improve search performance, not
only increasing the size of solvable problems but also improving heuristic
solutions that were previously limited by time. Presented is a hybrid approach
to application-based multithreaded parallel constraint programming. This
approach uses threads to perform a parallel search that combines a best-first
approach and the standard backtracking approach. We illustrate the effectiveness
of the approach on the Set Covering Problem and report computational results
of several large problem instances.
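
A minimal sketch of root-level search partitioning across threads, using
n-queens as a hypothetical stand-in problem (the paper's actual hybrid
combines best-first and backtracking search on set covering; note also that
CPython threads do not speed up CPU-bound search, so this only illustrates
the partitioning idea):

```python
from concurrent.futures import ThreadPoolExecutor

def solve_from(prefix, n):
    """Count n-queens completions of a partial placement by depth-first
    backtracking; prefix[r] is the column of the queen in row r."""
    row = len(prefix)
    if row == n:
        return 1
    total = 0
    for col in range(n):
        # Reject columns attacked by an already-placed queen.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(prefix)):
            total += solve_from(prefix + [col], n)
    return total

def parallel_count(n):
    """Partition the search at the root: each first-row choice is
    explored in its own thread, then the counts are combined."""
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(lambda c: solve_from([c], n), range(n)))

print(parallel_count(6))   # 6-queens has 4 solutions
```

In a true multithreaded constraint solver the threads would additionally
share bounds and heuristic information, which is where the best-first
component of the hybrid comes in.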

Document last modified on September 4, 1998.