8th DIMACS Implementation Challenge:

The Traveling Salesman Problem

(This page last revised on December 11, 2008.)

About the Challenge

What are the goals?

The TSP is probably the most-studied optimization problem of all time, and often the first problem that newcomers to the field (or visitors from other domains) attack. Consequently, it has one of the most fragmented literatures in the field, with many papers written in apparent ignorance of what has been done before.

One goal of this Challenge is to create a reproducible picture of the state of the art in the area of TSP heuristics (their effectiveness, their robustness, their scalability, etc.), so that future algorithm designers can quickly tell on their own how their approaches compare with already existing TSP heuristics. To this end we are identifying a standard set of benchmark instances and generators (so that quality of tours can be compared on the same instances), as well as a benchmark implementation of a well-known TSP heuristic (the greedy or "multi-fragment" algorithm), so that the effect of machine speed on reported running time can (roughly) be normalized away.
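For readers unfamiliar with it, the greedy (multi-fragment) heuristic builds a tour by repeatedly taking the shortest remaining edge that neither gives a city a third tour edge nor closes a cycle before every city has been joined. The benchmark code on the download page should be used for the actual timing runs; the fragment below is only a rough illustrative C sketch of the idea (the function names and the brute-force enumeration of all n(n-1)/2 edges are ours, chosen for brevity, and tuned implementations use neighbor lists and geometric data structures instead; compile with -lm).

    /* Rough illustrative sketch of the greedy ("multi-fragment") idea,
     * NOT the benchmark code from the download page: sort all edges by
     * length and accept an edge unless it would give a city degree three
     * or close a cycle before all cities are joined.  Enumerating all
     * n(n-1)/2 edges keeps the sketch short but limits it to small n. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    typedef struct { int u, v; double len; } Edge;

    static int find(int *parent, int x)   /* union-find with path compression */
    {
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    }

    static int by_length(const void *a, const void *b)
    {
        double d = ((const Edge *)a)->len - ((const Edge *)b)->len;
        return (d > 0) - (d < 0);
    }

    /* x, y: city coordinates; n: number of cities.  Returns the tour length. */
    double greedy_tour_length(const double *x, const double *y, int n)
    {
        Edge *e = malloc((size_t)n * (n - 1) / 2 * sizeof *e);
        int *deg = calloc(n, sizeof *deg);
        int *parent = malloc(n * sizeof *parent);
        int m = 0, accepted = 0;
        double tour = 0.0;

        for (int i = 0; i < n; i++) parent[i] = i;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++) {
                e[m].u = i; e[m].v = j;
                e[m].len = hypot(x[i] - x[j], y[i] - y[j]);
                m++;
            }
        qsort(e, m, sizeof *e, by_length);

        for (int k = 0; k < m && accepted < n; k++) {
            int u = e[k].u, v = e[k].v;
            if (deg[u] >= 2 || deg[v] >= 2) continue;    /* would create degree 3 */
            int ru = find(parent, u), rv = find(parent, v);
            if (ru == rv && accepted < n - 1) continue;  /* premature subtour */
            parent[ru] = rv;
            deg[u]++; deg[v]++;
            tour += e[k].len;
            accepted++;
        }
        free(e); free(deg); free(parent);
        return tour;
    }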

A second goal is to enable current researchers to compare their codes with each other, in hopes of identifying the more effective of the recent algorithmic innovations that have been proposed, both for fast heuristics and for attempts at improving on the classical Lin-Kernighan algorithm. Although many implementations do not include the most sophisticated speed-up tricks and thus may not be able to compete on speed with highly tuned alternatives, we may be able to gain insight into the effectiveness of various algorithmic ideas by comparing codes with similar levels of internal optimization. It will also be interesting to compare the running times of the best current optimization codes with those of the more complicated heuristics on instances that both can handle.

There are also two more tangible initial objectives, one of which has now been achieved:

  1. To serve as the basis for a summary chapter on the experimental evaluation of TSP heuristics, written by David Johnson and Lyle McGeoch, for the new book The Traveling Salesman Problem and Its Variations, G. Gutin and A. P. Punnen (Editors), Kluwer Academic Publishers, Boston, 2002, 369-443. A draft of this chapter is available in PostScript (80 pages) and PDF.

  2. To produce a DIMACS technical report containing all the results obtained by the participants. All participants will be listed as co-authors of this report, which will be available from the DIMACS website. The deadline for submissions to this report has been extended to 1 July 2002.

Who can participate?

This challenge is open to anyone who wishes to participate. Participants may submit results for as many algorithms as they wish. Results for variants of the same algorithm that help quantify the effects of various design choices are especially welcome. We are mainly interested in evaluating heuristics, but welcome any optimizers who wish to report results for the benchmarks, as this will provide an interesting perspective.

To date we have concentrated on the symmetric TSP, although we have recently added a preliminary webpage devoted to the general (asymmetric) TSP, and hope to begin collecting results on that as well.

If you are interested in participating in the Challenge, send email to dsj@research.att.com to let us know of your plans.

How does one participate?

Participants should download benchmark instances and code for the benchmark heuristic and instance generators from the download page on this site. The codes are written in C, and participants need to be able to compile them on their own machines. A README file provides instructions for using the instance generators to create the random instances in the testbed and for verifying that they have been created correctly.

Two sorts of test reports should be submitted to the organizers. The following descriptions assume your code is running on a single processor. If your algorithm exploits the parallelism of a multiprocessor machine, contact the organizers about how to adapt the reports. (Suggestions welcome!)

  1. For each machine on which the experimental results described in item 2 below are provided, running times for a set of runs of the benchmark heuristic on selected instances from the random Euclidean testbed, ranging from 1,000 to 10,000,000 cities (or fewer if your machine does not have enough memory for the largest of these).
  2. For each algorithm about which the participant wants to provide results, a list of quadruples (Instance Name, Tour Length Produced, User Time in Seconds, Memory Usage in Megabytes) for all the benchmark instances on which the algorithm can be run in feasible amounts of time and within the main memory of your machine (i.e., without substantial amounts of paging).

    The end-to-end running time should include the time to read the instance and create all relevant data structures. The memory usage reported should be the maximum amount actually used, rather than the total allocated. The UNIX top utility calls this RES as opposed to SIZE; the ps utility refers to it as RSS. As long as you are running on a lightly loaded machine, this should reflect the maximum memory actually needed by the code. (A sketch of one way a code can report this figure itself appears after this list.) The memory usage field can be left blank for all but the largest two sizes of random Euclidean instances that you can run. However, if your code's memory usage can vary significantly for instances with the same number of cities, you should also report memory sizes for large instances of other types. Contact the organizers if you are having trouble measuring memory usage or have alternative suggestions about how to measure it.

    For parameterized algorithms, no hand tweaking of the parameters to fit the instances is allowed. Either the same parameter settings should be used for all instances, or else the process of adapting the parameters to the instance should be automated and the time to adapt the parameters should be included in the reported running time. The adaptation process should use only the information present in the instance itself (excluding the NAME and COMMENT lines in the instance headers).

    If the algorithm is randomized, multiple runs may be advisable, in which case multiple lists of results should be submitted rather than average results. (The extra runs can concentrate on the instances for which the algorithm has higher variability.) The organizers will take on the task of computing common meaningful statistics given the raw data. One statistic we will not compute for an algorithm is "Best Solution Found," although participants may submit results for algorithms of the form "perform n runs of algorithm A and return the best tour found" so long as they report both the best tour and the overall time to perform all n runs.

    You may add additional fields to the report if you think they may be relevant. For instance, if your algorithm is sufficiently fast that running time is dominated by the time to read the input, you might want to add a field reporting the reading time separately. Another possibility would be to report the length of the starting tour when using a local improvement algorithm that works by first using a heuristic to generate a starting tour. When submitting reports, be sure to provide an explanation of the meanings of any additional data fields you include.
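As promised above, here is a minimal sketch of one way a code running on a UNIX-like system can report its own peak memory usage via getrusage. This is a convenience, not a required part of the report; the function name is ours, the units of ru_maxrss vary between systems (kilobytes on Linux, for example), and some systems do not fill the field in at all, in which case top or ps as described above remains the way to measure.

    /* Sketch: have the process report its own peak resident set size.
     * ru_maxrss is in kilobytes on Linux; units differ elsewhere, and
     * some systems leave the field zero -- check getrusage(2) locally. */
    #include <stdio.h>
    #include <sys/resource.h>

    void report_peak_memory(void)
    {
        struct rusage ru;
        if (getrusage(RUSAGE_SELF, &ru) == 0)
            printf("peak resident set size: %ld kB (%.1f MB)\n",
                   ru.ru_maxrss, ru.ru_maxrss / 1024.0);
    }

A natural place to call such a routine is just before the program exits, after the tour has been produced and its length printed.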

Samples of both types of report are available from the results page.

In addition, participants should submit a description of at most two pages (text, latex, pdf, postscript) for each algorithm tested. This description should give a high-level view of how the algorithm works and any important implementation tricks, and provide pointers to more complete descriptions elsewhere (journal articles, tech reports, websites, etc.). This is also an opportunity to point out speed-up tricks that were not used, and which might have yielded better running times if incorporated.

Reports should be sent by email to dsj@research.att.com.

What are the benchmark instances?

There are four types of benchmark instances.
  1. All the symmetric TSP instances in Gerd Reinelt's TSPLIB with 1,000 or more cities. (These days, instances with fewer cities typically do not provide much challenge even to optimization codes.) There are 34 such instances, ranging in size from 1,000 to 85,900 cities. All but one consist of points in the plane under the Euclidean metric, with distances rounded to integers. The exception is a 1032-city instance given by its distance matrix, which can be ignored if the code being evaluated only handles geometric instances.
  2. 26 instances consisting of cities uniformly distributed in the 1,000,000 by 1,000,000 square under the Euclidean metric, with sizes ranging from 1,000 to 10,000,000 cities. These instances are included in part to allow us to examine the scalability of the codes in the study. (A rough sketch of what a generator for this testbed looks like appears after this list; actual submissions must use the official generators.)
  3. 22 instances consisting of randomly clustered points in the 1,000,000 by 1,000,000 square under the Euclidean metric, with sizes ranging from 1,000 to 100,000 cities. Heuristics typically have a harder time with these, so these should help us study robustness as well as scalability.
  4. 7 instances consisting of random distance matrices, distances chosen independently and uniformly from the integers between 0 and 1,000,000. There's nothing realistic about these (and they can be ignored if your code only handles geometric instances), but they do tend to provide a real challenge to heuristics.
Comments on the testbeds and suggested additions are welcome.
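For concreteness, the sketch below shows roughly what a generator for the uniform random Euclidean testbed looks like: it writes n points drawn from the 1,000,000 by 1,000,000 square in TSPLIB's EUC_2D format. It is illustrative only; submissions must use the official generators from the download page, since this sketch's random number stream (plain rand()) and header text will not reproduce the Challenge instances.

    /* Illustrative only: write n uniform points from the 1,000,000 by
     * 1,000,000 square as a TSPLIB EUC_2D instance on standard output.
     * The official Challenge generators must be used for submissions;
     * rand() is used here purely for brevity, and on systems with a
     * small RAND_MAX it will not even cover the full coordinate range. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int n = (argc > 1) ? atoi(argv[1]) : 1000;
        unsigned seed = (argc > 2) ? (unsigned)atoi(argv[2]) : 1;
        srand(seed);

        printf("NAME : sketch%d\n", n);
        printf("COMMENT : %d uniform random points (illustrative sketch)\n", n);
        printf("TYPE : TSP\n");
        printf("DIMENSION : %d\n", n);
        printf("EDGE_WEIGHT_TYPE : EUC_2D\n");
        printf("NODE_COORD_SECTION\n");
        for (int i = 1; i <= n; i++)
            printf("%d %d %d\n", i, rand() % 1000000, rand() % 1000000);
        printf("EOF\n");
        return 0;
    }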

What is the schedule?

The deadline for the Tech Report is now 1 July 2002, so it is not too late to submit if you wish to be included in that report. Even after the report is done, the Challenge will continue to welcome new reports, and this website will be updated periodically as new data accumulates. We hope to maintain the website indefinitely, in line with the first two goals mentioned above.

What will the Tech Report contain?

As currently planned, the Tech Report will have three sections.
  1. An introduction, written by the organizers, describing the Challenge itself, summarizing the range of participants and their codes, and drawing any general conclusions that may arise from the study.
  2. A section presenting the results for the various TSP codes studied, including tour quality and running times, both raw running times and times normalized to machine speed as determined by the benchmark heuristic code (a sketch of one possible normalization scheme appears after this list). We realize that this will be only a rough normalization, but it should definitely be better than no normalization at all. The section for each code will also contain the short one-to-two-page algorithm descriptions mentioned above.
  3. A section providing the details of the machines used in the various studies and the running times reported for the benchmark heuristic on each of these machines.
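The precise normalization scheme is up to the organizers; one natural possibility (our assumption, not a statement of what the Tech Report will do) is to scale each reported time by the ratio between the benchmark heuristic's time on a chosen reference machine and its time on the participant's machine, measured on a benchmark instance of comparable size:

    /* One plausible normalization (an assumption, not the organizers'
     * published method): express a running time measured on the
     * participant's machine in "reference machine seconds" by scaling
     * it with the ratio of benchmark-heuristic times on the two machines. */
    double normalized_time(double reported_seconds,
                           double benchmark_seconds_reference,
                           double benchmark_seconds_participant)
    {
        return reported_seconds *
               (benchmark_seconds_reference / benchmark_seconds_participant);
    }

For example, if the benchmark heuristic takes 10 seconds on the reference machine but only 5 seconds on yours, a 100-second run of your code would count as roughly 200 normalized seconds.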

Who are the organizers?

David Johnson, AT&T Labs - Research
Lyle McGeoch, Amherst College
Fred Glover, Leeds School of Business, University of Colorado
Cesar Rego, Hearin Center for Enterprise Science, University of Mississippi

Questions and comments should be sent to dsj@research.att.com.

Useful Links

Challenge Homepage
Download Page
Results Page
Comparisons Page
For a list of recent updates to this site, click here.

Talk on Challenge at 2000 International Symp. on Math. Programming (29 slides): postscript, pdf

Gerd Reinelt's TSPLIB
Bill Cook's TSP Page
Pablo Moscato's TSPBIB

Bibliography

  1. Experimental Analysis of Heuristics for the STSP, D. S. Johnson and L. A. McGeoch, in The Traveling Salesman Problem and its Variations, G. Gutin and A. Punnen, Editors, Kluwer Academic Publishers, 2002, Boston, 369-443. Postscript of near-final draft (80 pages) [PDF version]

  2. Experimental Analysis of Heuristics for the ATSP, D. S. Johnson, G. Gutin, L. A. McGeoch, A. Yeo, W. Zhang, and A. Zverovich, in The Traveling Salesman Problem and its Variations, G. Gutin and A. Punnen, Editors, Kluwer Academic Publishers, 2002, Boston, 445-487. Postscript of near-final draft (45 pages) [PDF version]

  3. A Theoretician's Guide to the Experimental Analysis of Algorithms, D. S. Johnson. To appear in Proceedings of the 5th and 6th DIMACS Implementation Challenges, M. Goldwasser, D. S. Johnson, and C. C. McGeoch, Editors, American Mathematical Society, Providence, 2002. Postscript of near-final draft (36 pages). [PDF version]

  4. The Traveling Salesman Problem: A Computational Study, D. L. Applegate, R. E. Bixby, V. Chvátal, and W. J. Cook, Princeton University Press, Princeton, NJ, 2006.