The TSP is probably the most-studied optimization problem of all time, and often the first problem that newcomers to the field (or visitors from other domains) attack. Consequently, it has one of the most fragmented literatures in the field, with many papers written in apparent ignorance of what has been done before.
One goal of this Challenge is to create a reproducible picture of the state of the art in the area of TSP heuristics (their effectiveness, their robustness, their scalability, etc.), so that future algorithm designers can quickly tell on their own how their approaches compare with already existing TSP heuristics. To this end we are identifying a standard set of benchmark instances and generators (so that quality of tours can be compared on the same instances), as well as a benchmark implementation of a well-known TSP heuristic (the greedy or "multi-fragment" algorithm), so that the effect of machine speed on reported running time can (roughly) be normalized away.
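For readers unfamiliar with the benchmark heuristic, the following is a minimal C sketch of the greedy (multi-fragment) construction, run here on a toy random Euclidean instance. It is illustrative only and is not the benchmark code distributed on the download page (which is considerably more sophisticated); the MAXN bound, the coordinate arrays, the explicit O(n^2) edge list, and the toy instance in main() are simplifying assumptions made for exposition.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define MAXN 1000   /* assumed upper bound on instance size for this sketch */

typedef struct { int u, v; double len; } Edge;

static double xc[MAXN], yc[MAXN];       /* city coordinates */
static int parent[MAXN], degree[MAXN];  /* union-find roots and endpoint degrees */

/* Union-find "find" with path compression, used to detect premature subtours. */
static int find(int a)
{
    return parent[a] == a ? a : (parent[a] = find(parent[a]));
}

/* qsort comparator: shorter edges first. */
static int cmp(const void *a, const void *b)
{
    double d = ((const Edge *)a)->len - ((const Edge *)b)->len;
    return (d > 0) - (d < 0);
}

/* Greedy (multi-fragment) construction: scan edges in nondecreasing length
   order and accept an edge whenever both endpoints still have degree < 2
   and the edge does not close a cycle on fewer than n cities.  The n-1
   accepted edges form a Hamiltonian path; the tour is completed by joining
   its two endpoints (the two cities left with degree 1). */
static void greedy_fragments(int n, Edge *chosen)
{
    int m = 0, taken = 0, i, j, k;
    Edge *edges = malloc((size_t)n * (n - 1) / 2 * sizeof(Edge));

    for (i = 0; i < n; i++) {
        parent[i] = i;
        degree[i] = 0;
        for (j = i + 1; j < n; j++) {
            edges[m].u = i;
            edges[m].v = j;
            edges[m].len = hypot(xc[i] - xc[j], yc[i] - yc[j]);
            m++;
        }
    }
    qsort(edges, (size_t)m, sizeof(Edge), cmp);

    for (k = 0; k < m && taken < n - 1; k++) {
        int u = edges[k].u, v = edges[k].v;
        if (degree[u] >= 2 || degree[v] >= 2)
            continue;                  /* would give a city three tour edges */
        if (find(u) == find(v))
            continue;                  /* would close a subtour too early */
        parent[find(u)] = find(v);
        degree[u]++;
        degree[v]++;
        chosen[taken++] = edges[k];
    }
    free(edges);
}

int main(void)
{
    int n = 100, i;
    double fragment_len = 0.0;
    Edge chosen[MAXN - 1];

    srand(1);                          /* toy random instance, for illustration only */
    for (i = 0; i < n; i++) {
        xc[i] = rand() % 10000;
        yc[i] = rand() % 10000;
    }
    greedy_fragments(n, chosen);
    for (i = 0; i < n - 1; i++)
        fragment_len += chosen[i].len;
    printf("length of the %d fragment edges (tour not yet closed): %.1f\n",
           n - 1, fragment_len);
    return 0;
}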
A second goal is to enable current researchers to compare their codes with each other, in hopes of identifying the more effective of the recent algorithmic innovations that have been proposed, both for fast heuristics and for attempts at improving on the classical Lin-Kernighan algorithm. Although many implementations do not include the most sophisticated speed-up tricks and thus may not be able to compete on speed with highly tuned alternatives, we may be able to gain insight into the effectiveness of various algorithmic ideas by comparing codes with similar levels of internal optimization. It will also be interesting to compare the running times of the best current optimization codes with those of the more complicated heuristics on instances that both can handle.
There are also two more tangible initial objectives, one of which has now been achieved: a presentation on the Challenge at the 2000 International Symposium on Mathematical Programming (the slides are linked below), and a technical report summarizing the Challenge results (see the deadline information below).
This challenge is open to anyone who wishes to participate. Participants may submit results for as many algorithms as they wish. Results for variants of the same algorithm that help quantify the effects of various design choices are especially welcome. We are mainly interested in evaluating heuristics, but welcome any optimizers who wish to report results for the benchmarks, as this will provide an interesting perspective.
To date we have concentrated on the symmetric TSP, although we have recently added a preliminary webpage devoted to the general (asymmetric) TSP, and hope to begin collecting results on that as well.
If you are interested in participating in the Challenge, send email to dsj@research.att.com to let us know of your plans.
Participants should download benchmark instances and code for the benchmark heuristic and instance generators from the download page on this site. The codes are written in C, and participants need to be able to compile them on their own machines. Instructions are provided in a README file for using the instance generators to create the random instances in the testbed and verifying that they have been correctly created.
Two sorts of test reports should be submitted to the organizers. The following descriptions assume your code is running on a single processor. If your algorithm exploits the parallelism of a multiprocessor machine, contact the organizers about how to adapt the reports. (Suggestions welcome!)
The end-to-end running time should include the time to read the instance and create all relevant data structures. The memory usage reported should be the maximum amount actually used, rather than the total allocated. The UNIX top utility calls this RES as opposed to SIZE. The ps utility refers to it as RSS. As long as you are running on a lightly loaded machine, this should reflect the maximum memory actually needed by the code. The memory usage field can be left blank for all but the largest two sizes of random Euclidean instances that you can run. However, if your code's memory usage can vary significantly for instances with the same number of cities, you should also report memory sizes for large instances of other types. Contact the organizers if you are having trouble measuring memory usage or have alternative suggestions about how to measure it.
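If it is more convenient to instrument the code itself than to watch top or ps by hand, one possibility (a sketch, not part of the Challenge codes) is to query getrusage just before the program exits, as below. Note that the units of ru_maxrss are platform-dependent (kilobytes on some systems, bytes on others) and that some systems do not populate the field at all, in which case the top/ps approach described above remains the fallback.

#include <stdio.h>
#include <sys/resource.h>

/* Print the peak resident set size of the current process to stderr. */
static void report_peak_memory(void)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0)
        fprintf(stderr, "peak resident set size: %ld (platform-dependent units)\n",
                (long)ru.ru_maxrss);
}

int main(void)
{
    /* ... read the instance and run the heuristic here ... */
    report_peak_memory();   /* call once, just before the program exits */
    return 0;
}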
For parameterized algorithms, no hand-tweaking of the parameters to fit individual instances is allowed. Either the same parameter settings should be used for all instances, or else the process of adapting the parameters to the instance should be automated, with the time to adapt the parameters included in the reported running time. The adaptation process should use only the information present in the instance itself, not including the NAME and COMMENT lines in the instance header.
If the algorithm is randomized, multiple runs may be advisable, in which case multiple lists of results should be submitted rather than average results. (The extra runs can concentrate on the instances for which the algorithm has higher variability.) The organizers will take on the task of computing common meaningful statistics given the raw data. One statistic we will not compute for an algorithm is "Best Solution Found," although participants may submit results for algorithms of the form "perform n runs of algorithm A and return the best tour found" so long as they report both the best tour and the overall time to perform all n runs.
You may add additional fields to the report if you think they may be relevant. For instance, if your algorithm is sufficiently fast that running time is dominated by the time to read the input, you might want to add a field reporting the reading time separately. Another possibility would be to report the length of the starting tour when using a local improvement algorithm that works by first using a heuristic to generate a starting tour. When submitting reports, be sure to provide an explanation of the meanings of any additional data fields you include.
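As a hypothetical illustration of the first suggestion, reading time can be reported as a separate field by bracketing the input phase with its own timer. In the sketch below, read_instance() and run_heuristic() are placeholders rather than functions from the Challenge codes, and clock(), which measures CPU time, is just one reasonable choice of timer.

#include <stdio.h>
#include <time.h>

/* Placeholders only: substitute your own instance reader and heuristic. */
static void read_instance(const char *filename) { (void)filename; }
static void run_heuristic(void) { }

int main(void)
{
    clock_t t0 = clock();
    read_instance("example.tsp");     /* hypothetical input file */
    clock_t t1 = clock();
    run_heuristic();
    clock_t t2 = clock();

    /* Report the reading time as an extra field alongside the total time. */
    printf("read time:  %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("total time: %.2f s\n", (double)(t2 - t0) / CLOCKS_PER_SEC);
    return 0;
}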
In addition, participants should submit a description of at most two pages (text, LaTeX, PDF, or PostScript) for each algorithm tested. This description should give a high-level view of how the algorithm works and any important implementation tricks, and provide pointers to more complete descriptions elsewhere (journal articles, tech reports, websites, etc.). This is also an opportunity to point out speed-up tricks that were not used but might have yielded better running times if incorporated.
Reports should be sent by email to dsj@research.att.com.
The deadline for the Tech Report is now 1 July 2002, so it is not too late to submit if you wish to be included in that report. Even after the report is done, the Challenge will continue to welcome new reports, and this website will be updated periodically as new data are accumulated. We hope to maintain the website indefinitely, in line with the first two goals mentioned above.
David Johnson, AT&T Labs - Research
Lyle McGeoch, Amherst College
Fred Glover, Leeds School of Business, University of Colorado
Cesar Rego, Hearin Center for Enterprise Science, University of Mississippi
Questions and comments should be sent to dsj@research.att.com.
Challenge Homepage
Download Page
Results Page
Comparisons Page
Talk on Challenge at 2000 International Symp. on Math. Programming (29 slides): postscript, pdf
Gerd Reinelt's TSPLIB
Bill Cook's TSP Page
Pablo Moscato's TSPBIB