Results Page

This page last revised on 22 August 2002. For a list of possibly more recent updates to this site, click here. If a result was submitted too late to be covered in the Challenge summary chapter (80 pages postscript) (PDF) in The Traveling Salesman Problem and its Variations, Gutin and Punnen (eds), Kluwer, 2002, this fact is indicated in the relevant algorithm description.


Machine Comparisons

Machine Comparisons using Benchmark Greedy Code

For a selection of the machines in this study, the chart graphs the average time for a single run of the Benchmark Greedy Code as a function of the number of cities N, with the times normalized by dividing through by N log N (an estimate of the code's running time on random Euclidean instances such as these in the absence of memory hierarchy effects). Note that the fastest machines for small instances are not always the fastest for large ones.


Algorithm Results

In the following examples, the raw data entries are in the format required for submitting results. The normalized entries are html tables that include three extra columns of derived statistics. The summary entries are html tables that summarize the results for similarly-sized instances from the three random instance classes and compare these to the results for five selected TSPLIB instances of comparable sizes. (Participants do not have to produce such tables, as they are produced by cgi-scripts from text files in the specified format.)

The extra columns in the normalized tables give for each instance the following:

  1. Percentage excess over the Held-Karp lower bound.
  2. Percentage excess over the optimal tour length (when the latter is known or tightly bounded).
  3. Normalized running time.
The normalized running times are the predicted user seconds on the 500 MHz Alpha processor in our machine test bed. This prediction is obtained by computing the ratio between the running times of the Benchmark Greedy code on the two machines for instances with the given number of cities, using interpolation when necessary under the assumption that time grows linearly between data points.

Note that although this may be the best we can do as far as normalizing running times, it can be fairly inaccurate, yielding results that may be off by a factor of as much as 2. More Details.
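For concreteness, here is a minimal Python sketch of how the derived columns might be computed, assuming Benchmark Greedy times are available for both machines at a common set of instance sizes. The actual tables are generated by our cgi-scripts; all names below are illustrative.

    import bisect

    def interpolate(xs, ys, x):
        # Piecewise-linear interpolation of y at x; xs must be sorted.
        # Values outside the measured range are clamped to the endpoints.
        i = bisect.bisect_left(xs, x)
        if i == 0:
            return ys[0]
        if i == len(xs):
            return ys[-1]
        t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + t * (ys[i] - ys[i - 1])

    def normalized_time(n, raw_seconds, sizes, alpha_times, local_times):
        # sizes: instance sizes at which the Benchmark Greedy Code was timed;
        # alpha_times, local_times: its times on the 500 MHz Alpha and on the
        # submitter's machine.  Scale the raw time by the interpolated ratio.
        ratio = interpolate(sizes, alpha_times, n) / interpolate(sizes, local_times, n)
        return raw_seconds * ratio

    def percent_excess(tour_length, bound):
        # Percent excess over the Held-Karp bound (or the optimum, if known).
        return 100.0 * (tour_length - bound) / bound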

Many of the implementations submitted are only applicable to geometric instances, thus ruling out si1032 and the random instances with prefix M. In addition, many of the geometric codes do not handle non-integral city coordinates, and so cannot be run on TSPLIB instances d1291, d1655, d2103, fl1400, fl1577, fl3795, rl11849, u1060, u1817, u2152, usa13509, vm1084, or vm1748. To see whether such restrictions apply, check the "Normalized Data" for the implementation in question.

Currently, we have divided the algorithms tested into nine classes:

  1. Tour Construction Algorithms that Emphasize Speed over Quality

  2. Classic Tour Construction Algorithms: > 15% above the HK Bound

  3. Classic Tour Construction Algorithms: < 15% above the HK Bound

  4. Simple Local Search Algorithms (2-Opt, 3-Opt, etc.)

  5. Lin-Kernighan Implementations and Variants

  6. Repeated Local Search Algorithms

  7. Algorithms that Use Chained LK as a Subroutine

  8. Metaheuristics (Tabu Search, Etc.)

  9. Lower Bound and Optimization Algorithms


Tour Construction Algorithms that Emphasize Speed over Quality

These algorithms apply only to geometric instances, and all the implementations have running times that are only a small multiple of the time needed merely to read the instance. Even the slowest (FRP) takes only 3 to 8.5 times the reading time.

The Strip Algorithm [Strip]
( raw data, normalized, summary )

This algorithm works as follows: Divide the instance space into sqrt(N/3) equal-width strips. Then, starting from the leftmost strip, proceed strip by strip, visiting the cities in order of height within each strip, alternately descending and ascending. Connect the last city in the rightmost strip to the first city in the leftmost strip.
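As an illustration, here is a minimal Python sketch of the idea, assuming cities are given as (x, y) coordinate pairs (this is not the Challenge code itself):

    import math

    def strip_tour(cities):
        # cities: list of (x, y) pairs; returns a tour as a list of indices.
        n = len(cities)
        nstrips = max(1, round(math.sqrt(n / 3)))
        xmin = min(x for x, y in cities)
        xmax = max(x for x, y in cities)
        width = (xmax - xmin) / nstrips or 1.0
        strips = [[] for _ in range(nstrips)]
        for i, (x, y) in enumerate(cities):
            s = min(int((x - xmin) / width), nstrips - 1)
            strips[s].append(i)
        tour = []
        for s, strip in enumerate(strips):
            # Alternate ascending and descending order of height (y).
            strip.sort(key=lambda i: cities[i][1], reverse=(s % 2 == 1))
            tour.extend(strip)
        return tour  # implicitly closed: the last city connects to the first

Running the same procedure with the roles of x and y exchanged and keeping the shorter of the two tours gives the Best-Way variant below.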

The Best-Way Strip Algorithm [Strip2]
( raw data, normalized, summary )

Try both horizontal and vertical strips and return the better of the two tours.

The Spacefilling Curve Algorithm [Spacefill]
( raw data, normalized, summary )

Visit the points in the order they would be encountered when traversing a spacefilling curve. For details, see L. K. Platzmann and J. J. Bartholdi, III, "Spacefilling curves and the planar travelling salesman problem,'' J. ACM 36 (1989), 719-737.
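Platzmann and Bartholdi use a particular spacefilling curve; the sketch below conveys the same idea using a Hilbert curve instead (a standard bit-manipulation formulation), so it should not be read as their exact method:

    def hilbert_index(order, x, y):
        # Position of the integer point (x, y) along a Hilbert curve
        # filling a 2**order by 2**order grid.
        d = 0
        s = 1 << (order - 1)
        while s > 0:
            rx = 1 if x & s else 0
            ry = 1 if y & s else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:                      # rotate the quadrant
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            s >>= 1
        return d

    def spacefill_tour(cities, order=16):
        # Scale coordinates onto the grid, then sort cities by curve position.
        side = (1 << order) - 1
        xmin = min(x for x, y in cities); xmax = max(x for x, y in cities)
        ymin = min(y for x, y in cities); ymax = max(y for x, y in cities)
        def grid(v, lo, hi):
            return int((v - lo) / ((hi - lo) or 1.0) * side)
        return sorted(range(len(cities)),
                      key=lambda i: hilbert_index(order,
                                                  grid(cities[i][0], xmin, xmax),
                                                  grid(cities[i][1], ymin, ymax)))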

Bentley's Fast Recursive Partitioning Algorithm [FRP]
( raw data, normalized, summary )

Adaptively partition the space until no remaining unpartitioned rectangle contains more than a fixed number of points, construct Nearest Neighbor tours for each rectangle, and put these subtours together according to the tree structure of the partition. For more details, see J. L. Bentley, "Fast algorithms for geometric traveling salesman problems," ORSA J. Computing 4 (1992), 387-411.
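A much-simplified Python sketch of the recursive partitioning idea (Bentley's actual code chooses cut lines adaptively and stitches the subtours together more carefully; here the leaves are simply concatenated in recursion order):

    def frp_tour(cities, cutoff=50):
        def nn_path(idx):
            # Nearest Neighbor path through the points of one rectangle.
            if not idx:
                return []
            path, rest = [idx[0]], set(idx[1:])
            while rest:
                px, py = cities[path[-1]]
                nxt = min(rest, key=lambda j: (cities[j][0] - px) ** 2 +
                                              (cities[j][1] - py) ** 2)
                path.append(nxt)
                rest.remove(nxt)
            return path

        def recurse(idx):
            if len(idx) <= cutoff:
                return nn_path(idx)
            xs = [cities[i][0] for i in idx]
            ys = [cities[i][1] for i in idx]
            # Split at the median of the wider dimension.
            axis = 0 if max(xs) - min(xs) >= max(ys) - min(ys) else 1
            idx = sorted(idx, key=lambda i: cities[i][axis])
            mid = len(idx) // 2
            return recurse(idx[:mid]) + recurse(idx[mid:])

        return recurse(list(range(len(cities))))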


Classic Tour Construction Algorithms: > 15% above the HK Bound

Algorithms that build a tour out of smaller parts and stop as soon as one has been constructed.

Implementations attributed below to Bentley are described in J. L. Bentley, "Fast algorithms for geometric traveling salesman problems," ORSA J. Computing 4 (1992), 387-411. Those attributed to Johnson-McGeoch (J-M) will be described in D.S. Johnson, J.L. Bentley, L.A. McGeoch, and E.E. Rothberg, "Near-optimal solutions to very large traveling salesman problems," Monograph (to appear). Partial descriptions also appear in the Johnson-McGeoch Challenge summary chapter.

The Greedy Algorithm

Boruvka Algorithm (Concorde Implementation)

"Quick Boruvka" Algorithm (Concorde) (196 Mhz MIPS) [Q-Boruvka]
( raw data, normalized, summary )

Nearest Neighbor (196 MHz MIPS)

Double-Ended Nearest Neighbor (Bentley) [DENN]
( raw data, normalized, summary )

Nearest Insertion (Bentley) [NI]
( raw data, normalized, summary )

Nearest Addition (Bentley) [NA]
( raw data, normalized, summary )

Nearest Augmented Addition (Bentley) [NA+]
( raw data, normalized, summary )

Bentley's implementations of this and the other "Augmented Addition" algorithms covered below use a more powerful "addition rule" than the one in the standard definition. In the standard version, the point to be inserted must be inserted into one of the two tour edges involving the point nearest to it in the current tour. In Bentley's augmented addition rule, one considers all tour edges that have an endpoint no more than twice as far from the point to be inserted as its nearest tour neighbor.
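A short sketch of the candidate-edge selection under the augmented rule (dist() and the list of current tour edges are assumed here; Bentley's code finds the nearest tour neighbor with a K-d tree rather than by scanning):

    def augmented_candidate_edges(p, tour_edges, dist):
        # Standard addition: only the two tour edges at p's nearest tour
        # neighbor are tried.  Augmented rule: try every tour edge with an
        # endpoint within twice the nearest-neighbor distance of p.
        d_nearest = min(min(dist(p, a), dist(p, b)) for a, b in tour_edges)
        return [(a, b) for a, b in tour_edges
                if min(dist(p, a), dist(p, b)) <= 2 * d_nearest]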

Cheapest Insertion (J-M) [CI]
( raw data, normalized, summary )

Convex Hull, Cheapest Insertion (J-M) [CHCI]
( raw data, normalized, summary )

Convex Hull, Greatest Angle Insertion (J-M) [CHGA]
( raw data, normalized, summary )

Random Insertion (Bentley) [RI]
( raw data, normalized, summary )

Random Addition (Bentley) [RA]
( raw data, normalized, summary )

Random Augmented Addition (Bentley) [RA+]
( raw data, normalized, summary )

The Double MST algorithm (J-M) [DblMST]
( raw data, normalized, summary )

Karp's Recursive Partitioning algorithm (J-M)

As described in R. M. Karp, "Probabilistic analysis of partitioning algorithms for the traveling-salesman problem in the plane," Math. of O.R. 2 (1977), 209-224, and implemented by Johnson and McGeoch.

Litke's Recursive Clustering Algorithm (J-M)

As described in J. D. Litke, "An improved solution to the traveling salesman problem with thousands of nodes," C. ACM 27:12 (1984), 1227-1236, and implemented by Johnson and McGeoch.


Classic Tour Construction Algorithms: < 15% above the HK Bound

Farthest Insertion (Bentley) [FI]
( raw data, normalized, summary )

Farthest Addition (Bentley) [FA]
( raw data, normalized, summary )

Farthest Augmented Addition (Bentley) [FA+]
( raw data, normalized, summary )

The Clarke-Wright Savings heuristic (J-M) (average of 10 runs) [Savings]
( raw data, normalized, summary )

Originally introduced in G. Clarke and J.W. Wright, "Scheduling of vehicles from a central depot to a number of delivery points," Operations Research 12 (1964), 568-581.

The Golden-Stewart CCA algorithm (J-M) [CCA]
( raw data, normalized, summary )

CCA stands for "Convex Hull, Cheapest Insertion, Angle Selection." For more details on the algorithm, see B.L. Golden and W.R. Stewart, "Empirical analysis of heuristics," in The Traveling Salesman Problem, Lawler, Lenstra, Rinnooy Kan, and Shmoys (editors), John Wiley & Sons, Chichester 1985, 207-249.

Variants on the Christofides algorithm:

Initial tree = MST (standard minimum spanning tree) or HK1-tree (one-tree generated by the Lagrangean relaxation approach to computing the Held-Karp bound)

Matching = Min (standard minimum weight matching), Approx (greedy matching + local optimization), or half-LK (every other edge of a Lin-Kernighan tour)

Shortcutting = Standard (walk an Euler tour, skipping past previously visited cities) or Greedy (for each city in some order, choose the shortcut that most shortens the tour)
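The Standard shortcutting step is simple enough to sketch directly, assuming the Euler tour is given as a city sequence with repeats:

    def shortcut(euler_tour):
        # Walk the Euler tour, keeping each city the first time it appears.
        seen, tour = set(), []
        for c in euler_tour:
            if c not in seen:
                seen.add(c)
                tour.append(c)
        return tour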


Simple Local Search Algorithms (2-Opt, 3-Opt, etc.)

The Johnson-McGeoch implementations reported on here and in subsequent sections are described in some detail in the chapter "The Traveling Salesman Problem: A Case Study in Local Optimization," D. S. Johnson and L. A. McGeoch, in Local Search in Combinatorial Optimization, E. H. L. Aarts and J. K. Lenstra (editors), John Wiley and Sons, Ltd., 1997, pp. 215-310. Postscript of preliminary draft (103 pages). We expect soon to post results for these algorithms under different parameter settings.

Johnson-McGeoch 2-Opt Implementation

Bentley's 2-Opt Implementation [2opt-B]
( raw data, normalized, summary )

Bentley's implementations of 2-Opt (as well as of 2.5-Opt and 3-Opt, below) are described in his paper "Fast algorithms for geometric traveling salesman problems," ORSA J. Computing 4 (1992), 387-411. They do not rely on neighbor lists, but use nearest neighbor searches in K-d trees to find candidate moves on the fly. Also, unlike the Johnson-McGeoch implementations, they do not perform the first improving move seen, but keep looking until they have either seen 8 improving moves or run out of possibilities, and then perform the best move seen. As with the Johnson-McGeoch implementations, however, they use the Greedy heuristic (called "multi-fragment" by Bentley) to produce starting tours.
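The move-selection discipline just described might be sketched as follows (improving_moves() and apply_move() are hypothetical helpers; the real code generates its candidates via K-d tree searches):

    def bentley_step(tour, improving_moves, apply_move, limit=8):
        # Collect improving moves until `limit` have been seen (or the
        # supply runs out), then perform the single best one.
        best, seen = None, 0
        for move in improving_moves(tour):
            seen += 1
            if best is None or move.gain > best.gain:
                best = move
            if seen >= limit:
                break
        if best is None:
            return False
        apply_move(tour, best)
        return True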

Concorde's 2-Opt Implementation [2opt-ABCC]
( raw data, normalized, summary )

This implementation is one of the options that can be selected for Concorde's edgegen program, where it is used primarily to generate edge-sets for Concorde's optimization routines. It follows Bentley's basic scheme, but only considers about half as many possible moves, leading to faster running times and worse tours.

Bentley's 2.5-Opt Implementation [2.5opt-B]
( raw data, normalized, summary )

In 2.5-Opt, one combines 2-Opt moves with a move consisting of deleting a single city and reinserting it elsewhere in the tour.
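The gain of the extra move type is easy to state. In the sketch below, prev_c and next_c denote the current tour neighbors of city c, and (a, b) is a candidate tour edge for reinsertion (all names are illustrative):

    def reinsertion_gain(c, prev_c, next_c, a, b, dist):
        # Positive gain means removing c from between prev_c and next_c
        # and reinserting it into edge (a, b) shortens the tour.
        removed = dist(prev_c, c) + dist(c, next_c) - dist(prev_c, next_c)
        added = dist(a, c) + dist(c, b) - dist(a, b)
        return removed - added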

Concorde's 2.5-Opt Implementation [2.5opt-ABCC]
( raw data, normalized, summary )

This implementation is one of the options that can be selected for Concorde's edgegen program. See the comments under 2-Opt above.

Johnson-McGeoch 3-Opt Implementation

Bentley's 3-Opt Implementation [3opt-B]
( raw data, normalized, summary )

This version of 3-Opt does not examine all possible classes of 3-Opt moves; in particular, it fails to consider moves that simply reshuffle the three tour segments without reversing any of them.

Concorde's 3-Opt Implementation [3opt-ABCC]
( raw data, normalized, summary )

This implementation is one of the options that can be selected for Concorde's edgegen program. See the comments under 2-Opt above.

Vertex Reinsertion plus Path Reoptimization [VRPR]
( raw data, normalized, summary )

This algorithm, submitted by S. Sathiamoorthy and G. Andal Jayalakshmi, begins by constructing a Nearest Neighbor tour. It then performs two types of local improvement steps. First, if the tour can be shortened by removing a city and reinserting it next to one of its 5 nearest neighbors, this is done in the best possible way (note that such a move is a special case of a 3-Opt move). Second, if any 6 consecutive cities in the tour can be rearranged so as to shorten the overall tour, this too is done in an optimal way.

See G. Andal Jayalakshmi, S. Sathiamoorthy, and R. Rajaram, "A Hybrid Genetic Algorithm - A New Approach to solve Traveling Salesman Problem," International Journal of Computational Engineering Science 2:2 (2001), 339-355, Imperial College Press (PDF file of paper).
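The second improvement step lends itself to a brute-force sketch (720 permutations of a 6-city window, with the tour cities adjacent to the window held fixed; this is an illustration, not the submitted code):

    from itertools import permutations

    def best_window_order(window, left, right, dist):
        # Optimally reorder the 6 cities in `window`, which lies between
        # the fixed tour cities `left` and `right`.
        def cost(order):
            seq = [left] + list(order) + [right]
            return sum(dist(seq[i], seq[i + 1]) for i in range(len(seq) - 1))
        return min(permutations(window), key=cost)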

Burke-Cowling-Keuthen "Hyperopt" Implementations

k-Hyperopt is a restricted version of 2k-Opt. For more details, see postscript (3 pages). The current implementation can only handle Euclidean instances.

The GENI/GENIUS Algorithms of Gendreau, Hertz, and Laporte

These algorithms are described in the authors' paper "New insertion and postoptimization procedures for the traveling salesman problem," Operations Research 40 (1992), 1086-1094. GENI might be viewed as a tour construction heuristic, with the addition that after each insertion a specialized 4- or 5-Opt move is made. GENIUS can be viewed as a special case of 9-Opt, using GENI starting tours. The code was provided by the authors but run by Johnson-McGeoch after tweaking to improve memory management and efficiency (tweaks that did not alter the tours produced). The parameter p represents the length of the neighbor lists used. The value p = 10 was chosen on the basis of limited tests. Reducing p to 3 appears to yield a factor of 2-3 speedup at the cost of a significant degradation in tour quality. Increasing p to 20 yields only marginal tour quality improvements while increasing running time by a factor of 8 or more.


Lin-Kernighan Implementations and Variants

Concorde's version of Lin-Kernighan [LK-ABCC]
( raw data, normalized, summary )

For details, see the draft "Finding tours in the TSP" by Applegate, Bixby, Chvatal, and Cook (59 postscript pages).

Applegate-Cook-Rohe Lin-Kernighan Implementation [LK-ACR]
( raw data, normalized, summary )

A paper, "Chained Lin-Kernighan for large traveling salesman problems," describing this implementation can be found in the Recent Paper section of Bill Cook's home page. Note: The base machine here had insufficient memory to run E10M.0. reported for this instance for the Applegate-Cook-Rohe codes here and below were obtained on an IBM 260 with a 200 Mhz 64-bit Power3 processor and 4 Gigs of memory, with the running time entry in the table being the actual time multipled by 1.4 to account for different processor speeds.

Rohe Lin-Kernighan Implementation [LK-Christo]
( raw data, normalized, summary )

Uses starting tours produced by Rohe's implementation of Christofides with HK one-tree starts, minimum matchings, and greedy shortcuts. The LK implementation is a predecessor of the Applegate-Cook-Rohe implementation.

Johnson-McGeoch Lin-Kernighan Implementation

See "The Traveling Salesman Problem: A Case Study in Local Optimization," D. S. Johnson and L. A. McGeoch, in Local Search in Combinatorial Optimization, E. H. L. Aarts and J. K. Lenstra (editors), John Wiley and Sons, Ltd., 1997, pp. 215-310. Postscript of preliminary draft (103 pages). For the results reported below, don't-look bits were used, tours were represented by two-level trees, and the depth of the LK search was unbounded (except where otherwise noted).

David Neto's Lin-Kernighan Implementation

This implementation attempts to compensate for clustering in the instance by modifying the gain computations appropriately. For more information, see this note describing the approach (8 postscript pages) or Neto's full 222-page Ph.D. Thesis (Efficient Cluster Compensation for Lin-Kernighan Heuristics, Computer Science Department, University of Toronto, 1999) available from this page, which also contains a link to Neto's downloadable source code. Results are averages over three runs.

The following results were submitted after the Challenge summary chapter was completed, but offer more insight into the effectiveness of Neto's cluster compensation techniques. They provide results for averages of 5 runs of his LK implementation, with and without cluster compensation. Cluster compensation can degrade tour quality slightly, but tends to speed up the computation significantly for clustered and TSPLIB instances, and even a bit for Uniform instances.

The new results also allow us to gauge the value of adding quadrant neighbors to nearest neighbors, which typically improves the tours for clustered instances substantially. Note, however, that the quadrant neighbor results are much slower, due to inefficiencies in this code's quadrant neighbor calculation routines. Had efficient K-d tree code been used for this, the running times with and without quadrant neighbors would have been fairly close. The same code was used as for the initial experiments, although these experiments were run on a faster processor.

Nguyen-Yoshihara-Yamamori-Yasunaga Lin-Kernighan Variant

This variant looks for moves that start with a valid 5-Opt move followed by an LK-search that uses 3-Opt rather than 2-Opt moves. The implementation uses don't-look bits and a data structure with properties like those of segment trees. For more details, see this brief text description. These results were submitted after the Challenge summary chapter in The Traveling Salesman Problem and its Variations was completed.

Stem-and-Cycle Ejection Chain Algorithm (Rego, Glover, & Gamboa)

For a description of this algorithm and the experimental runs, click here.

Augmented Lin-Kernighan (Buesching) [ALK]
( raw data, normalized, summary )

A brief description of this algorithm can be found here.

Keld Helsgaun's Implementation of a powerful Lin-Kernighan Variant [Helsgaun]
( raw data, normalized, summary )

This variant adds several new ideas. The first involves computing neighbor lists based on a distance function modified by Lagrangean relaxation. The second replaces the sequence of 2-Opt moves in the basic search step of Lin-Kernighan with a sequence of 5-Opt moves (made feasible by the use of short neighbor lists). For more information, see Helsgaun's home page. A short summary is provided here.


Repeated Local Search Algorithms

Concorde's version of Chained Lin-Kernighan

For details, see the draft "Finding tours in the TSP" by Applegate, Bixby, Chvatal, and Cook (59 postscript pages).

Applegate-Cook-Rohe Chained Lin-Kernighan

A paper, "Chained Lin-Kernighan for large traveling salesman problems," describing this implementation can be found in the Recent Paper section of Bill Cook's home page.

Johnson-McGeoch Iterated Lin-Kernighan (and Iterated 3-Opt)

See "The Traveling Salesman Problem: A Case Study in Local Optimization," D. S. Johnson and L. A. McGeoch, in Local Search in Combinatorial Optimization, E. H. L. Aarts and J. K. Lenstra (editors), John Wiley and Sons, Ltd., 1997, pp. 215-310. Postscript of preliminary draft (103 pages).

David Neto's Iterated Lin-Kernighan Implementation

This implementation attempts to compensate for clustering in the instance by modifying the gain computations appropriately. For more information, see this note describing the approach (8 postscript pages) or Neto's full 222-page Ph.D. Thesis (Efficient Cluster Compensation for Lin-Kernighan Heuristics, Computer Science Department, University of Toronto, 1999) available from this page, which also contains a link to Neto's downloadable source code.

Nguyen-Yoshihara-Yamamori-Yasunaga Iterated Lin-Kernighan Variant

These results were submitted after the Challenge summary chapter in The Traveling Salesman Problem and its Variations was completed. The implementation uses the NYYY implementation of LK mentioned above as its engine, with the maximum LK search depth set to 100. The first set of results is for a version using 12 quadrant neighbors (nearest neighbors on non-geometric instances), Greedy starts, and kicks generated by random 2-bridge moves.

The second set of results uses parameter settings chosen for greater speed: 12 quadrant neighbors, Quick Boruvka starts, and the geometric kicks of Applegate-Cook-Rohe (with k = 250).

Chris Walshaw's Multi-Level Lin-Kernighan Implementations

These algorithms proceed hierarchically: in the inductive step, a starting tour for one level is obtained by applying Concorde's (or Helsgaun's) version of Lin-Kernighan to a smaller problem formed by requiring a subset of the most promising edges to be in the tour. For details, see C. Walshaw, "A multilevel approach to the traveling salesman problem," Operations Research, to appear (21 pages postscript), and C. Walshaw, "A multilevel Lin-Kernighan-Helsgaun algorithm for the travelling salesman problem" (9 pages postscript). Results for the latter algorithm arrived after the Challenge summary chapter was completed.

Keld Helsgaun's Multi-Trial Variant on Lin-Kernighan

In this variant, successive trials use the edges of the current best tour to prune the search. More details are available from Helsgaun's home page.

Burke-Cowling-Keuthen's Variable Neighborhood Search using Hyperopt

Here the basic local search engine is Hyperopt, and the kicks are chosen from a restricted (but increasing) set of hyperopt moves. For more details see postscript (3 pages).


Algorithms that Use Chained LK as a Subroutine

Balas & Simonetti Dynamic Programming heuristic

This algorithm uses dynamic programming to find the best way to locally reorder a tour, with a parameter k that governs how far a city may be moved. For a more detailed description, click here. For a full description, see "Linear Time Dynamic Programming Algorithms for New Classes of Restricted TSPs: A Computational Study," INFORMS J. Computing 13:1 (2001), 56-75. (Postscript and .pdf versions available from Neil Simonetti's TSP Page.) These runs used the Concorde Chained Lin-Kernighan tours reported above for the 360 MHz UltraSPARC-II as starting tours. The reported running time includes both the CLK time and the dynamic programming time; the last column of the raw data gives the dynamic programming time by itself.

Bill Cook's Tour Merging Algorithm

This algorithm is a combination of Concorde's Chained LK with the branch decomposition algorithm of Cook and Seymour (unpublished). The basic approach is described in Section 2.6 of "Finding tours in the TSP," available in the Recent Paper section of Bill Cook's home page. For these results, 10 tours were generated for each instance using N-iteration Chained LK (with geometric kicks), and the branch decomposition algorithm was then used to find an optimal tour subject to the restriction that each edge must be in at least one of the 10 tours. The code was run on all instances having strictly less than 10,000 nodes, but the tour merging failed (due to excessive branch width) on u2319, fnl4461, pla7397, E3k.0, E3k.2, and the M instances.

Chris Walshaw's Multilevel Approach

This algorithm proceeds hierarchically: in the inductive step, a starting tour for one level is obtained by applying Concorde's version of Chained Lin-Kernighan (MLCLK) or Helsgaun's multi-trial variant on Lin-Kernighan (MLLKH) to a smaller problem formed by requiring a subset of the most promising edges to be in the tour. For details, see C. Walshaw, "A multilevel approach to the traveling salesman problem," Operations Research, to appear (21 pages postscript), and C. Walshaw, "A multilevel Lin-Kernighan-Helsgaun algorithm for the travelling salesman problem" (9 pages postscript). Results for the latter algorithm arrived after the Challenge summary chapter was completed.


Metaheuristics (Tabu Search, Etc.)

Zachariasen-Dam Implementations of Tabu Search

These results are obtained using the original implementation described in "Tabu Search on the Geometric Traveling Salesman Problem," M. Zachariasen and M. Dam, Meta-heuristics: Theory and Applications (editors I.H. Osman and J.P. Kelly), Kluwer Academic Publishers, Boston, pp. 571-587, 1996 (17 postscript pages). The results reported here are for different combinations of descending and ascending move-types. For more details, see the linked 2-page postscript note.


Lower Bound and Optimization Algorithms

Rohe's Approximate Held-Karp Bound Computations (Lagrangean Relaxation)

Computes a lower bound on the Held-Karp Lower Bound. In the "html" summaries, "percent excess" is the amount by which the approximate bound falls short of the true bound.

Bounds Computed Using Concorde

Note: Concorde runs for Exact Held-Karp computations for E1M.0, E3M.0, and E10M.0 were performed by Johnson-McGeoch on their local SGI Challenge using Concorde's default settings. Similarly, all optimization runs used Concorde's default settings. Results for random instances were obtained by Johnson-McGeoch on the target 500 MHz Alpha machine. Times for TSPLIB instances (on a different machine of the same type) are reported on the Concorde webpage. Our run for E3k.4 was not completed because of a system crash, but after 73 days the gap between upper and lower bounds was 0.0053% and shrinking at a rate that might have yielded convergence in another 60 days or less, with memory usage at 113 Meg and increasing.


Challenge Homepage

About the Challenge

Download Page

Comparison Page

History of Updates