Sponsored by the Rutgers University Department of Mathematics and the
Center for Discrete Mathematics and Theoretical Computer Science (DIMACS)

Drew Sills, Rutgers University, asills {at} math [dot] rutgers [dot] edu
Doron Zeilberger, Rutgers University, zeilberg {at} math [dot] rutgers [dot] edu

Title: Two Tricks Related to Newton's Method

Speaker: Adi Ben-Israel, Rutgers University

Date: Thursday, September 28, 2006 5:00pm

Location: Hill Center, Room 705, Rutgers University, Busch Campus, Piscataway, NJ


Two tricks, based on the geometry of Newton's method, are presented.
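For context, classical Newton's method iterates x_{k+1} = x_k - f(x_k)/f'(x_k), using the values of f and f' at each step. A minimal sketch of this baseline (function names and tolerances are illustrative, not from the talk):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classical Newton's method for solving f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)   # step along the tangent line to its root
    return x

# Example: approximate sqrt(2) via f(x) = x^2 - 2
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Both tricks below use exactly this data (f and f') but exploit the geometry differently.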

  1. Trick 1: An iterative method for solving scalar equations f(x) = 0 that uses the same data as Newton's method (the values of f and f'), has convergence order 1 + sqrt(2), and coincides with Halley's method (which has cubic convergence) for quadratics; see [1].
  2. Trick 2: An iterative method for the minimization of convex functions f: R^n --> R, called the Newton Bracketing (NB) method; see [3].

The NB method maintains a bracket of lower and upper bounds on the minimum value. Each NB iteration performs one iteration of the Directional Newton Method [2], and the bracket shrinks by a factor of 1/2 per iteration, on average.

An advantage of the NB method (over, say, gradient methods) is its natural stopping rule: stop when the bracket size falls below a given tolerance. The NB method is applied to large-scale Fermat-Weber location problems, where, counterintuitively, it performs better the larger the problem. This anomaly is explained by an interesting and unexpected "experimental" fact.
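The Fermat-Weber problem asks for a point minimizing the weighted sum of Euclidean distances to given data points. A minimal sketch of the objective, together with the classical Weiszfeld iteration for comparison (names are illustrative; this is the standard fixed-point scheme, not the NB method of the talk):

```python
import numpy as np

def fermat_weber_objective(x, points, weights):
    """Weighted sum of Euclidean distances from x to the data points."""
    return float(np.sum(weights * np.linalg.norm(points - x, axis=1)))

def weiszfeld(points, weights, iters=200):
    """Classical Weiszfeld iteration for the Fermat-Weber problem."""
    x = np.average(points, axis=0, weights=weights)  # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        d = np.maximum(d, 1e-12)          # guard against landing on a data point
        w = weights / d
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

# Four unit-weight corners of a square: the optimum is the center (1, 1)
pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
wts = np.ones(4)
opt = weiszfeld(pts, wts)
```

Unlike the NB method, Weiszfeld's scheme carries no bracket on the optimal value, so its stopping criteria are heuristic; this is precisely the gap the bracket's natural stopping rule fills.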

(Joint work with Yuri Levin of Queen's University, Canada.)


[1] A. Ben-Israel, Newton's method with modified functions, Contemp. Math. 204 (1997), 39-50.
[2] Y. Levin and A. Ben-Israel, Directional Newton methods in n variables, Math. Comp. 71 (2002), 237-250.
[3] Y. Levin and A. Ben-Israel, The Newton Bracketing method for convex minimization, Comput. Optim. Appl. 21 (2002), 213-229.