DIMACS Theory of Computing Seminar


Title:

An Optimal Algorithm for Monte Carlo Estimation

Speaker:

Michael Luby
International Computer Sciences Institute

Place:

DIMACS Seminar Room 431, CoRE Building
Busch Campus, Rutgers University

Time:

4:30 PM
Tuesday, October 17, 1995 (Note Day Change: this replaces the Discrete Math Seminar for the week)

Abstract:

A typical approach to estimating an unknown quantity M is to design an experiment that produces a random variable Z whose expected value is M, run the experiment independently a number of times, and use the average of the outcomes as the estimate. The goal is to produce a good relative estimate of M with high probability. Usually Z is distributed in the interval [0,1], or can be renormalized so that this is the case.
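The fixed-sample-size approach described above can be sketched in a few lines; the uniform-on-[0,1] experiment below is just a toy stand-in for a real experiment with unknown mean M.

```python
import random

def monte_carlo_estimate(experiment, n):
    """Average n independent runs of `experiment`, each returning a value in [0, 1].
    The average is an unbiased estimate of M = E[Z]."""
    return sum(experiment() for _ in range(n)) / n

# Toy example: Z is uniform on [0, 1], so M = 0.5.
random.seed(0)
estimate = monte_carlo_estimate(random.random, 100_000)
```

The difficulty, as the abstract notes, is choosing n: the number of runs needed for a good *relative* estimate depends on properties of the distribution of Z that are unknown in advance.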

We describe an approximation algorithm A which, given small positive parameters c and d, runs independent experiments and produces an estimate that is within a factor of 1+c of M with probability at least 1-d. The number of experiments run by A depends on quantities determined by the distribution of Z that are unknown a priori. We prove that the expected number of experiments run by A is optimal to within a constant factor for every Z.
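The abstract does not spell out algorithm A; as an illustrative sketch only, a standard sequential stopping-rule estimator for [0,1]-valued Z adapts its sample size to the unknown distribution by running until the sum of outcomes crosses a threshold that depends only on c and d. The threshold constants below are assumptions for illustration, not taken from the talk.

```python
import math
import random

def stopping_rule_estimate(experiment, c, d):
    """Sequential sketch (assumed constants, not the talk's algorithm A):
    run experiments until the running sum of [0,1]-valued outcomes crosses
    a threshold depending only on c and d, then return threshold / runs."""
    # Chernoff-style threshold; the constants here are illustrative.
    threshold = 1 + (1 + c) * 4 * (math.e - 2) * math.log(2 / d) / c**2
    total, runs = 0.0, 0
    while total < threshold:
        total += experiment()
        runs += 1
    return threshold / runs

# Toy usage: Z uniform on [0, 1], so M = 0.5; c = d = 0.05.
random.seed(1)
est = stopping_rule_estimate(random.random, 0.05, 0.05)
```

Because the loop stops sooner when outcomes are large, the number of runs automatically scales with quantities determined by the distribution of Z, which is the behavior the abstract attributes to A.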

There are a number of applications where Monte Carlo simulation of the type described above has been used, and where the analysis of the number of experiments to run is far from tight for particular problem instances. Algorithm A can be used directly in these applications to produce a provably good estimate while running the minimal number of experiments needed for the particular problem instance.

JOINT WORK WITH: Paul Dagum, Richard Karp, and Sheldon Ross


Document last modified on October 11, 1995