Computing Geodetic Coordinates
Document Type
Presentation Abstract
Presentation Date
4 March 1999
Abstract
Much of this talk is intended to be accessible to anyone with about a year of calculus, including at least some proofs. The talk will explain the design of an algorithm to compute the geodetic latitude and altitude of a point (aircraft, spacecraft, or submarine) above or slightly under the surface of an oblate-spheroidal planet.
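For concreteness, here is a minimal sketch of one standard fixed-point iteration for this conversion; it is not the algorithm of the talk, and the WGS-84 ellipsoid parameters are an assumption made only for illustration.

    import math

    # WGS-84 ellipsoid parameters (assumed here; the talk does not name a
    # particular oblate spheroid).
    A = 6378137.0             # equatorial radius, in meters
    F = 1.0 / 298.257223563   # flattening
    E2 = F * (2.0 - F)        # first eccentricity squared

    def geodetic_from_cartesian(x, y, z, iterations=10):
        """Return (latitude, longitude, altitude) in radians and meters for a
        planet-centered Cartesian point, by fixed-point iteration on the
        latitude.  Not robust very near the poles; for illustration only."""
        lon = math.atan2(y, x)
        p = math.hypot(x, y)                 # distance from the spin axis
        lat = math.atan2(z, p * (1.0 - E2))  # initial guess
        for _ in range(iterations):
            n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime-vertical radius
            alt = p / math.cos(lat) - n
            lat = math.atan2(z, p * (1.0 - E2 * n / (n + alt)))
        n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
        alt = p / math.cos(lat) - n
        return lat, lon, alt

The point of the talk is precisely what such a sketch leaves open: a proof of how many operations suffice for a prescribed accuracy once every operation is allowed to round.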
The algorithm specifications include the requirement of a mathematical proof that the algorithm will deliver a specified accuracy within a specified number of computer-arithmetic operations, taking into account both the mathematical approximation in the algorithm and the rounding errors from the computer. This means that for each accuracy tolerance epsilon and for each computer-rounding tolerance delta, the proof must supply an integer number of operations that guarantees results within epsilon of the exact value, even if each operation suffers a perturbation of relative size at most delta.
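Stated symbolically (the symbols below are chosen here for concreteness and are not taken from the talk): the proof must exhibit, for every epsilon > 0 and delta > 0, a number of operations

\[
N(\varepsilon,\delta)\in\mathbb{N}
\]

such that the computed latitude and altitude \(\hat{\phi}\) and \(\hat{h}\), produced by at most \(N(\varepsilon,\delta)\) arithmetic operations each contaminated by a relative error of magnitude at most \(\delta\), satisfy

\[
|\hat{\phi}-\phi|\le\varepsilon
\qquad\text{and}\qquad
|\hat{h}-h|\le\varepsilon .
\]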
With IEEE double-precision floating-point arithmetic, the current proof of the current version of the algorithm guarantees an accuracy of one millionth of a degree in the latitude and one centimeter in the altitude, for any point from the deepest ocean to the edge of the galaxy.
There is an "exact" solution obtained by solving a quartic equation, but apparently it admits no tractable upper bounds on the rounding errors.
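For context, one standard setting in which such a quartic arises (not necessarily the formulation the speaker has in mind) is the closest-point problem on the meridian ellipse. With \(p=\sqrt{x^{2}+y^{2}}\) the distance from the spin axis, \(a\) the equatorial radius, and \(b\) the polar radius, the foot point \((u,v)\) of \((p,z)\) on the ellipse satisfies, for some Lagrange multiplier \(\lambda\),

\[
u=\frac{a^{2}p}{a^{2}+\lambda},\qquad
v=\frac{b^{2}z}{b^{2}+\lambda},\qquad
\frac{a^{2}p^{2}}{(a^{2}+\lambda)^{2}}+\frac{b^{2}z^{2}}{(b^{2}+\lambda)^{2}}=1 ,
\]

and clearing denominators in the last equation yields a quartic in \(\lambda\); the geodetic latitude and altitude then follow from \(\tan\phi=a^{2}v/(b^{2}u)\) and the distance from \((p,z)\) to \((u,v)\).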
The lesson is that practical projects for which accuracy is crucial may require not only calculus but also epsilon-delta proofs.
Recommended Citation
Nievergelt, Dr. Yves, "Computing Geodetic Coordinates" (1999). Colloquia of the Department of Mathematical Sciences. 34.
https://scholarworks.umt.edu/mathcolloquia/34
Additional Details
Thursday, 4 March 1999
4:10 p.m. in MA 109
Coffee/Tea/Treats 3:30 p.m. in MA 104 (Lounge)