The main problem is not just that interval arithmetic is at least twice as slow as ordinary computer arithmetic, but that the margins of error keep widening over successive computations. Of course, this margin of error is present in your computations whether you use interval arithmetic or not - at least now you have your "known unknowns" - but we humans normally do not like to face it. There are other problems too: division is ill-defined when the divisor interval contains 0, multiplication does not distribute over addition (it is only subdistributive), and you still have to deal with floating-point precision and rounding errors when the endpoints of the intervals are themselves floating-point numbers.
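To make the widening and the non-distributivity concrete, here is a minimal Python sketch. The Interval class is my own illustration, and proper outward rounding of the endpoints is deliberately omitted:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtracting an interval flips its endpoints.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product interval spans all four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

# The width (the margin of error) grows with each operation:
x = Interval(1.9, 2.1)   # "2, give or take 0.1"; width 0.2
y = x * x                # [3.61, 4.41]; width 0.8
z = y * x                # [6.859, 9.261]; width ~2.4

# Non-distributivity: a*(b+c) is contained in, but narrower than, a*b + a*c
a = Interval(-1.0, 1.0)
b = Interval(1.0, 2.0)
c = Interval(-2.0, -1.0)
print(a * (b + c))       # Interval(lo=-1.0, hi=1.0)
print(a * b + a * c)     # Interval(lo=-4.0, hi=4.0) -- four times as wide
```

In a careful implementation the lower endpoint of each result would additionally be rounded toward negative infinity and the upper endpoint toward positive infinity, which is precisely the floating-point bookkeeping mentioned above.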
Despite all these problems, interval arithmetic might still be our best bet for performing meaningful computations on computers. Interestingly, Knuth expresses a similar view in TAOCP Volume II ("Seminumerical Algorithms"), but sadly does not expand much on the topic.
If you're intrigued by such things, you might also want to check out affine arithmetic and arbitrary-precision arithmetic.
(Originally posted on Advogato.)