<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
By the way, I really like the underlying theme that modern
floating-point arithmetic has properties as rigorous and well
defined as those of integer arithmetic, and these properties can be
relied on.<br>
<br>
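A small illustration of the kind of well-defined property I have in
mind (my own C sketch, not anything from Percival's paper): in IEEE 754
binary arithmetic with round-to-nearest, the rounding error of an
addition is itself a floating-point number, and Knuth's TwoSum recovers
it exactly.<br>
<pre wrap="">#include &lt;stdio.h&gt;

/* Knuth's TwoSum: given doubles a and b, compute s = fl(a + b) and the
   exact rounding error e, so that a + b == s + e holds exactly.  This
   is a theorem about round-to-nearest binary arithmetic, not a hope. */
static void two_sum(double a, double b, double *s, double *e)
{
    *s = a + b;
    double bb = *s - a;
    *e = (a - (*s - bb)) + (b - bb);
}

int main(void)
{
    double a = 1.0e16, b = 3.14159, s, e;
    two_sum(a, b, &amp;s, &amp;e);
    /* s has absorbed a rounding error; e recovers it exactly, so with
       these values ((s - a) - b) + e is exactly zero. */
    printf("s = %.17g\ne = %.17g\n", s, e);
    printf("residual = %.17g\n", ((s - a) - b) + e);
    return 0;
}</pre>
This is the sort of exact, provable behavior that careful
floating-point algorithms, including the error bounds in Percival's
paper, are built on.<br>
<br>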
Over the years I've had several people working on multi-precision
arithmetic tell me that they just don't "trust" floating-point
arithmetic, even when there are theorems that describe its behavior
precisely.<br>
<br>
And in 2003 the great Arnold Schoenhage replied to an email of mine
with:<br>
<br>
<blockquote>
<pre wrap="">> By the way, you may be interested in a very nice paper by Colin
> Percival, which has the following review in Math Reviews:
.....
</pre>
<pre wrap="">> For modern processors where much effort has been placed to make
> floating-point arithmetic very fast (often faster than integer
> arithmetic), this paper might tip the speed balance to
> floating-point-based FFT algorithms.
The idea to use floating-point arithmetic because of its
actual speed due to extra silicon efforts by the processor
manufacturers is like recommending to a sportsman to `run'
faster by driving a car. --- Seriously speaking, it is
somewhat questionable to develop our algorithms under the
biases of existing hardware; rather the hardware should be
designed according to basic and clean algorithmic principles!
--- Imagine how fast our multi-precision routines would be
if some company would be willing to spend that much
silicon for a TP32 in hardware!</pre>
</blockquote>
<br>
So not using floating-point arithmetic was also a cultural issue for
him!<br>
<br>
TP32 is a virtual machine that Schoenhage designed for programming
multi-precision arithmetic algorithms, much as Knuth designed MIX to
implement his algorithms. (They're roughly at the same level, too;
it's like programming in assembler.) The difference is that TP32
has a relatively fast interpreter.<br>
<br>
Brad<br>
</body>
</html>