[plt-scheme] Number crunching -> Matthias, Eli
On Jul 3, 2009, at 4:38 AM, Noel Welsh <noelwelsh at gmail.com> wrote:
>
> Yeah, bind to a C library.
>
> Seriously. In my field -- machine learning, which is very numeric --
> most people find Matlab is plenty fast enough. PLT is faster than
> Matlab...
Some push back here: Matlab does a good job of integrating some very
fast vector operations within a productive, dynamic environment. I'm
surprised to hear you paint things so starkly coming from a machine
learning perspective. While my experience is more in the area of
computer vision and robotics, I have some overlap with machine
learning, and the ability to transparently call fast matrix (sparse or
otherwise) operations is tremendously valuable. I seldom use Matlab
myself because its lack of expressiveness in general feels so
restrictive, but I wouldn't claim that PLT Scheme is outright faster.
The fact is, if I can reasonably represent my problem with Matlab's
core data structures, then Matlab will be far faster than a native
Scheme implementation. This isn't a criticism of anything; it's just
a reflection of priorities. But if I decide that I'm going to use a
non-native Scheme solution to close the performance gap in some
specific areas, then a typical result is that some of my valued
flexibility is lost because the mechanisms for, say, iterating through
the data may have changed. I agree with you that a Scheme program
augmented with native libraries can offer great performance, but a
really solid integration of the two is not trivial.
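For what it's worth, the binding side of this is fairly painless in PLT Scheme. Here is a minimal sketch, assuming a CBLAS shared library is installed (the library name and path vary by platform, so treat "libcblas" as an assumption; the cblas_ddot signature itself is standard CBLAS):

```scheme
#lang scheme
;; A minimal sketch of calling a fast C routine (CBLAS ddot) from
;; PLT Scheme via the foreign interface. The library name "libcblas"
;; is an assumption; platforms differ.
(require scheme/foreign)
(unsafe!)

(define libcblas (ffi-lib "libcblas" '("3" #f)))

;; C prototype:
;;   double cblas_ddot(int n, const double *x, int incx,
;;                     const double *y, int incy);
(define cblas-ddot
  (get-ffi-obj "cblas_ddot" libcblas
               (_fun _int (_list i _double) _int
                          (_list i _double) _int
                     -> _double)))

;; Dot product of two small vectors, computed in C:
(cblas-ddot 3 '(1.0 2.0 3.0) 1 '(4.0 5.0 6.0) 1)  ; 1*4 + 2*5 + 3*6 = 32.0
```

The hard part, as I said, isn't writing this wrapper; it's that once your data lives in whatever layout the C library expects, your Scheme-side idioms for traversing it no longer apply.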
> The people who want more speed than PLT
> + select C libraries provides AND would actually consider writing
> numeric code in Scheme is 0 + epsilon.
I'm probably in that epsilon, so I take my own feelings on the subject
with a rather large grain of salt. :)
> For example, a runtime that could take advantage of
> multicore machines would benefit many more people than optimising
> floating point calculations. I love performance as much as the next
> guy -- my hard disk is littered with little compilers and so on -- but
> it really isn't that important in the grand scheme of things.
This is another area where I think many FP advocates tend to be a bit
too glib. Being able to claim near-linear performance scaling on
today's 2-8 core machines is great, but to do so at the expense of
10-100x slower performance compared to, say, C, is a really tough
sales pitch! Many functional languages are dealing with that today;
Clojure is a particularly interesting case because it has Java as a
ready point of comparison, and Java is not exactly incapable of
reasonably expressing concurrent programs. I think Haskell makes a
very strong case for itself by most definitely *not* ignoring
single-threaded speed. If a functional language is within striking
distance of C's performance, then I think it's fine to say that easier
parallelism negates that difference. But if you're giving up orders of
magnitude up front, then gains down the road have too much distance to
make up.
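To put rough numbers on that (illustrative figures, not benchmarks): perfect linear scaling on 8 cores buys at most an 8x speedup, so a language that starts out 100x slower than single-threaded C still finishes well behind:

```scheme
#lang scheme
;; Illustrative arithmetic only: hypothetical relative throughputs,
;; normalizing single-threaded C to 1.0.
(define c-single 1.0)
(define fp-single (/ 1.0 100))       ; assume 100x slower than C
(define fp-8-cores (* 8 fp-single))  ; assume perfect scaling on 8 cores
(/ c-single fp-8-cores)              ; => 12.5: still 12.5x behind C
```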
All that is to make the following (hopefully) non-controversial point:
it is possible for PLT Scheme's numeric code to be faster, and it
would be great if it were.
Anthony