[racket] query to sqlite3 with 2050+ columns slows down Racket 6.0 but not 6.0.1

From: Dmitry Pavlov (dpavlov at ipa.nw.ru)
Date: Tue Jul 1 14:15:06 EDT 2014

Hello,

TL;DR: A hard-to-catch performance bug was present in Racket 6.0
and appears to be fixed in 6.0.1.


No complaints, just wanted to share my experience in dealing
with a problem whose origin I still do not remotely understand.

I have a fairly big procedure that does a lot of calculations
using flvectors, unsafe ops and FFI.

I use Racket's db library with a custom-built sqlite3 binary
that is able to store and fetch up to 4000 columns
(SQLITE_MAX_VARIABLE_NUMBER = SQLITE_MAX_COLUMN = 4000).
Ordinary sqlite3 has those limits set to 2000.
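For concreteness, here is a minimal sketch of the triggering pattern using Racket's db library. The database file and table name are made up for illustration; the original report does not give them:

```racket
#lang racket
;; Hypothetical reproduction sketch. "wide.db" and "wide_table" are
;; illustrative names, not from the original report; the table is
;; assumed to have 2050+ columns and only a few rows.
(require db)

(define conn (sqlite3-connect #:database "wide.db"))

;; Fetch all columns, then immediately discard the result and
;; close the connection -- the slowdown reportedly persists anyway.
(void (query-rows conn "SELECT * FROM wide_table"))
(disconnect conn)

;; ... the heavy flvector/unsafe/FFI procedure, which does not touch
;; sqlite3 at all, would run ~5x slower here under Racket 6.0 ...
```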

The big procedure mentioned above does not use sqlite3.
However, if I fetch 2050 or more columns from the table
before the procedure (even if I discard the fetched columns
and close the connection before the procedure), the
procedure slows down ~5x. There is no single bottleneck
that causes the slowdown -- it seems to slow down
uniformly, as if Racket's VM suddenly stops producing
optimized code or something like that.

While the table has a great many columns, it has only
a few rows, so memory overuse is not the issue.

I ran my tests from the command line, so there are no
DrRacket side effects. Using raco make or raco exe did not help.

Fetching 2040 or fewer columns instead of 2050 returned
the performance to its normal level.

Using 32-bit Racket 6.0 on Windows instead of
64-bit Racket 6.0 on Linux also returned the performance
to its normal level.

I scratched my head over the issue for a day,
got nothing, and then tried installing Racket 6.0.1
on my 64-bit Linux workstation. The problem went away!
I reinstalled Racket 6.0 to double-check, and the problem
is still there with 6.0.

So some change in 6.0.1 magically fixed the problem,
whatever it is.


Best regards,

Dmitry

Posted on the users mailing list.