[plt-scheme] threads and mzscheme

From: Hans Oesterholt-Dijkema (hdnews at gawab.com)
Date: Mon May 23 14:44:42 EDT 2005

Eli Barzilay wrote:

>On May 23, Hans Oesterholt-Dijkema wrote:
>
>This depends on many things like how do you transfer these things to
>the work thread, how the work thread uses the db etc.

At

http://www.elemental-programming.org/epwiki/Scheme%20Persistent%20ROOS%20-%20Object%20Database

there's documentation about what I'm doing. Especially interesting is the
picture of the component design, which can be found by clicking on "OODB".
What I'm testing right now is the /client handler/, i.e. the whole system,
without client connections.

One construction rule is that all objects will be accessed through the
/object cache/. Only searches are performed directly on the
/backend connections/.

All changes are written to the /object cache/, and each change is also
written to the /FIFO/, which is a FIFO between threads (used like a
(command) message queue). In short, I'm running the following code:

    (define (ctest CH C)
      (if (= C 0)
          #t
          (begin
            ;; lock the shared object for this read-modify-write cycle
            (-> CH lock shared-oid)
            (let ((n (+ (oodb-unmarshall
                         (-> CH get shared-oid 'counters 'n))
                        1)))
              ;; write the incremented counter through the cache
              (-> CH put! shared-oid 'counters 'n (oodb-marshall n))
              ;; read it back; under the lock it must be unchanged
              (let ((n1 (oodb-unmarshall
                         (-> CH get shared-oid 'counters 'n))))
                (-> CH unlock shared-oid)
                (and (= n n1)
                     (ctest CH (- C 1))))))))

CH is a /client handler/ object. I'm starting this ctest function with
1 to 15 threads, which results in 1 to 15 /client handlers/, 15
/backend connections/, 1 shared /object cache/, 1 shared /object locks/,
and 1 shared /FIFO/. The /change handler/ runs in a separate thread; it
commits changes put in the /FIFO/ to the backend using its own
/backend connection/.
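
Such an inter-thread FIFO can be built from plain MzScheme threads and
semaphores. A minimal sketch (the real FIFO component does more, and
the names here are made up):

    (define (make-fifo)
      (let ((items '())                  ; queued messages, oldest first
            (mutex (make-semaphore 1))   ; guards the list
            (count (make-semaphore 0)))  ; counts queued messages
        (define (fifo-put! msg)
          (semaphore-wait mutex)
          (set! items (append items (list msg)))
          (semaphore-post mutex)
          (semaphore-post count))
        (define (fifo-get)
          (semaphore-wait count)         ; block until a message is queued
          (semaphore-wait mutex)
          (let ((msg (car items)))
            (set! items (cdr items))
            (semaphore-post mutex)
            msg))
        (values fifo-put! fifo-get)))

With (define-values (fifo-put! fifo-get) (make-fifo)) all threads share
one queue; fifo-put! returns quickly and fifo-get blocks until a message
arrives.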

The /change handler/ is a little bit smart. When it gets time to work,
it will first snapshot the FIFO by putting its own command message into
the FIFO. Then it will process the snapshot (a "unit of work"). From
this unit of work, it will only commit the last change of each
object/class/attribute combination to the backend.
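
In code, the snapshot trick looks roughly like this (a sketch; the
(oid class attr value) message layout and all names are assumptions):

    ;; marker message: everything queued before it belongs to this
    ;; unit of work
    (define snapshot-marker (gensym 'snapshot))

    (define (snapshot-unit-of-work fifo-put! fifo-get)
      (fifo-put! snapshot-marker)
      (let loop ((latest (make-hash-table 'equal)))
        (let ((msg (fifo-get)))
          (if (eq? msg snapshot-marker)
              latest                     ; last change per key survives
              (begin
                ;; a change is assumed to be (list oid class attr value)
                (hash-table-put! latest
                                 (list (car msg) (cadr msg) (caddr msg))
                                 (cadddr msg))
                (loop latest))))))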

As can be seen from the ctest function, in this test case any number of
changes in the FIFO will result in exactly 1 committed change per unit
of work. The idea is that the backend is slow and the cache system is
fast, so while waiting for the database to commit a change, the other
threads should be able to keep working.

This is what I see happening with one thread: /ctest/ writes about 150
changes before the /change handler/ starts handling them. This is
repeated until all 5000 counts have been done. With 2 /ctest/ threads,
I see about 200 changes per unit of work. With 3 threads, this number
starts dropping fast, until with 15 threads only about 2 or 3 changes
per unit of work are committed. The database now takes 75% CPU and
mzscheme doesn't get any time anymore. The performance drop is huge:
starting at 3000 changes/s and dropping to 31 changes/s for 15 threads.

Now, I put a *1 second delay* between /units of work/. What I see now is
that about 5000 changes per second can be committed to the FIFO. And
this scales linearly, i.e., with 15 threads it is still 5000 changes/s.
mzscheme gets 100% CPU. /I don't like the 1 second delay between units
of work, because with, say, 200 changes in a unit of work to commit to
the database it probably won't help./
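
Concretely, the delayed loop is essentially this (a sketch reusing
snapshot-unit-of-work from above; commit! is a made-up stand-in for the
real backend commit):

    (define (change-handler-loop fifo-put! fifo-get commit!)
      (let loop ()
        (let ((unit (snapshot-unit-of-work fifo-put! fifo-get)))
          ;; commit only the surviving (last) change per key
          (hash-table-for-each unit
                               (lambda (key value) (commit! key value)))
          (sleep 1)              ; the 1 second delay under discussion
          (loop))))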

So, the thread behaviour of mzscheme is not what I expected. It looks
like the database access serializes the whole of mzscheme: no thread
runs until the database access has been done. Just when I took so much
care to make sure that all threads have their own database connection
and that all changes are written to the database asynchronously!
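
A simple way to check that hypothesis is a heartbeat thread next to the
database call: if the dots stop printing while the call runs, the call
is blocking the whole process rather than just its own thread. A sketch
(backend-commit!, conn and change are made-up stand-ins for the real
call):

    (define (with-heartbeat thunk)
      ;; print a dot every 100 ms from a second thread; if the dots
      ;; stop while THUNK runs, the call inside THUNK is blocking all
      ;; of mzscheme, not only the thread it runs in
      (let ((beat (thread (lambda ()
                            (let loop ()
                              (display ".")
                              (flush-output)
                              (sleep 0.1)
                              (loop))))))
        (begin0 (thunk)
                (kill-thread beat))))

    ;; e.g.: (with-heartbeat (lambda () (backend-commit! conn change)))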

So there's my question: how can it be that the /database connection/
blocks all other threads?!

Thank you in advance for your answer(s),

Hans Oesterholt-Dijkema



Posted on the users mailing list.