[plt-scheme] Limiting Concurrent Connections to Web Server

From: Jay McCarthy (jay.mccarthy at gmail.com)
Date: Mon Jan 12 12:44:17 EST 2009

Thanks for the catch. It's in SVN now.
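
Presumably the fix amounts to swapping `wrap-evt' for `handle-evt' in
each branch of the `sync' in the dispatcher below: `wrap-evt' calls its
handler as part of producing the synchronization result, so the
recursive call to `loop' grows the continuation on every request,
whereas `handle-evt' calls its handler in tail position with respect to
`sync', so `loop' really loops. For the `in-ch' branch, for example:

  (handle-evt in-ch
              (lambda (req)
                (channel-put (in-req-reply-ch req) #t)
                (loop (add1 i)
                      (list* (in-req-partner req) partners))))

The same substitution applies to the `out-ch' and `thread-dead-evt'
branches.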

Jay

On Mon, Jan 12, 2009 at 10:30 AM, Matthew Flatt <mflatt at cs.utah.edu> wrote:
> Be sure to use `handle-evt', not `wrap-evt', to make `loop' a loop.
>
> At Mon, 12 Jan 2009 10:20:02 -0700, "Jay McCarthy" wrote:
>> Ya, you're right. That works great.
>>
>> Henk, try this:
>>
>> (define (make-limit-dispatcher num inner)
>>   (define-struct in-req (partner reply-ch))
>>   (define in-ch (make-channel))
>>   (define-struct out-req (partner))
>>   (define out-ch (make-channel))
>>   (define limit-manager
>>     (thread
>>      (lambda ()
>>        (let loop ([i 0]
>>                   [partners empty])
>>          (apply sync
>>                 (if (< i num)
>>                     (wrap-evt in-ch
>>                               (lambda (req)
>>                                 (channel-put (in-req-reply-ch req) #t)
>>                                 (loop (add1 i)
>>                                       (list* (in-req-partner req) partners))))
>>                     never-evt)
>>                 (wrap-evt out-ch
>>                           (lambda (req)
>>                             (loop (sub1 i)
>>                                   (remq (out-req-partner req) partners))))
>>                 (map (lambda (p)
>>                        (wrap-evt (thread-dead-evt p)
>>                                  (lambda _
>>                                    (loop (sub1 i) (remq p partners)))))
>>                      partners))))))
>>   (define (in)
>>     (define reply (make-channel))
>>     (channel-put in-ch (make-in-req (current-thread) reply))
>>     (channel-get reply))
>>   (define (out)
>>     (channel-put out-ch (make-out-req (current-thread))))
>>   (lambda (conn req)
>>     (dynamic-wind
>>      in
>>      (lambda ()
>>        (inner conn req))
>>      out)))
>>
>> On Mon, Jan 12, 2009 at 10:06 AM, Robby Findler
>> <robby at eecs.northwestern.edu> wrote:
>> > I think you want one thread that just manages the limits (and perhaps
>> > runs with the dispatcher's privs). Each connection would then send a
>> > message to that thread saying "can I go now?" and, when it gets a
>> > response, it goes. The manager thread would then track whether its
>> > communication partners died or terminated normally, and whenever it is
>> > below the limit and there are more waiting, it would reply to them,
>> > letting them go.
>> >
>> > You don't seem to need the full generality of kill-safety, since you
>> > have a custodian that you know never dies.
>> >
>> > Robby
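
Distilled, the design Robby describes is a channel rendezvous plus a
death watch on every admitted connection. A minimal sketch of the
manager's wait step (a hypothetical helper, not the code Jay posts
above):

  ;; Wait for either a "can I go now?" request or the death of an
  ;; already-admitted partner thread, whichever comes first.
  (define (manager-step request-ch partners on-request on-death)
    (apply sync
           (handle-evt request-ch on-request)
           (map (lambda (p)
                  (handle-evt (thread-dead-evt p)
                              (lambda (_) (on-death p))))
                partners)))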
>> >
>> > On Mon, Jan 12, 2009 at 10:57 AM, Jay McCarthy <jay.mccarthy at gmail.com>
>> >> wrote:
>> >> Perhaps, though I don't think I'm clever enough.
>> >>
>> >> Here's a hack that "works":
>> >>
>> >> (define (make-limit-dispatcher num inner)
>> >>  (let ([sem (make-semaphore num)])
>> >>    (lambda (conn req)
>> >>      (parameterize ([current-custodian (current-server-custodian)])
>> >>        (thread
>> >>         (lambda ()
>> >>           (call-with-semaphore
>> >>            sem
>> >>            (lambda ()
>> >>              (inner conn req)))))))))
>> >>
>> >> Basically, before going inside the limit, it changes the resource
>> >> policy so that this connection's resources are charged to the whole
>> >> server; that way, when the connection goes down, the thread doesn't
>> >> get killed. If you do this, you won't get deadlocks, but you will get
>> >> lots of errors, because the connection's ports will die while the
>> >> thread keeps running.
>> >>
>> >> The obvious kill-safe strategy[1] has the same problem.
>> >>
>> >> 1. Create a "limit manager" thread at the server level; it receives
>> >> requests from the dispatchers to do the work, but doesn't die. But if
>> >> the connection dies, I don't know how to communicate that back to the
>> >> limit manager. This would also serialize requests, so you'd need N of
>> >> these managers, where N is the limit. Mz threads are cheap, but not as
>> >> cheap as Erlang threads, so this might not be a great idea.
>> >>
>> >> Jay
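
The custodian swap is what makes the hack work: a thread is managed by
the custodian that is current when it is created, so creating the
worker under the server's custodian means the connection custodian's
shutdown no longer kills it. A stripped-down illustration (hypothetical
bindings, outside the web server):

  (define server-cust (make-custodian))
  (define conn-cust (make-custodian))
  (define worker
    (parameterize ([current-custodian server-cust])
      (thread (lambda () (sync never-evt)))))  ; stands in for (inner conn req)
  (custodian-shutdown-all conn-cust)           ; the connection goes down...
  (thread-dead? worker)                        ; => #f: the worker lives on

That is also why the ports still die out from under it: they are
managed by the connection's custodian, not the server's.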
>> >>
>> >> On Mon, Jan 12, 2009 at 9:52 AM, Robby Findler
>> >> <robby at eecs.northwestern.edu> wrote:
>> >>> Do you think it can be made kill-safe with the current mz?
>> >>>
>> >>> Robby
>> >>>
>> >>> On Mon, Jan 12, 2009 at 10:41 AM, Jay McCarthy <jay.mccarthy at gmail.com>
>> >>>> wrote:
>> >>>> I can reproduce it. Essentially the problem is that when a connection
>> >>>> dies, the server kills the threads associated with it. In this case,
>> >>>> the killed threads were supposed to post to the semaphore when they
>> >>>> were done, but they never finished. Basically, the limiting is not
>> >>>> kill-safe [1].
>> >>>>
>> >>>> Jay
>> >>>>
>> >>>> 1. http://www.cs.utah.edu/plt/kill-safe/
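
The broken pattern is presumably a plain semaphore around the handler,
along the lines of the sketch below (a reconstruction, not Henk's
actual code): if the connection custodian kills the thread while
`inner' is running, the semaphore is never posted, so the slot is lost
and the server eventually stops admitting requests.

  ;; A naive limiter (reconstruction): not kill-safe, because a killed
  ;; connection thread never posts the semaphore back.
  (define (make-naive-limit-dispatcher num inner)
    (define sem (make-semaphore num))
    (lambda (conn req)
      (call-with-semaphore
       sem
       (lambda ()
         (inner conn req)))))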
>> >>>>
>> >>>> On Sat, Jan 10, 2009 at 2:15 PM, Henk Boom <lunarc.lists at gmail.com> wrote:
>> >>>>> Has anyone else had any luck duplicating this?
>> >>>>>
>> >>>>>    Henk
>> >>>>>
>> >>>>
>> >>>
>> >>
>> >
>>
>



-- 
Jay McCarthy <jay at cs.byu.edu>
Assistant Professor / Brigham Young University
http://teammccarthy.org/jay

"The glory of God is Intelligence" - D&C 93


Posted on the users mailing list.