[racket] Efficiency of tight loops in Racket

From: Robby Findler (robby at eecs.northwestern.edu)
Date: Mon Jan 17 15:55:46 EST 2011

If you have not seen this yet, this is where you want to start:

http://docs.racket-lang.org/guide/performance.html#%28part._effective-futures%29

Robby
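
For concreteness, here is a minimal sketch along the lines of the example in that guide section; the `any-double?` workload is the illustrative one the guide uses, and the extra calls to `futures-enabled?` and `processor-count` (both from `racket/future`) just report whether the running build can execute futures in parallel and how many cores it sees:

    #lang racket
    (require racket/future)

    ;; Sanity checks: does this build run futures in parallel,
    ;; and how many processors does it see?
    (futures-enabled?)   ; #f means futures fall back to sequential execution
    (processor-count)

    ;; A tight loop a future can run without blocking: fixnum
    ;; arithmetic and list traversal only, no I/O.
    (define (any-double? l)
      (for/or ([i (in-list l)])
        (for/or ([i2 (in-list l)])
          (= i2 (* 2 i)))))

    (define l1 (for/list ([i (in-range 5000)]) (+ (* 2 i) 1)))
    (define l2 (for/list ([i (in-range 5000)]) (- (* 2 i) 1)))

    ;; Sequential version:
    (time (or (any-double? l1) (any-double? l2)))

    ;; Parallel version: start the second search in a future, do the
    ;; first search on the main thread, then touch the future.
    (define f (future (lambda () (any-double? l2))))
    (time (or (any-double? l1) (touch f)))

If the parallel version shows no speedup, the usual culprits are a build without parallel-futures support or a future body that hits a blocking operation; the linked section discusses what keeps future bodies from running in parallel.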

On Mon, Jan 17, 2011 at 2:40 PM, Jos Koot <jos.koot at telefonica.net> wrote:
> I tried some examples of my own, but I shall try the examples from the docs
> ASAP.
> More tomorrow; for now it's my bedtime.
> Jos
>
>
>
>
>> -----Original Message-----
>> From: robby.findler at gmail.com
>> [mailto:robby.findler at gmail.com] On Behalf Of Robby Findler
>> Sent: 17 January 2011 20:43
>> To: Jos Koot
>> Cc: Noel Welsh; users at racket-lang.org
>> Subject: Re: [racket] Efficiency of tight loops in Racket
>>
>> With futures you have to be careful; it is easy to write code
>> that doesn't end up actually being parallel. Did you try the
>> examples from the docs?
>>
>> Robby
>>
>> On Mon, Jan 17, 2011 at 1:27 PM, Jos Koot
>> <jos.koot at telefonica.net> wrote:
>> > I did try futures, but did not observe two processors being used
>> > simultaneously.
>> > Jos
>> >
>> >> -----Original Message-----
>> >> From: robby.findler at gmail.com
>> >> [mailto:robby.findler at gmail.com] On Behalf Of Robby Findler
>> >> Sent: 17 January 2011 20:22
>> >> To: Jos Koot
>> >> Cc: Noel Welsh; users at racket-lang.org
>> >> Subject: Re: [racket] Efficiency of tight loops in Racket
>> >>
>> >> Oh, yes. DrRacket does not try to use two processors for anything
>> >> (unless your program uses futures or places, of course).
>> >>
>> >> Robby
>> >>
>> >> On Mon, Jan 17, 2011 at 10:25 AM, Jos Koot
>> >> <jos.koot at telefonica.net> wrote:
>> >> > Thanks for your reply.
>> >> > What I am observing is that when running DrScheme without any other
>> >> > apps running, only one processor is used at a time, although control
>> >> > often switches between the two processors. I also observe that
>> >> > Windows 7 aborts DrScheme when more than 2 Gbyte of memory is being
>> >> > used. I have set the memory limit of DrScheme to infinite and for
>> >> > Windows to about 5 Gbyte. Under Windows XP virtual memory did function
>> >> > well, but that was with 1 Gbyte of memory and thrashing made it
>> >> > impossible to go up to 2 Gbyte. Now I have two cores and 2 Gbyte, but
>> >> > can't get my machine to thrash on page swapping.
>> >> > Jos
>> >> >
>> >> >> -----Original Message-----
>> >> >> From: robby.findler at gmail.com
>> >> >> [mailto:robby.findler at gmail.com] On Behalf Of Robby Findler
>> >> >> Sent: 17 January 2011 16:14
>> >> >> To: Noel Welsh
>> >> >> Cc: Jos Koot; users at racket-lang.org
>> >> >> Subject: Re: [racket] Efficiency of tight loops in Racket
>> >> >>
>> >> >> I think the real reason is actually much sadder: no one on the core
>> >> >> team regularly uses Windows. Well, until about a month ago, when I
>> >> >> started using Windows for my development tasks, so hopefully that'll
>> >> >> change.
>> >> >>
>> >> >> But I'm not sure what Jos is observing, and I was expecting a reply
>> >> >> from Kevin or Matthew on this -- places are still pretty
>> >> >> experimental.
>> >> >>
>> >> >> Robby
>> >> >>
>> >> >> On Mon, Jan 17, 2011 at 8:58 AM, Noel Welsh
>> >> >> <noelwelsh at gmail.com> wrote:
>> >> >> > I've seen lots of recent commits dealing with Windows 7 / 64-bit
>> >> >> > support, so I expect it is simply a matter of time. Windows is not
>> >> >> > as developer-friendly as Unix, so it is likely to receive new
>> >> >> > features last (as a guess).
>> >> >> >
>> >> >> > N.
>> >> >> >
>> >> >> > On Sat, Jan 15, 2011 at 5:22 PM, Jos Koot
>> >> >> > <jos.koot at telefonica.net> wrote:
>> >> >> >> Is there a specific reason why there is no parallel support for
>> >> >> >> places on a dual-core processor with Windows 7?
>> >> >> >> Thanks, Jos
>> >> >
>> >> >
>> >
>> >
>
>


Posted on the users mailing list.