[plt-scheme] PLT with very large datasets?
I was using Berkeley DB in a course I took this spring, and thought
I'd like to create an interface between DrScheme and Berkeley DB.
Only I didn't have a good use case for doing it. If I'm not mistaken,
this could be a good reason to create such an interface. Would you
find it useful?
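
For concreteness, here is roughly where such a binding could start,
using PLT's scheme/foreign FFI. This is only a sketch under some
assumptions: that a Berkeley DB shared library is installed and
findable as "libdb", and that binding db_create is just the first
step (most of the Berkeley DB API lives in function pointers inside
the DB struct, so a real interface would also need cstruct
definitions on top of this):

  (require scheme/foreign)
  (unsafe!) ; the raw FFI operations are gated behind this

  (define libdb (ffi-lib "libdb"))

  ;; C signature: int db_create(DB **dbp, DB_ENV *dbenv, u_int32_t flags);
  (define db-create
    (get-ffi-obj "db_create" libdb
      (_fun (dbp : (_ptr o _pointer)) _pointer _uint32
            -> (ret : _int)
            -> (if (zero? ret)
                   dbp
                   (error 'db-create "failed with code ~a" ret)))))

  ;; NULL environment, no flags:
  (define db (db-create #f 0))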
Geoffrey
On Jun 29, 2008, at 18:26, Yavuz Arkun wrote:
> On Mon, Jun 30, 2008 at 00:55, Chongkai Zhu <czhu at cs.utah.edu> wrote:
>> My 2 cents:
>>
>> 1. PLT's GC starts to work when you use half of the memory. So if
>> your actual data consumes 1GB, you need at least 2GB of memory.
>
> I have a 2GB machine, so it's theoretically OK.
>
>>
>> 2. You said "in theory, about 9 bytes per triplet", but Scheme is a
>> dynamically typed language, which means some memory is used for type
>> tags.
>
> True. I will check out Eli's suggestion about homogeneous vectors.
>
> Thanks :)
> --Yavuz
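
On the GC point quoted above: a quick way to see how close a running
instance is to that threshold is current-memory-use, which is in the
core language and reports the runtime's current allocation in bytes.
A minimal check from the REPL:

  ;; How many MB has the runtime allocated so far?
  (printf "allocated: ~a MB\n"
          (quotient (current-memory-use) (* 1024 1024)))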
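
And on the type-tag point: SRFI 4's homogeneous numeric vectors store
raw machine values, so a u32vector costs about 4 bytes per element
with no per-element tag, where a plain Scheme vector holds a tagged
value in every slot. A minimal sketch, assuming the srfi/4 library
that ships with PLT:

  (require srfi/4)

  ;; One million unboxed 32-bit integers, about 4MB of payload:
  (define ids (make-u32vector 1000000 0))
  (u32vector-set! ids 0 42)
  (u32vector-ref ids 0) ; => 42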