[racket] Greetings!
On Wed, Mar 18, 2015 at 1:59 PM, John Carmack <johnc at oculus.com> wrote:
> As Hacker News has brought to your attention, I am indeed enjoying my foray into Racket land. I have three separate efforts:
>
> I am doing the prototyping work for a real time multi-user virtual reality service backend in Racket. I am sort-of assuming this will be ported to something else to deploy at scale (and be maintained by others), but I'm finding it very productive in this phase, and I'm holding out some hope that it might stay in Racket.
>
FWIW, Hacker News itself runs on Racket (it's written in Arc, which is
implemented on top of Racket), and I presume it scales pretty well. :)
> I have a specification for a VR related file format that is headed towards JSON, but I am seriously considering changing it to s-expressions and embedding a trivial (not Racket) Scheme for scripting.
>
> I'm teaching my son with Racket. He has worked in a few different imperative languages prior.
I presume you know about the book: http://www.ccs.neu.edu/home/matthias/HtDP2e/
> I'm still a total beginner with Lisp family languages, but the fact that I was clearly more productive doing the server work (versus C++ or java) was noteworthy to me. I'm still feeling out the exact nature of the advantages -- REPL is great, and using s-expressions for transport makes the server trivial and isn't too bad on the C++ client side, but I'm still unsure about how dynamic typing fits into it. I do feel a bit like I am driving without a seatbelt when I run my code versus a statically typed language (I plan on trying typed Racket). Dr. Racket and the documentation are both great, and the overall design feels quite "sane".
>
> I would be interested in hearing any guidance for high performance servers written in Racket. GC delays in the tens of milliseconds will be problematic for at least part of this application, but I could split that part off into a separate server and still leave the other part in Racket if necessary. The main benefit is currently development productivity, so obnoxious micro-architectural optimizations aren't useful, but broad strategic guidelines would be appreciated.
>
I can report a bit on this based on some results from the Racket Web
server (which I maintain and largely wrote).
The usual worry about IO waiting and multiple threads doesn't apply so
much in Racket, because the synchronous threading operations are
implemented asynchronously on top of a giant select/kqueue/epoll/etc.
at the bottom. I've experimented with building a simple FFI to libuv
and "reimplementing" the threading system in Racket with
continuations. I find that libuv+continuations runs faster than the
stock Racket IO system, but it is a major pain because you don't get
real ports. (It was more of an experiment in how we could change
Racket in the future.) If you find yourself interacting with only a
very small number of port-touching functions, it may be worthwhile.
(The basic threading technique is described in this series of three
blog posts:
http://jeapostrophe.github.io/cat-categories.html#%28part._.Concurrency%29
) But if you want to do async IO, then I suggest looking through the
synchronizable events (doc: sync).
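As a small illustration of that style, here is roughly what waiting on
several events at once looks like; the listener, port number, and
timeout are made up for the example:

#lang racket/base
(require racket/tcp)

(define listener (tcp-listen 8081 64 #t))

;; Wait for whichever happens first: an incoming connection or a 5s alarm.
(define result
  (sync
   (handle-evt (tcp-accept-evt listener)
               ;; The sync result of tcp-accept-evt is (list in-port out-port).
               (λ (in+out) (cons 'connected in+out)))
   (handle-evt (alarm-evt (+ (current-inexact-milliseconds) 5000))
               (λ (_) 'timed-out))))
(displayln result)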
Another standard server technique, zero-copy IO, is hard to do
perfectly in Racket, but I do a bit of it in the Web server. First,
rather than representing responses as byte strings (char*), I
represent them as closures that actually do the writing. Since I
represent HTML/etc. as S-expressions, I never serialize them and then
pass the char* around; instead I write them directly to the port. This
is close to zero-copy and made a performance difference. Another thing
you may find useful is copy-port, which is a pretty efficient way to
stream from a file. In the future, I'd like to optimize it so that it
can be buffer-less in the file-to-network-port case (using sendfile).
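To make the closure idea concrete, here is a rough sketch of it
outside of the Web server's actual API; the page, the port, and the
send-file helper are hypothetical:

#lang racket/base
(require racket/port xml)

;; Instead of rendering the page to a big byte string and copying that
;; around, hand back a closure that writes the X-expression to the port.
(define (page->writer page)
  (λ (out) (write-xexpr page out)))

(define respond
  (page->writer '(html (body (h1 "hello") (p "no intermediate buffer")))))

;; copy-port streams a buffer at a time, without reading the whole
;; file into memory first.
(define (send-file path out)
  (call-with-input-file path
    (λ (in) (copy-port in out))))

;; Demo: "send" the page to stdout.
(respond (current-output-port))
(newline)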
If you're primarily worried about GC, then obviously you want to avoid
big long-term allocations, and you obviously know that going too far
in that direction kind of defeats the point of using a language like
Racket. :) But remember that the nursery system makes allocation
pretty cheap, especially when it obeys the generational hypothesis.
Running with PLTSTDERR="error debug@GC" will show you how long
collections are taking. You should expect to see lots of minor
collections at less than 10ms (normally around 2ms in my long-running
apps), and I find it not super painful to avoid major collections
almost altogether. You may find the custodian system useful for
tracking where memory comes from.
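For example, here is a sketch of custodian-based memory accounting;
the 64 MB cap and the throwaway worker are made up:

#lang racket/base
(define work-cust (make-custodian))

;; Shut the custodian down (killing its threads) if the memory charged
;; to it ever exceeds 64 MB.
(custodian-limit-memory work-cust (* 64 1024 1024))

(define worker
  (parameterize ([current-custodian work-cust])
    (thread (λ ()
              (define junk (make-vector 1000000 'x)) ; roughly 8 MB
              (sleep 0.5)
              junk))))

(sleep 0.1)
;; Ask how much memory is currently charged to that custodian.
(printf "worker memory: ~a bytes\n" (current-memory-use work-cust))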
On the S-expression front... if your consumer is Racket, then
racket/fasl is faster than normal S-expressions; you can think of it
as mmap-ing the internal structures, or as being like BSON.
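Something like this, where the message is just an example:

#lang racket/base
(require racket/fasl)

;; A made-up message of the kind you might push over the wire.
(define msg '(player 42 (1.0 3.0) "ready"))

;; Encode to a compact byte string instead of write/read text...
(define encoded (s-exp->fasl msg))
;; ...and decode it on the receiving Racket side.
(define decoded (fasl->s-exp encoded))

(equal? msg decoded)  ; => #t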
Finally, structures defined with #:prefab are like S-exprs (very
simple to parse) but automatically provide constructors and field
accessors with error checking:
#lang racket/base
(require racket/port
         rackunit)

;; Prefab structs print and read back as plain S-expressions (#s(...)),
;; so write/read is all you need to move them across a connection.
(struct pos (x y) #:prefab)
(struct vr-shark (pos mouth-size) #:prefab)

(define some-shark (vr-shark (pos 1.0 3.0) 999.0))

;; Writing a prefab struct produces its textual #s(...) form.
(define shark-bytes (with-output-to-bytes (λ () (write some-shark))))
(check-equal? shark-bytes
              #"#s(vr-shark #s(pos 1.0 3.0) 999.0)")

;; Reading it back yields an equal? struct with working accessors.
(define byte-shark
  (read (open-input-bytes shark-bytes)))
(check-equal? byte-shark
              some-shark)
(check-equal? (vr-shark-mouth-size byte-shark)
              999.0)
(check-equal? (vr-shark-pos byte-shark)
              (pos 1.0 3.0))
Jay
--
Jay McCarthy
http://jeapostrophe.github.io
"Wherefore, be not weary in well-doing,
for ye are laying the foundation of a great work.
And out of small things proceedeth that which is great."
- D&C 64:33