[racket-dev] new package system status

From: Greg Hendershott (greghendershott at gmail.com)
Date: Thu Nov 21 11:39:14 EST 2013

> What's the status of the package system?

I've used the package system pretty heavily for third-party stuff and
I think it's worked really well. IMHO it's less "heavy" both for
package developers and for package users.

(a) The lack of hosted docs bugged me a lot at first, too. In practice I've
found that GitHub usually has the necessary info for a package. For a
simple package, a well-written README.md and a glance at the source is
enough for me to feel good about using it. For more complicated
packages, people are either hosting the Scribble HTML using GitHub
Pages or their own web server, or using `scribble --markdown` to
generate the README.md.

(b) For me the biggest issue has been that the options have evolved
since 5.3.6. There's a subset of stuff that works on 5.3.5, 5.3.6, and
5.90. If you care about this, let me know and I'll write up my notes.
However, if you're comfortable telling folks "just use Racket 6", then
it will be simpler, plus you can use some of the conveniences (like
single-collection packages, #:version, and whatever else I'm
forgetting).

> * "https://pkg.racket-lang.org/" has roughness like requiring JavaScript to
> navigate, which is not only bad for browsers but bad for search engines.
> (Usually, for this kind of site, we'd want to start with a page structure,
> and then add in backward-compatible JS for slickness.)

The status quo is a result of wanting better uptime by making it as
static as possible, with things stored on Amazon S3. Originally it was
a fully live web server. Downtime meant nothing worked. Whereas now,
downtime means a dev can't add/update a package, but users (human and
automated builds) can still get packages. This is smart and good.

My only question is whether the core should be a web _site_. I think
the core should probably instead be a RESTful web _service_. That way,
it could be used by a variety of tools, as well as by various
front-end web sites, one of which would be pkg.r-l.org.

(A well-designed RESTful web service _can_ be navigated by humans in a
browser, even though it's not intended as the primary human UI; i.e.,
it's not a web _site_ in the normal sense. For example, if it returns
XML responses and your browser auto-linkifies URLs in XML, you can
click them to navigate the web service API.)

I think the interesting point is that you could make a (mostly) static
web service, in which GET requests have URLs that hit a static file
server like Amazon S3, and only the PUT/POST/PATCH requests go to some
live server.
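
As a sketch of that split, here's a tiny request router in Python. All
the URLs and the catalog layout are invented for illustration; the
point is just that only mutating requests need a live server.

```python
# Hypothetical sketch: route read requests to a static file host
# (standing in for S3) and only write requests to a live server.
# Both base URLs are made up.

STATIC_BASE = "https://s3.example.com/pkg-catalog"  # pre-generated files
LIVE_BASE = "https://live.example.com/api"          # dynamic server

def route(method: str, path: str) -> str:
    """Return the base URL a request should hit.

    GETs are served from static files, so package *reads* keep working
    even when the live server is down; only writes need it to be up.
    """
    if method.upper() == "GET":
        return STATIC_BASE + path
    return LIVE_BASE + path

print(route("GET", "/pkgs-all.json"))    # served statically
print(route("POST", "/pkg/my-package"))  # needs the live server
```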

What about "dynamic" GETs like search by name or tag?

1. One choice is: don't support search. The client has to GET the full
catalog, or GET a list of _all_ packages with tag X, and filter
further itself. Not entirely unreasonable when there are hundreds, not
thousands, of packages.
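
Client-side filtering could look something like this (the catalog
shape here is invented; a real client would GET it over HTTP first):

```python
# Sketch of option 1: the client fetches the full catalog (here a
# local stand-in dict) and does its own "search" by filtering.

catalog = {
    "web-server-doc": {"tags": ["web", "docs"]},
    "json-parsing":   {"tags": ["json"]},
    "web-sockets":    {"tags": ["web", "net"]},
}

def packages_with_tag(catalog: dict, tag: str) -> list:
    """Filter the whole catalog locally; no server-side search needed."""
    return sorted(name for name, info in catalog.items()
                  if tag in info["tags"])

print(packages_with_tag(catalog, "web"))  # → ['web-server-doc', 'web-sockets']
```

With hundreds of packages the full catalog is small enough that this
round trip plus local filter is cheap.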

2. Another choice is that the write requests precompute the full set
of answers and update the static GET files. This may seem crazy, but
it isn't necessarily, depending on the size of the response space.
It's a sort of over-eager evaluation with memoization. (Whereas a live
web server is as-needed evaluation without memoization, and a live web
server with e.g. Varnish in front is as-needed evaluation with
memoization.)
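
A sketch of option 2, with a dict standing in for the static files on
S3 (all names invented): each write re-derives every GET response up
front, so reads never touch a live server.

```python
# Sketch of option 2: over-eager evaluation with memoization.
# `static_store` stands in for files on a static host like S3.

catalog = {}       # package name -> info (the live server's state)
static_store = {}  # URL path -> precomputed GET response body

def regenerate_static_files():
    """Precompute every GET answer: the full list, and one file per tag."""
    static_store["/pkgs-all"] = sorted(catalog)
    all_tags = {t for info in catalog.values() for t in info["tags"]}
    for tag in all_tags:
        static_store["/by-tag/" + tag] = sorted(
            name for name, info in catalog.items() if tag in info["tags"])

def put_package(name, tags):
    """The only live operation: update state, then re-memoize all GETs."""
    catalog[name] = {"tags": tags}
    regenerate_static_files()

put_package("web-sockets", ["web", "net"])
put_package("json-parsing", ["json"])
print(static_store["/by-tag/web"])  # → ['web-sockets']
```

Writes get more expensive, but they're rare compared to reads, and the
response space (all packages, plus one listing per tag) stays small.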

(Sorry for the back-seat thinking out loud.)

Posted on the dev mailing list.