[racket] sound + universe report: success.

From: John Clements (clements at brinckerhoff.org)
Date: Thu Oct 20 14:49:35 EDT 2011

On Oct 20, 2011, at 6:47 AM, Viera Proulx wrote:

> This is why I think that playing MIDI sounds is the correct model for the world.
> On tick we add to the world the sounds that are to be played - they stop playing when the next tick starts. (Or, if we control the duration, we include the information about how many ticks the sound should be played for and turn off those sounds after the desired time has elapsed.)
> 
> I handle the key events by processing key-pressed and key-released events separately, and any sound to be played on a key event starts when the key is pressed and keeps playing until the release. (All other key events are only possible on key press, as before. The key release is not visible to the programmer.)
> 
> And I include appropriate hooks so the programmer can write tests for whether the specified sounds are currently active (being played).
> 
> Of course, (sorry) this is all Java --- but I think it is a model that preserves the design of the world and universe.
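
For concreteness, the quoted tick-based model might look roughly like this in big-bang terms; play-sound! below is a made-up stand-in for whatever sound-playing primitive ends up being available, not an existing teachpack API:

#lang racket
(require 2htdp/universe 2htdp/image)

;; stand-in for a real sound-playing primitive
(define (play-sound! name) (printf "playing ~a\n" name))

;; a World is (make-world Number (Listof Symbol)):
;; the ball's y position plus the sounds queued during this tick
(define-struct world (y queued))

;; on each tick, play whatever was queued, then clear the queue,
;; so a queued sound lasts at most one tick
(define (tick w)
  (for-each play-sound! (world-queued w))
  (let ([new-y (modulo (+ (world-y w) 5) 200)])
    ;; queue a "pop" when the ball wraps back to the top
    (make-world new-y (if (< new-y 5) '(pop) '()))))

(define (draw w)
  (place-image (circle 10 "solid" "red") 100 (world-y w)
               (empty-scene 200 200)))

(big-bang (make-world 0 '())
          (on-tick tick)
          (to-draw draw))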

I see several (short-term?) problems with this.

1) To the best of my knowledge, the existing MIDI interfaces don't support playing arbitrary sampled sounds, which I think is vital for the kinds of games that I imagine students creating with world and universe.
2) I'm not aware of a cross-platform MIDI sound-playing library.

To frame the discussion better, I think that we should probably divide the set of possible sounds into "momentary" and "ongoing" sounds.  If I associate a "pop" with a ball hitting the ground, I'm probably not interested in thinking about whether the sound from the previous frame is still going on. If I'm playing a soundtrack, then I definitely agree that I might want to alter the ongoing sound. 

I think that what I'm providing at this point is support for "momentary" noises, a.k.a. sound effects, and it looks to me like it works pretty well.
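
Roughly, the kind of use I have in mind looks like this; the sketch assumes the rsound library's play and its built-in ding sound, and the exact require line and hookup to world programs may differ:

#lang racket
(require 2htdp/universe 2htdp/image
         (only-in rsound play ding))

;; fire a "momentary" sound effect on the space key; no bookkeeping
;; about the sound is kept in the world state
(define (handle-key w key)
  (when (key=? key " ")
    (play ding))
  w)

(big-bang 0
          (on-key handle-key)
          (to-draw (lambda (w) (empty-scene 200 200))))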

With respect to "ongoing" sounds: I claim that the MIDI models also don't generally use the model you describe.  That is, a MIDI stream consists of a sequence of "events": start this key playing, stop this key playing, etc.  What you're describing is more of a "pull" architecture, one that provides a new buffer of data on each time step. The portaudio library also supports this model, but it's currently not working well under Windows (every other frame gets lost).  This could be a bug in my code (I'm hoping this is the case), or a fundamental problem with portaudio (this would be sad). In any case, I can definitely imagine supporting this. I think that the interface would probably be higher-level than what you describe, something like "keep playing from this sound" or "switch to this other sound".
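
As plain data, the event view looks something like this: a sequence of timed start/stop instructions rather than a fresh buffer of samples handed over on every time step (the names here are purely illustrative):

;; a MIDI-ish stream as a list of timed events (illustrative names only):
;; each entry says when to start or stop a given note
(define midi-ish-stream
  '((at 0   note-on  60)    ; start middle C
    (at 1/2 note-on  64)    ; start E
    (at 1   note-off 60)    ; stop middle C
    (at 3/2 note-off 64)))  ; stop E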

John

