[racket] sound + universe report: success.

From: Viera Proulx (vkp at ccs.neu.edu)
Date: Thu Oct 20 16:30:12 EDT 2011


On Oct 20, 2011, at 2:49 PM, John Clements wrote:

> I see several (short-term?) problems with this.
> 
> 1) To the best of my knowledge, the existing MIDI interfaces don't support playing arbitrary sampled sounds, which I think is vital for the kinds of games that I imagine students creating with world and universe.

There are plenty of interesting sounds in the MIDI library - various percussion sounds (helicopter, gunshot, ???).
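For concreteness, here is a small Python sketch (not the Java library discussed here) of the raw MIDI messages involved in triggering one of these sounds. In the General MIDI instrument list, Helicopter and Gunshot are sound-effect programs 126 and 128 (1-based), i.e. 125 and 127 on the wire; the channel, pitch, and velocity values below are just illustrative.

```python
# Per the MIDI spec: status bytes are 0xC0 (program change),
# 0x90 (note on), 0x80 (note off), each OR'd with the channel number.

def program_change(channel, program):
    """Select an instrument (0-based program number) on a channel."""
    return bytes([0xC0 | channel, program])

def note_on(channel, pitch, velocity):
    """Start a note sounding."""
    return bytes([0x90 | channel, pitch, velocity])

def note_off(channel, pitch):
    """Stop a sounding note."""
    return bytes([0x80 | channel, pitch, 0])

# Select Gunshot (GM program 128, wire value 127) on channel 0,
# then fire middle C (pitch 60) at full velocity.
msgs = [program_change(0, 127), note_on(0, 60, 127)]
```

A synthesizer that receives these two messages in order plays the gunshot effect; sending `note_off(0, 60)` later silences it.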

> 2) I'm not aware of a cross-platform MIDI sound-playing library.

Well, my library is in Java and uses the standard Java MIDI library - if the computer supports a MIDI synthesizer, it plays. It has been used on Windows, Unix, and Mac without any modifications.

> 
> To frame the discussion better, I think that we should probably divide the set of possible sounds into "momentary" and "ongoing" sounds.  If I associate a "pop" with a ball hitting the ground, I'm probably not interested in thinking about whether the sound from the previous frame is still going on. If I'm playing a soundtrack, then I definitely agree that I might want to alter the ongoing sound. 

The library keeps track of the current MIDI state in two 'buckets' - one for the timed sounds, one for those initiated by key presses. There it records what is currently playing, updates the durations on each tick, and stops each sound when it finishes playing. It allows the programmer to check that the contents of these 'buckets' correspond to the expected values. It issues the appropriate MIDI commands (sound on, sound off) for the selected collection of instruments, notes, and program (selection of instruments) as specified by the programmer.
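The per-tick bookkeeping described above can be sketched as follows - this is a Python illustration of the idea, with hypothetical names, not the library's actual Java API:

```python
class SoundBucket:
    """Tracks currently playing notes and their remaining durations,
    decrementing on each clock tick (illustrative sketch only)."""

    def __init__(self):
        # (channel, pitch) -> remaining duration in ticks
        self.playing = {}

    def note_on(self, channel, pitch, duration):
        """Record that a note started, to sound for `duration` ticks."""
        self.playing[(channel, pitch)] = duration

    def tick(self):
        """Advance one clock tick; return the notes whose time is up,
        i.e. those for which a MIDI note-off should now be issued."""
        expired = []
        for key in list(self.playing):
            self.playing[key] -= 1
            if self.playing[key] <= 0:
                expired.append(key)
                del self.playing[key]
        return expired

    def current(self):
        """Snapshot of what is playing, so tests can check the bucket
        against expected values, as the text describes."""
        return dict(self.playing)
```

For example, a note started with duration 2 survives the first `tick()` and comes back as expired on the second, at which point the note-off command would be sent.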
> 
> I think that what I'm providing at this point is support for "momentary" noises, a.k.a. sound effects, and it looks to me like it works pretty well.
> 
> With respect to "ongoing" sounds: I claim that actually the MIDI models also don't generally use the model you describe. That is, a midi stream consists of a sequence of "events": start this key playing, stop this key playing, etc.
-- see above

>  What you're describing is more of a "pull" architecture, that provides a new buffer of data on each time step. The portaudio library also supports this model, but it's currently not working well under Windows (every other frame gets lost).  This could be a bug in my code (I'm hoping this is the case), or a fundamental problem with portaudio (this would be sad). In any case, I can definitely imagine supporting this. I think that the interface would probably be higher-level than what you describe, something like "keep playing from this sound," or "switch to this other sound".

The tune bucket holds up to sixteen currently assigned MIDI instruments, and every instrument can play an arbitrary chord (and different notes within the chord can have different durations as well). One of the interesting exercises is to generate music - sequences of tunes that are combined according to various musical patterns.
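A Python sketch of that tune-bucket shape - 16 channels, each assigned one instrument and playing a chord whose notes can expire at different times. All names here are illustrative, not the library's API:

```python
class TuneBucket:
    """Up to sixteen MIDI channels, each with an assigned instrument
    (program) playing a chord with per-note durations (sketch only)."""

    CHANNELS = 16  # MIDI has exactly 16 channels

    def __init__(self):
        self.program = {}  # channel -> instrument (program) number
        self.chords = {}   # channel -> {pitch: remaining ticks}

    def assign(self, channel, program):
        """Assign an instrument to one of the 16 channels."""
        if not 0 <= channel < self.CHANNELS:
            raise ValueError("MIDI has only 16 channels")
        self.program[channel] = program
        self.chords.setdefault(channel, {})

    def play_chord(self, channel, notes):
        """Start a chord: `notes` is an iterable of (pitch, duration)
        pairs, so notes within the chord may have different durations."""
        self.chords[channel].update(dict(notes))

    def tick(self):
        """One clock tick: decrement every sounding note and report
        the (channel, pitch) pairs that should now be switched off."""
        offs = []
        for ch, chord in self.chords.items():
            for pitch in list(chord):
                chord[pitch] -= 1
                if chord[pitch] <= 0:
                    offs.append((ch, pitch))
                    del chord[pitch]
        return offs
```

A music-generating exercise would then be a function that feeds `play_chord` a sequence of such chords, one per pattern step, on each tick.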
>  

-- Viera


Posted on the users mailing list.