I’ve been toying with the idea of implementing something like Paul Hudak’s Euterpea in Elm, and would like comments thereon. The system is described in the book *The Haskell School of Music* by Hudak and Quick. One uses it to compose (and play) music directly in Haskell, so one has the full power of the language. Here is the `Music a` type (p. 30 of the book):
```haskell
data Music a =
      Prim (Primitive a)
    | Music a :+: Music a
    | Music a :=: Music a
    | Modify Control (Music a)
```
The code `Music a :+: Music a` is for sequential composition, e.g., if `p` and `q` are phrases, then `p :+: q` is the longer phrase obtained by laying the two phrases end-to-end.
The code `Music a :=: Music a` is for parallel composition, e.g., if `p` and `q` are voices, say the treble and bass in one of Bach’s two-part inventions, then `p :=: q` is the music with those two parts.
Here is a first draft of an Elm version (omitting `Control` for now):
```elm
type Music a
    = Prim (Primitive a)
    | Sequence (Music a) (Music a)
    | Stack (Music a) (Music a)
```
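As a sanity check that the draft type is workable, here is a minimal sketch of an interpreter over it. The `Primitive` type here is my own guess at an Elm rendering of Hudak’s `Note`/`Rest`, with durations as plain `Float` multiples of a whole note (the type is repeated so the sketch is self-contained):

```elm
-- Hypothetical Primitive, loosely following Hudak's Note/Rest.
type Primitive a
    = Note Float a -- duration, pitch payload
    | Rest Float

type Music a
    = Prim (Primitive a)
    | Sequence (Music a) (Music a)
    | Stack (Music a) (Music a)

-- Total duration: sequential parts add, stacked parts overlap,
-- so a Stack lasts as long as its longer voice.
duration : Music a -> Float
duration music =
    case music of
        Prim (Note d _) ->
            d

        Prim (Rest d) ->
            d

        Sequence m1 m2 ->
            duration m1 + duration m2

        Stack m1 m2 ->
            max (duration m1) (duration m2)
```

Other interpreters (transposition, rendering to events) would follow the same recursion pattern.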
Suppose one has pieces of music `p, q, r, …` which one wishes to compose sequentially. In Haskell, one writes `p :+: q :+: r :+: s`. With the above Elm implementation, one would write

```elm
Sequence (Sequence (Sequence p q) r) s
```
This is pretty awkward. Another way might be this:

```elm
type Music a
    = Prim (Primitive a)
    | Sequence (List (Music a))
    | Stack (List (Music a))
```
so that one could avoid such a telescoping pile-up of constructors. Any comments
on what the best way forward might be? Best to get the basic types right before
traveling too far.
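For what it’s worth, the list-based variant seems to interpret just as cleanly; here is a sketch of the same duration fold, again assuming my hypothetical `Primitive` with `Float` durations. An empty `Sequence` or `Stack` naturally denotes zero-length music:

```elm
-- Hypothetical Primitive, as before.
type Primitive a
    = Note Float a
    | Rest Float

-- List-based variant of the type.
type Music a
    = Prim (Primitive a)
    | Sequence (List (Music a))
    | Stack (List (Music a))

-- Durations of a Sequence sum; a Stack lasts as long as its
-- longest voice (zero if empty).
duration : Music a -> Float
duration music =
    case music of
        Prim (Note d _) ->
            d

        Prim (Rest d) ->
            d

        Sequence ms ->
            List.sum (List.map duration ms)

        Stack ms ->
            List.map duration ms
                |> List.maximum
                |> Maybe.withDefault 0
```

The pay-off at use sites is that `Sequence [ p, q, r, s ]` replaces the nested-constructor form.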
NOTE. I’ve implemented some of the above at DrumLanguage. The odd name for the repo is to an extent explained by the useless-but-fun Techno Drum Language App. It uses a crude phonetic analysis to transcribe text into music à la Hudak. That transcription is then rendered by sending a suitably encoded version of the “music” via ports to Tone.js. The idea is inspired by James Gleick’s account of African drum languages.