Music composition in Elm

I’ve been toying with the idea of implementing something like Paul Hudak’s Euterpea in Elm, and would welcome comments on the idea. The system is described in the book The Haskell School of Music by Hudak and Quick. One uses it to compose (and play) music directly in Haskell, so one has the full power of the language at hand. Here is the Music a type (p. 30 of the book):

data Music a =
    Prim (Primitive a)
    | Music a :+: Music a
    | Music a :=: Music a
    | Modify Control (Music a)

The code Music a :+: Music a is for sequential composition, e.g.,
if p and q are phrases, then p :+: q is the longer phrase
obtained by laying the two phrases end-to-end.

The code Music a :=: Music a is for parallel composition, e.g.,
if p and q are voices, say the treble and bass in one of
Bach’s two-part inventions, then p :=: q is the music with
those two parts.

Here is one draft of an Elm version (omitting Control for now):

type Music a
    = Prim (Primitive a)
    | Sequence (Music a) (Music a)
    | Stack (Music a) (Music a)

Suppose one has pieces of music p, q, r, … which
one wishes to compose sequentially.
In Haskell, one writes p :+: q :+: r :+: s. With the
above Elm implementation, one would write

Sequence (Sequence (Sequence p q) r) s

This is pretty awkward. Another way might be this:

type Music a
    = Prim (Primitive a)
    | Sequence (List (Music a))
    | Stack (List (Music a))

so that one could avoid such a telescoping pile-up of constructors. Any comments
on what the best way forward might be? Best to get the basic types right before
traveling too far.

NOTE. I’ve implemented some of the above at DrumLanguage. The odd name for the repo is to an extent explained by the useless but fun Techno Drum Language App. It uses a crude phonetic analysis to transcribe text into music à la Hudak. That transcription is then rendered by sending a suitably encoded version of the “music” via ports to Tone.js. The idea is inspired by James Gleick’s account of African drum languages.


Just a thought: in Elm we usually don’t compose values by calling constructors directly. So maybe a pipeline-based API would look nice?

prim : Primitive a -> Music a
thenPlay : Music a -> Music a -> Music a -- 2nd then 1st for pipeline
stack : Music a -> Music a -> Music a

(n.b. I want to call thenPlay simply then, but then is a reserved word in Elm. Some word that says “make this into a sequence” rather than “this is a sequence” would be appropriate.)

Then your sequence p :+: q :+: r :+: s would look like:

p
    |> thenPlay q
    |> thenPlay r
    |> thenPlay s
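
For concreteness, here is a minimal sketch of how these helpers could wrap the constructors from the two-argument draft type above (assuming the Sequence/Stack constructors from that draft; the implementations are my guesses, not code from the thread):

```elm
prim : Primitive a -> Music a
prim =
    Prim

-- Note the argument order: `p |> thenPlay q` plays p first, then q,
-- because the pipeline feeds p in as the *second* argument.
thenPlay : Music a -> Music a -> Music a
thenPlay second first =
    Sequence first second

stack : Music a -> Music a -> Music a
stack =
    Stack
```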

One point against is that it’s not intuitive to use this with stack. I think that’d be fine. You compose the treble and bass separately, then stack treble bass. You could also have List { treble : Music a, bass : Music a } and then:

parts
    |> List.map (\{treble, bass} -> stack treble bass)
    |> List.foldr thenPlay empty

Hi @brian, great suggestions re pipelines and a pipeline-based API. One comment about values: as I understand from my limited reading of Hudak’s book, one does work with them on occasion. The idea is to build up a composition from little snippets and motifs using various transformations and operators. Here is an example:

x1 = c 4 en :+: g 4 en :+: c 5 en :+: g 5 en
x2 = x1 :+: transpose 3 x1
x3 = x2 :+: x2 :+: invert x2 :+: retro x2
play $ forever x3

This is taken from Interesting music in four lines of code on Donya Quick’s website. She goes on in the cited post to show how this four-line composition can be made more interesting.

I’d be tempted to do something like

type Music a
    = Prim (Primitive a)
    | Sequence (List (Music a))
    | Stack (List (Music a))

sequence : List (Music a) -> Music a
sequence =
    Sequence

stack : List (Music a) -> Music a
stack =
    Stack

which would then allow things like

Music.sequence [ p, q, Music.stack [ r, s ] ]

If you wanted to keep (mostly) the same underlying data structure so you could more directly port code from the Haskell version, you could even do something like

type Music a
    = None
    | Prim (Primitive a)
    | Sequence (Music a) (Music a)
    | Stack (Music a) (Music a)

sequence : List (Music a) -> Music a
sequence items =
    case items of
        first :: rest ->
            Sequence first (sequence rest)

        [] ->
            None

and similar for stack. But I suspect just storing and working with Lists directly would be easier in the long run…
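
For completeness, the “similar for stack” version under the same type could be sketched as (my sketch, following the same recursion as sequence above):

```elm
-- Build a right-nested Stack chain, terminated by None.
stack : List (Music a) -> Music a
stack items =
    case items of
        first :: rest ->
            Stack first (stack rest)

        [] ->
            None
```

so that stack [ p, q, r ] unrolls to Stack p (Stack q (Stack r None)).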

Thanks Ian! I’m pondering all these good suggestions and experimenting a bit … waiting to get that feeling of comfort and clarity.

Ah – that is nice! E.g., doing this: Music.sequence [ p, q, Music.stack [ r, s ] ]

I am still debating how closely I should follow Hudak’s code.

I’m running into some small difficulties – the function as o d_ = note d_ ( As, o ) for constructing A# values of a given octave and duration has to be implemented as aS, because as is a reserved word in Elm. Similarly, the function e o d_ = note d_ ( E, o ) has to be implemented as ee (or some such), since e clashes with the constant e exposed by the default Basics import.

As an example, one constructs a half note D flat in octave 4 like this:

> df 4 hn
Prim (Note (R 1 2) (Df,4))
    : Music ( PitchClass, number )

The notation is copied from Hudak’s Haskell School of Music.

How about aSharp, eNatural and dFlat? And maybe eighthNote, quarterNote and halfNote while you’re at it…


Great to see you’re working on this. I’d love to be able to use an Elm music composition language.

Perhaps one aspect to consider as a design guide is how easy and expressive would it be to live-code as a performance. Haskell-based TidalCycles is pretty well-used in the live coding / algorave scene. It takes advantage of Haskell’s concision which is definitely a help when live coding.

It seems to me that what Elm might offer for live coding (where code is projected on screen for all to see while simultaneously listening to the music) is greater clarity of expression. There is a trade-off between brevity, for fast and dynamic code entry during performance, and clarity of the written code, but hitting the sweet spot between the two could make this an interesting tool for live coding performance.

This may not be a direction you’d want your project to go, but thought I’d mention it now just in case this helps in directing some language design choices.

I’ve just started to learn Euterpea because there was nothing like it for Elm. So it’s nice to finally have a project for Elm.

I think, as @brian said, pipelines should be the way to go. But I would also not want to reinvent the wheel.

I suggest instead of

let
  x1 = c 4 en :+: g 4 en :+: c 5 en :+: g 5 en
  x2 = x1 :+: transpose 3 x1
  x3 = x2 :+: x2 :+: invert x2 :+: retro x2
in
forever (instrument AcousticBass x3)
:=: forever (instrument AcousticBass (tempo (1/3) x3))

to have something like

let
  x1 = [c 4 en, g 4 en, c 5 en, g 5 en]
  x2 = List.concat [x1, x1 |> List.map (Note.transpose 3)]
  x3 = List.concat [x2, x2, x2 |> Note.invert, x2 |> Note.retro]
in
[ (x3 |> Music.instrument AcousticBass)
, (x3 |> Note.tempo (1/3) |> Music.instrument Pad3Polysynth)
]
|> List.map forever
|> Music.patch

I haven’t really thought of live coding, but will definitely keep that in mind as the project moves forward. At the moment I am trying to establish some basic infrastructure, e.g. a way to render my current Elm implementation of Euterpea’s Music Pitch structures to sound. I’m working with Tone.js right now. Do you have suggestions in this regard? I should probably have MIDI out, and maybe MIDI in.

@Lucas_Payr thank you for your suggestions. Yes, I would like to stay as close to Hudak’s Euterpea as possible, but with pipelines as both you and @brian suggest. As mentioned in my reply to @jwoLondon above, my biggest pain point right now is with sound generation: your |> Music.patch. (Very nice and flexible syntax.) I’m using Tone.js right now and have made a little (but not nearly enough) progress. I can envision various back ends, e.g., Music.patchToTone, Music.patchToMIDI. Do you have any suggestions or expertise in this regard?

This project should eventually, maybe even soon, become a collaborative one. With, for example, a contributor who could run with one or more of the Music.patch*s.

Hi @ianmackenzie – I am struggling with the balance between the two E’s: explicitness versus ergonomics. My current thinking is to lean towards ergonomics in situations where human composers may need to do a lot of typing, as in the line c 3 qn, ... below – imagine you, the composer, writing a fairly long line of notes.

At present, for example, I have typed

    c 3 qn, e 3 qn, g 3 qn, d 4 hn, c 4 wn

which is transformed this way

  > parseSequence "c 3 qn, e 3 qn, g 3 qn, d 4 hn, c 4 wn"
    Ok (Sequence [Prim (Note (R 1 4) (C,3)),Prim (Note (R 1 4) (E,3)),Prim (Note (R 1 4) (G,3)),Prim (Note (R 1 2) (D,4)),Prim (Note (R 1 1) (C,4))])

before being sent to Tone.js as a sequence of events:

    [{"time":"0","note":"D2","dur":"0.75"},{"time":"0.75","note":"F2","dur":"0.75"},{"time":"1.5","note":"A2","dur":"0.75"},{"time":"2.25","note":"D3","dur":"0.375"},{"time":"2.625","note":"F3","dur":"0.375"},{"time":"3","note":"A3","dur":"0.375"},{"time":"3.375","note":"D5","dur":"0.1875"},{"time":"3.5625","note":"F5","dur":"0.1875"},{"time":"3.75","note":"A5","dur":"0.1875"},{"time":"3.9375","note":"D6","dur":"0.09375"},{"time":"4.03125","note":"F6","dur":"0.09375"},{"time":"4.125","note":"A6","dur":"0.09375"},{"time":"4.21875","note":"D3","dur":"0.75"}]

Tone.js then plays the above, rendering it into sound.
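
For anyone curious about that flattening step, it can be sketched as a walk over the Music tree that threads a start time. This is my own sketch against the two-argument draft type from earlier in the thread, not the actual DrumLanguage code; Event, duration and toEvent are hypothetical helpers:

```elm
type alias Event =
    { time : Float, note : String, dur : Float }

-- Walk the tree, threading a start time.
-- `Sequence` advances the clock past its first part;
-- `Stack` starts both parts at the same time.
render : Float -> Music Pitch -> ( Float, List Event )
render start music =
    case music of
        Prim p ->
            ( start + duration p, [ toEvent start p ] )

        Sequence a b ->
            let
                ( afterA, eventsA ) =
                    render start a

                ( afterB, eventsB ) =
                    render afterA b
            in
            ( afterB, eventsA ++ eventsB )

        Stack a b ->
            let
                ( endA, eventsA ) =
                    render start a

                ( endB, eventsB ) =
                    render start b
            in
            ( max endA endB, eventsA ++ eventsB )
```

The returned list can then be JSON-encoded and shipped over a port, as in the event list above.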

Personally, I thought Note.batch would be for patching notes (aka MIDI). I would also expect a Music.batch for batching sound waves.

I would love to help, but for time’s sake I can’t help a lot. If you have a GitHub repository, then I’m glad to help with smaller contributions.

Also, I personally would definitely expect MIDI in and live coding – I was thinking of using elm-in-elm or drathier/elm-interpreter for that, though I’m not sure what state they’re in.

Edit:
I’ve just checked out your DrumLanguage repository, and started watching it :wink:


@ianmackenzie do you see any downside to storing / working with lists? I’m not familiar enough with Euterpea yet to understand the up/downsides.

If you have comments from time to time, that would be great. Perhaps I could give you a heads up for an occasional code review – it doesn’t have to be thorough; any thoughts would be useful.


The only real downside of using lists that I can think of is that you have to handle the empty list case everywhere - but in this case that seems like it should be quite straightforward.

Fair enough on explicitness vs ergonomics - and if you want to support live coding then that certainly tilts the balance towards succinct code that’s fast to type. (On the other hand, it would be easy to prep for a live coding session by making a bunch of one- or two-character helper functions, and I think the more explicit names help with learnability/approachability.)


Thanks so much for fixing the playback issues!


Hi,

I’ve been interested in Euterpea for quite a while.

I recently ported a lot from it to Elm and stayed as close to the source as possible. I think it can be made more Elm idiomatic but the lib is so well designed that I didn’t want to mess around with it.

So the sequencing is solved already in the Haskell code. There is a function line that does the trick. Here is the Elm version:

line : List (Music a) -> Music a
line =
    List.foldr Seq empty
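
For example, assuming note helpers like c 4 qn as used earlier in the thread, folding from the right nests the sequence and terminates it with empty:

```elm
melody : Music Pitch
melody =
    line [ c 4 qn, g 4 qn, c 5 qn ]

-- melody == Seq (c 4 qn) (Seq (g 4 qn) (Seq (c 5 qn) empty))
```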

Here is the repository of the Elm version: https://github.com/battermann/Mousikea

I just recently started this, so it is still very much a WIP and I don’t know yet where this should go. But maybe it would be nice to create a library out of this at some point, and maybe an accompanying npm package?

My first attempt was to do playback with webaudiofonts, you can see a live demo here: https://elm-euterpea.surge.sh/

I’d love to collaborate on this if anyone is interested!

Thanks!
Leif
