Music composition in Elm

Hi Leif,

I looked at your code. Very, very nice!! (Listened to the demo as well.) I think in most respects you are farther along with an Elm version of Euterpea than I am. I’ve been working on this for a short while – it grew out of my wacky drum language project – first commit on June 27.

About a library – this is most definitely what should happen. Let’s talk some more. There is plenty to do to make a really good Elm Euterpea, and collaboration might be a good way to accomplish that. I’m interested. I think that @Lucas_Payr would be also. He’s helped me several times already.

Oh – one more point. Despite my experiments, I agree that it is desirable to stay as close to Hudak’s design as possible while writing idiomatic Elm. He has put a lot of thought into this.
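For reference, here is a rough sketch – my guess at an Elm translation, not a settled API – of the core Music type from the book. Haskell’s infix constructors (:+:) and (:=:) have no Elm equivalent, so they become named constructors here, and Dur is a Float because core Elm has no Rational type:

type Music a
    = Prim (Primitive a)
    | Seq (Music a) (Music a) -- sequential composition, Haskell's (:+:)
    | Par (Music a) (Music a) -- parallel composition, Haskell's (:=:)
    | Modify Control (Music a)

type Primitive a
    = Note Dur a
    | Rest Dur

type alias Dur =
    Float

type Control
    = Tempo Float
    | Transpose Int
    | Instrument String -- Euterpea uses an InstrumentName enum here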

PS. Here is the latest version of my test app: https://jxxcarlson.github.io/app/euterpia-test.html – not very impressive. It really only plays one voice at the moment. I’m using Tone.js as the backend.

Yep

Yes, that really should happen. Personally, though, I’d rather call it elm-music than elm-euterpea.

We currently use tone.js. It seems like webaudiofont is sample/wavetable-based, while tone.js simulates a synthesizer. I personally like having a synthesizer, but on the other hand webaudiofont has MIDI in and MIDI out. Maybe we want to join both libraries? tone.js has a MIDI format. It shouldn’t be that difficult to combine both…:thinking:

Sure, I don’t feel attached to that name at all.

I worked with tone.js a bit and I like it so far. (I think it is pretty much state of the art for web audio.)
From what I have seen, Euterpea has extensive MIDI support. It would be really cool to be able to make use of this, like instrumentation of voices, etc. webaudiofont was just a first attempt. I am still experimenting, and I’m not really an expert in web audio and MIDI.

Re names: yes, elm-music sounds good to me. Perhaps the best is to use a neutral name but to credit Euterpea. I also liked the Mousikea name.

Re output: having different backends, or sound renderers, or whatever they should be called, is an attractive feature that should make the library useful for a wider set of people and interests – Tone.js, MIDI, maybe more. In the latter half of the Hudak-Quick book, they talk about “Sounds and Signals”; Tone.js or something of that nature could be useful for this.
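One way to keep renderers swappable – purely a hypothetical sketch, with names I’m inventing here – is to funnel everything through one flat, timed-event representation, roughly what the book calls a Performance, and let each backend consume that:

type alias Event =
    { time : Float -- onset in seconds
    , pitch : Int -- MIDI note number
    , duration : Float -- length in seconds
    , velocity : Float -- loudness, 0..1
    }

type alias Performance =
    List Event

-- a backend is then just a way of turning a Performance into effects,
-- whether that means ports to Tone.js or writing out a MIDI file
type alias Backend msg =
    Performance -> Cmd msg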

I would go further. Make notes more like
type NoteName = C | D | E | F | G | A | B and the accidental type Accidental = Sharp | Flat | None ... I wrote more about representing music stuff on my blog at https://pianomanfrazier.com/post/music-theory-in-elm/#designing-with-elm-types
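Spelled out a bit more – the toSemitones helper is hypothetical, not from the blog post – the separate Accidental type keeps spelling and pitch distinct:

type NoteName
    = C | D | E | F | G | A | B

type Accidental
    = Sharp | Flat | None

type alias Pitch =
    { name : NoteName
    , accidental : Accidental
    , octave : Int
    }

-- collapse a spelled pitch to a MIDI-style semitone number (C4 = 60)
toSemitones : Pitch -> Int
toSemitones { name, accidental, octave } =
    let
        base =
            case name of
                C -> 0
                D -> 2
                E -> 4
                F -> 5
                G -> 7
                A -> 9
                B -> 11

        offset =
            case accidental of
                Sharp -> 1
                Flat -> -1
                None -> 0
    in
    12 * (octave + 1) + base + offset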

The problem with as, fs, ... is: what about double sharps and flats? What about naturals? Now you have an explosion of constructors, 7 x 5 instead of 7 x 2.

What if I wanted to render your generated music in order to print it? You could possibly generate LilyPond output from this, so you would want a musical representation sophisticated enough to describe readable music.
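As a hypothetical sketch of that direction, reusing the NoteName type from the earlier sketch: once Accidental carries double alterations (the 5 in 7 x 5), rendering to LilyPond’s Dutch note names is a small function. Octave marks and durations are left out here:

type Accidental
    = DoubleFlat | Flat | Natural | Sharp | DoubleSharp

-- e.g. toLilyPond A Flat == "aes", toLilyPond B DoubleSharp == "bisis"
toLilyPond : NoteName -> Accidental -> String
toLilyPond name accidental =
    let
        letter =
            case name of
                C -> "c"
                D -> "d"
                E -> "e"
                F -> "f"
                G -> "g"
                A -> "a"
                B -> "b"

        suffix =
            case accidental of
                DoubleFlat -> "eses"
                Flat -> "es"
                Natural -> ""
                Sharp -> "is"
                DoubleSharp -> "isis"
    in
    letter ++ suffix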

Btw, I’m speaking at elm-conf this year about doing music theory stuff with Elm.

Hey @jxxcarlson,

Thanks for sharing those data types. It looks really interesting to model music that way in code.

I happen to be working on my own music project in Elm and Haskell. The UI is in Elm (here is a picture of what it looks like), and the backend that generates all my audio is in Haskell (like this audio). Maybe our interests overlap a little bit? I’m not sure.

In my project, the front end really just provides a spreadsheet, where the Y axis represents time, the X axis represents different voices, and the strings in the cells are notes containing pitch, volume, and duration information. The spreadsheet is sent to the backend as a long list of notes paired with times. The backend parses the notes and turns them into audio.
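In case it helps the comparison, the shape of what gets sent is roughly this (a hypothetical paraphrase of my setup, not the actual types):

type alias Cell =
    { row : Int -- Y axis: position in time
    , voice : Int -- X axis: which voice the column belongs to
    , note : String -- raw note text: pitch, volume, and duration
    }

type alias Score =
    List Cell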

It’s a lot less abstract than your music types – just a lot of lists of notes and voices. Here’s a data type for all the different kinds of voices I have so far:

data Voice
    = Sin (Osc.Model Sin.Model)
    | Saw (Osc.Model Saw.Model)
    | Harmonics (Osc.Model Harmonics.Model)
    | DullSaw (Osc.Model DullSaw.Model)
    | Percussion Percussion.Model
    | Test Test.Model

Each voice type has its own note type, and its own function for converting a list of notes into audio.

I guess one thing that seems interesting, seeing you all talk about music code compared to my own project, is the data types. My project isn’t trying to do much more than compile human-written notes into sound, so I guess I can get by just fine with lots of List Note types. But maybe what you all are doing is more like mutating the shape of musical structures automatically and algorithmically, so you need those structures in your code to begin with.
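Something like this, maybe – a hypothetical sketch using the Music type from earlier in the thread – where even a simple transposition has to walk the whole tree:

transpose : Int -> Music Int -> Music Int
transpose n music =
    case music of
        Prim (Note d p) ->
            Prim (Note d (p + n))

        Prim (Rest d) ->
            Prim (Rest d)

        Seq a b ->
            Seq (transpose n a) (transpose n b)

        Par a b ->
            Par (transpose n a) (transpose n b)

        Modify c m ->
            Modify c (transpose n m)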

I don’t know. Does that sound about right?

Hi,

I like that approach. It is about the compiler helping you write correct code and making illegal states unrepresentable.

I found that there are two concerns that are related but also quite different: music notation and music performance (as in playback).

The fundamental difference is that in notation you have to handle the problem of enharmonic spelling well, whereas in performance you only care about pitch, which makes things a lot easier.
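For example, with the hypothetical toSemitones sketch from above, G sharp and A flat are different values as notation but collapse to the same pitch in performance:

-- two spellings, one pitch
gSharp4 : Int
gSharp4 =
    toSemitones { name = G, accidental = Sharp, octave = 4 }

aFlat4 : Int
aFlat4 =
    toSemitones { name = A, accidental = Flat, octave = 4 }

-- both are 68, so gSharp4 == aFlat4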

Euterpea is primarily focused on music generation, modification, interpretation, and performance, as far as I can tell.
So, for example, it doesn’t know the concept of intervals with names and qualities at all (like a minor third, where “minor” is the quality and “third” the name). In Euterpea, intervals are represented as a number of semitones.
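A hypothetical sketch of what such a named-interval representation could look like (again, nothing like this exists in Euterpea):

type Quality
    = Perfect | Major | Minor | Augmented | Diminished

type IntervalName
    = Unison | Second | Third | Fourth | Fifth | Sixth | Seventh | Octave

type alias Interval =
    { quality : Quality, name : IntervalName }

-- Euterpea would write a minor third as the bare number 3 (semitones);
-- the named form keeps it distinct from the enharmonically equal
-- augmented second, which also spans 3 semitones
minorThird : Interval
minorThird =
    { quality = Minor, name = Third }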

That is a great distinction. It would be hard to type all that out in real time for a performance.

It would be cool, however, to generate harmony and more complex music programmatically. I’m imagining something like generating a lead-sheet harmony and generating random bebop lines over the top. If you had a concept of tonality and harmony, you could generate patterns that fit the chords.
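As a hypothetical sketch of that idea: if chords are values, “patterns that fit the chords” starts as a function from a chord to the pitch classes a generated line may land on:

type ChordQuality
    = Maj7 | Min7 | Dom7

type alias Chord =
    { root : Int -- pitch class 0..11, e.g. 0 = C
    , quality : ChordQuality
    }

-- pitch classes a generated bebop line could target over this chord
chordTones : Chord -> List Int
chordTones { root, quality } =
    let
        intervals =
            case quality of
                Maj7 -> [ 0, 4, 7, 11 ]
                Min7 -> [ 0, 3, 7, 10 ]
                Dom7 -> [ 0, 4, 7, 10 ]
    in
    List.map (\i -> modBy 12 (root + i)) intervals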

You could adopt a notation similar to LilyPond, for example aes'4 bisis,2. I know it strays from the original API, but that way of representing notes is very compact.

Hey @Chadtech, this looks very interesting! I’m currently in the middle of a family move from the US to France. I want to study your code in detail as soon as I get a chance. I am very much interested in your Haskell interop.

Lots of interest in music, it seems. This is good!

@Chadtech, yes, that sounds about right. I’m primarily interested in algorithmic composition (as well as the conventional kind). Of course, to enjoy that kind of composition, you need some way of rendering it into audio. My first crude attempt: https://jxxcarlson.github.io/app/euterpia-test.html – at the moment voice 2 overwrites voice 1, so it isn’t really doing two voices.

There is nothing original in what I’ve done so far. The data types are taken from the book The Haskell School of Music, by Paul Hudak and Donya Quick.

I think that one could have various input languages that work with the underlying data structure.

Yeah, I think so too. Our focus should be a package and a .js file that are intuitive and easy to use. From there, everyone can build on top.

Personally, I haven’t got enough experience with ports to know how we can include the JS side with the package. I would hope it’s possible to have a JS file in the same folder as the compiled HTML file and have everything just work.
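On the Elm side, I imagine it could be as small as this hypothetical sketch – one outgoing port carrying encoded note events, which the bundled JS file subscribes to (via app.ports.play.subscribe) and hands to Tone.js or whichever backend is active:

port module AudioPort exposing (play)

import Json.Encode as Encode

-- the JS side schedules whatever events arrive here with the active backend
port play : Encode.Value -> Cmd msg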
