Thanks for sharing those data types. It looks really interesting to model music that way in code.
I happen to be working on my own music project in Elm and Haskell. The UI is in Elm (here is a picture of what it looks like), and the backend that generates all my audio is in Haskell (like this audio). Maybe our interests overlap a little? I’m not sure.
In my project, the front end is really just a spreadsheet, where the Y axis represents time, the X axis represents different voices, and the strings in the cells are notes, containing pitch, volume, and duration information. The spreadsheet is sent to the backend as a long list of notes paired with times. The backend parses the notes and turns them into audio.
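To make that concrete, here is a minimal Haskell sketch of what parsing one spreadsheet cell into a note could look like. This is a guess, not your actual code: the `Note` type, the `;` cell format like `"c4;0.8;1.5"`, and the function names are all made up for illustration.

```haskell
import Text.Read (readMaybe)

-- Hypothetical note record: pitch, volume, duration.
data Note = Note
  { pitch    :: String
  , volume   :: Double
  , duration :: Double
  } deriving (Show, Eq)

-- Split a string on a separator character (avoids a non-base dependency).
splitOn' :: Char -> String -> [String]
splitOn' c s = case break (== c) s of
  (a, [])       -> [a]
  (a, _ : rest) -> a : splitOn' c rest

-- Parse one cell, assuming a made-up "pitch;volume;duration" format.
-- Malformed cells come back as Nothing instead of crashing.
parseNote :: String -> Maybe Note
parseNote cell = case splitOn' ';' cell of
  [p, v, d] -> Note p <$> readMaybe v <*> readMaybe d
  _         -> Nothing
```

The `Maybe` keeps a typo in one cell from taking down the whole render, which seems useful when the input is a human-edited spreadsheet.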
It’s a lot less abstract than your music types. Just a lot of lists of notes and voices. Here’s a data type for all the different kinds of voices I have so far:
= Sin (Osc.Model Sin.Model)
| Saw (Osc.Model Saw.Model)
| Harmonics (Osc.Model Harmonics.Model)
| DullSaw (Osc.Model DullSaw.Model)
| Percussion Percussion.Model
| Test Test.Model
Each voice type has its own note type, and its own function for converting a list of notes into audio.
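That per-voice pattern might look roughly like this on the Haskell side. Again, this is a hypothetical sketch: `Audio` as a bare list of samples, the `SinNote` fields, and `sinToAudio` are all my inventions, standing in for whatever the real modules do.

```haskell
-- Hypothetical audio representation: raw samples at a fixed rate.
type Audio = [Double]

sampleRate :: Double
sampleRate = 44100

-- A note type specific to the Sin voice (frequency in Hz, duration in seconds).
data SinNote = SinNote { freq :: Double, dur :: Double }

-- The Sin voice's own notes-to-audio function: render each note
-- as a pure sine wave and concatenate the results.
sinToAudio :: [SinNote] -> Audio
sinToAudio = concatMap renderOne
  where
    renderOne (SinNote f d) =
      [ sin (2 * pi * f * t / sampleRate)
      | t <- [0 .. d * sampleRate - 1] ]
```

Each other voice (Saw, Percussion, …) would get its own note type and its own `…ToAudio` function in the same shape, which matches the "own note type, own conversion function" setup you describe.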
I guess one thing that seems interesting, seeing you all talk about music code compared to my own project, is the data types. My project isn’t trying to do much more than compile human-written notes into sound, so I can get by just fine with lots of List Note types. But maybe what you all are doing is more like mutating the shape of musical structures in code, automatically and algorithmically, so you need those structures in your code to begin with.
I don’t know. Does that sound about right?