Elm-music-theory: A toolkit for musical ideas

I was fifteen and on a trip with my family in Minocqua, Wisconsin when I heard “Slumber Song” by Glenn Miller on a local radio station one night.

I had never heard music like it before. It was intelligent and organized, with great warmth and sensitivity. It sparked my interest in the technique I now know as sectional harmony, in which most of the melodies in a piece are harmonized into multiple parts and played together by groups of instruments. In this case, these are the trumpet, trombone, saxophone, and vocal sections.

The techniques of jazz arranging

Many years later, wanting to apply these techniques to my own music, I started taking private lessons from a jazz arranger. As I began to understand the various concepts that made up this obscure and specialized field, like voicing types, approach techniques, and available tensions, I saw how much attention and mental energy is required in order to apply them.

For instance, in any given situation when harmonizing a melody, an arranger generally has to ask themselves the following questions:

  • “What techniques are available for harmonizing the current melody note?”
  • “Which of the options that these techniques could produce would fit the conventions of the musical style I am working in?”
  • “Which of the valid options is the best choice in this situation?”

None of these questions can be answered simply. Students work for years to improve their facility with arrangement techniques and develop their musical judgment.

The question of the best choice requires a musical sensibility to answer, but the others are questions of generating musical structures and validating them against certain criteria. Could those parts of this work be automated for an arranger’s benefit?

Announcing elm-music-theory v1.0.0

Today I’m happy to announce the initial release of elm-music-theory, a new music theory library for the Elm language and the result of my work to answer this question.

elm-music-theory allows you to work with musical concepts like pitches, intervals, chords, keys, and scales. This includes (but is not limited to) the tasks involved in arranging sectional harmony.

For instance, if you were an arranger trying to harmonize the melody to “Slumber Song”, you could use elm-music-theory to generate a list of chord voicings that included the current melody note in the top voice, and sort them by various musical criteria to find the most viable options:

voicings =
    Music.Chord.voiceFiveParts
        { voiceOne = Music.Range.clarinet
        , voiceTwo = Music.Range.altoSax
        , voiceThree = Music.Range.tenorSax
        , voiceFour = Music.Range.tenorSax
        , voiceFive = Music.Range.baritoneSax
        }
        [ Music.Voicing.FivePart.close ]
        (Music.Chord.majorSix Music.PitchClass.d)
        |> List.filter
            (Music.Voicing.FivePart.containsPitchInVoiceOne Music.Pitch.d5)

This could save you time, reveal options you might not have considered, and help you focus on your high-level goals.
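To give a sense of what a “close” voicing method produces, here is a toy sketch in Python, with pitches as MIDI-style numbers. The `close_voicing` function is an illustrative stand-in for the idea, not the library’s implementation: the melody note stays on top, and the remaining voices take the nearest chord tones stacked directly beneath it.

```python
# Toy sketch of a five-part close voicing: the melody note on top, with the
# next four chord tones stacked directly beneath it. Pitches are MIDI-style
# numbers; this illustrates the concept, not elm-music-theory's code.

D6_CHORD = [2, 6, 9, 11]  # pitch classes of a D major six chord: D, F#, A, B

def close_voicing(chord_pitch_classes, melody_pitch, parts=5):
    """Stack chord tones downward from the melody note, top voice first."""
    voicing = [melody_pitch]
    current = melody_pitch
    while len(voicing) < parts:
        current -= 1
        if current % 12 in chord_pitch_classes:
            voicing.append(current)
    return voicing

# Melody note D5 (MIDI 74) over D major six:
print(close_voicing(D6_CHORD, 74))  # [74, 71, 69, 66, 62] = D5 B4 A4 F#4 D4
```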

To my knowledge, elm-music-theory is the first library of its kind to treat the topic of voicing chords in a structured way that reflects an arranger’s process.

I have made every effort to model the concepts I have chosen for this first release as accurately as I could. Because of this design effort, I believe elm-music-theory provides not only a foundation for arranging harmony, but also one that supports many other potential musical applications, such as harmonic analysis, writing counterpoint, and generating music procedurally.

Take a look at the examples that cover analyzing and transposing the chord progression in a song and generating four-part chord voicings.

And if these terms are new to you, here are some learning resources for getting acquainted with the music theory concepts modeled in this library.

What’s next?

Music theory is a large topic, and although I feel elm-music-theory is a solid foundation, it does not provide immediate support for more complex musical use cases.

Here are a few I am working on:

Rhythmic values and time-based structures: Right now this library does not support notes with durations, or structures for organizing them, such as measures, staves, or systems. These features will eventually allow easier generation and manipulation of musical compositions.

Melodic lines and sequences: It is possible to analyze a melodic line and generate variations on it. These variations are known in musical terms as sequences, and they are an important compositional tool for developing a melody. I’m excited for this library to support this in the future, since it has a lot of potential for compositional applications.

Chord progressions with key changes: elm-music-theory already supports Roman numeral analysis of chords in a single key. But with a concept of harmonic movement across key changes, there will be potential to identify more harmonic relationships and to represent more sophisticated harmonic plans, which will be helpful for generating compositions procedurally.

Voicings for polyphonic instruments: This initial release has focused much attention on voicing chords for small groups of monophonic instruments. Chord voicings for polyphonic instruments (like the guitar and the piano) are subject to different principles and constraints, and these will need to be modeled separately for these cases to be well-served by this library.

I hope you enjoy elm-music-theory! Feel free to reach out to me at @duncan on the Elm Slack about your projects and questions.

Originally posted at dmalashock.com


This is so cool. Maybe one day there will be an in-browser DAW for musical exploration and live performance!


It would also be cool to have a tool to quickly find all the available chords and scales that you can use on a song, given a specific musical style.


This is awesome! I’ve been following your progress for a while and I’m excited you were able to get to a 1.0.0 release! Your API is well thought out and provides a strong foundation for both applications and new framework support. My musical application using your code has fallen to the side in favor of other projects, but I look forward to using your library for playing with concepts of music generation!


Thanks to everyone who has liked and responded. I feel very encouraged by the positive response to this package.

@evelios thanks for the kind words, and I’m looking forward to seeing what you build!

@FranzSkuffka @francescortiz I like your ideas! I think they agree with my own hopes for a computer-aided compositional environment of some kind, and I would like to hear more of this kind of thing.

I think the question you raised about finding chords that are appropriate to a particular musical moment is maybe both simple and difficult.

On the one hand, it would be simple to use the library to find all chords diatonic to a key, within some constraints; the Scale.allChords function will do that, given some basic knowledge of which chord types are involved.

And it is also simple to find all chords, again within some constraints, that include a particular pitch class (as in a melody note), using Chord.detect. Either of these approaches could provide material for doing reharmonizations, but the second would give you many more possibilities and probably be more useful.
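The pitch-class search can be sketched abstractly in Python as well. This is a toy stand-in for the idea behind Chord.detect; the note-name table, the quality table, and the `chords_containing` function are all illustrative, not the library’s API:

```python
# Find all (root, quality) chords whose tones include a given pitch class,
# e.g. a melody note to be harmonized. Chord qualities are modeled as
# semitone offsets from the root; an illustrative sketch, not library code.

NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

QUALITIES = {
    "major": [0, 4, 7],
    "minor": [0, 3, 7],
    "major6": [0, 4, 7, 9],
    "dominant7": [0, 4, 7, 10],
}

def chords_containing(pitch_class):
    """All (root, quality) pairs whose pitch classes include pitch_class."""
    target = NOTE_NAMES.index(pitch_class)
    found = []
    for root in range(12):
        for name, offsets in QUALITIES.items():
            if (target - root) % 12 in offsets:
                found.append((NOTE_NAMES[root], name))
    return found

# Every chord in the table that could harmonize a melody note D:
options = chords_containing("D")
```

Even with only four chord qualities, a single melody note yields over a dozen candidates, which is the “many more possibilities” tradeoff described above.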

I think the difficult part is making sense of those possibilities, of which there would be a lot. There are conventions for how chords can progress, but no strict rules. And I think harmonic progressions are most interesting when they strike a balance between satisfying and defying our expectations (I’m thinking right now of “The Girl From Ipanema”, a very popular song with chords that don’t resolve “correctly”).

Could chord options be sorted in some way from most “formulaic” to most “surprising”, for use in an application? That seems like a hard problem, but it would be interesting to see how you could approach it!

@duncanmalashock It sounds like we have similar ideas about what we want to create in terms of applications. However, you are many, many years ahead of me in musical understanding. One of the things I struggle with most is that I like to dive head first into concepts and then have to spend months or years putting them into practice in my playing. I’m far ahead of my skills in terms of theory. This is an extension of why I became a programmer. That being said, I have a lot to learn about how and what tools would actually be useful for a performer or composer.

I would be curious to hear what tools you think would be most beneficial to those two groups to have at their disposal. I have had to start playing with concepts that I find most interesting, but I don’t think that those are the things that most people would be using in their works. I would like to be able to work on a tool that benefits a wider audience.

I would love to hear what problems you think would be best solved by the computer and where you think the pieces of ambiguity (exploding problem space) should be guided by the artist.

One day when the inspiration strikes again, I would like to explore the use of the harmony search algorithm and something like novelty search to provide rough musical filler which could be a good base of work and inspiration to get started. Hopefully it could be a good tool for fighting the blank page syndrome or for quickly expanding a melodic idea into at least a moderate harmonic base.

Keep up the great work, and I am just as excited to see where you take your work as well!

I would be curious to hear what tools you think would be most beneficial to those two groups to have at their disposal.

My approach to answering this question in general has been to learn from the creative processes of composers and musicians, and learn to model the techniques involved in the decisions they make.

Here is one such technique I’m working on modeling, which is giving me some trouble; I wonder if any of the folks here might be interested in discussing it, because I feel the design of a solution would benefit from some discussion with both engineers and musicians:

Generating variations on a melody
One technique that composers use a lot is a melodic sequence. The term “sequencing” means taking a melodic line or motif and changing it, while retaining some of the characteristics of the original version.

The opening of Beethoven’s 5th Symphony is one of the most famous examples of a series of melodic sequences:

First the motif: G G G Eb – three of the same note followed by one note that’s lower
Then a variation: F F F D – similar but starting on a different note
Two more variations follow: Ab Ab Ab G, Eb Eb Eb C

Similar to the functionality for generating possible solutions to the chord-voicing problem that I described in the original post, I think modeling the features of a melodic line and generating variations on it would be very helpful as a compositional tool, and almost essential if your goal were to generate music procedurally (say, bebop lines over a set of chord changes).

How to model an abstracted melody in this way?

A simple approach might be to model it as integer differences between notes on the scale it occurs in. Variations could be created by using different starting notes, and/or different scales.

analyzeMelody : Scale.Scale -> List Pitch.Pitch -> List Int

generateVariation : Scale.Scale -> Pitch.Pitch -> List Int -> List Pitch.Pitch

original : List Pitch.Pitch
original =
    [ Pitch.g4
    , Pitch.g4
    , Pitch.g4
    , Pitch.eFlat4
    ]

abstractMelody : List Int
abstractMelody = 
    analyzeMelody (Scale.minor PitchClass.c) original
    -- [ 0, 0, -2 ]

variation : List Pitch.Pitch
variation =
    generateVariation (Scale.minor PitchClass.c) Pitch.f4 abstractMelody
    -- [ Pitch.f4
    -- , Pitch.f4
    -- , Pitch.f4
    -- , Pitch.d4
    -- ]
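For readers outside of Elm, the same scale-step idea can be sketched concretely in Python, with scales simplified to ordered lists of note names. All names here (`C_MINOR`, `analyze_melody`, `generate_variation`) are illustrative, not part of any library:

```python
# A scale is an ordered list of note names; a pitch is a (name, octave) pair.
# Illustrative sketch of the analyze/generate design above, not library code.

C_MINOR = ["C", "D", "Eb", "F", "G", "Ab", "Bb"]

def scale_index(scale, pitch):
    """Absolute scale-step position of a (name, octave) pitch."""
    name, octave = pitch
    return octave * len(scale) + scale.index(name)

def analyze_melody(scale, melody):
    """Differences in scale steps between consecutive notes."""
    idx = [scale_index(scale, p) for p in melody]
    return [b - a for a, b in zip(idx, idx[1:])]

def generate_variation(scale, start, steps):
    """Rebuild a melody from a starting pitch and scale-step differences."""
    result = [start]
    position = scale_index(scale, start)
    for step in steps:
        position += step
        octave, degree = divmod(position, len(scale))
        result.append((scale[degree], octave))
    return result

original = [("G", 4), ("G", 4), ("G", 4), ("Eb", 4)]
abstract = analyze_melody(C_MINOR, original)            # [0, 0, -2]
variation = generate_variation(C_MINOR, ("F", 4), abstract)
# [("F", 4), ("F", 4), ("F", 4), ("D", 4)]
```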

This works well, but only under these assumptions:

  1. A melody uses only one scale
  2. A melody uses only pitches contained in the scale
  3. Each pitch in a scale is an equally viable possibility
  4. A variation should maintain the same scale steps, in the same directions, as in the original

All of these assumptions, unfortunately, break down very quickly:

(1) does not describe the many melodies and melodic fragments that are written across chord changes. Chord changes often imply a change in scale, so a melodic variation should be able to be specified in a way that includes transitions between scales.

(2) leaves out chromatic notes in melodies. The opening line of “When You Wish Upon a Star” is one example of a simple melody that nonetheless includes notes outside the scale.

(3) ignores the distinction between stable and unstable tones: some notes are poor options to emphasize in a melody because of their relationships to the current harmony. This distinction varies with musical idiom; in classical music, any note that is not in the chord must be resolved, while in jazz, all notes in the scale are available except for so-called “avoid notes”.

And (4) ignores the usefulness of variations that adjust the direction or distance of pitch transitions (like the variation Ab Ab Ab G from before).
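For what it’s worth, some classical direction- and distance-adjusting variations are easy to express as transforms on the abstract step list itself (a Python sketch; `invert`, `retrograde`, and `augment` are illustrative names, not library functions):

```python
# Simple transforms on an abstract melody, represented as scale-step
# differences between consecutive notes. An illustrative sketch only.

def invert(steps):
    """Flip the direction of every interval."""
    return [-s for s in steps]

def retrograde(steps):
    """Play the melody backwards: reversed intervals with flipped signs."""
    return [-s for s in reversed(steps)]

def augment(steps, factor):
    """Scale every interval's distance."""
    return [s * factor for s in steps]

motif = [0, 0, -2]          # G G G Eb in C minor, as scale steps
print(invert(motif))        # [0, 0, 2]
print(retrograde(motif))    # [2, 0, 0]
print(augment(motif, 2))    # [0, 0, -4]
```

Transforms like these are only a starting point, though; the Ab Ab Ab G variation adjusts the final step’s distance in a way that no single mechanical transform of the original captures.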

How can these aspects of melody be modeled in a way that lends itself to the generation of variations? And how can this variation process be parameterized in a helpful way?

I have tried to model in terms of a current harmonic context, and a melody note’s relationship to it, and have some preliminary code working to generate variations in this way, but I am not happy with my designs so far.

I am considering modeling in terms of techniques for resolving nonharmonic tones like escape tones and anticipations. This might have the benefit of being easily understood by musicians, but given that one of my design goals is for designs to apply broadly across musical styles, I wonder if this will be successful in other musical contexts like jazz.

I would be very happy to hear anyone’s input on this, particularly if you have experience with these musical ideas.


Thank you for expanding on this for me. You bring up a lot of good points that I have struggled with as well.

All of these assumptions, unfortunately, break down very quickly

This is at the heart of the issue, as you point out. The path I was planning to go down was to make simple tools that make these kinds of egregious assumptions, and hold each tool in a rigid compositional box.

  1. A melody uses only one scale
  2. A melody uses only pitches contained in the scale
  3. Each pitch in a scale is an equally viable possibility
  4. A variation should maintain the same scale steps, in the same directions, as in the original

Make all of these assumptions explicit from the outset so that the user of the tool understands the limitations and scope of when this tool would be useful.

I have been thinking of these in terms of my own musical compositional understanding. For example,
when starting to write a composition, I follow these explicit rules.

  1. Create a harmonic progression
  2. Create a harmonious melody for the composition that follows these rules:
    1. Melody notes must harmonize with the accompanying chord
    2. A melody can use passing notes as non-chord tones

Then create a tool that follows these rules.

Then, as my understanding grows, I would create a tool with fewer assumptions and fewer rules. However, every tool, by the nature of automation, always works within a box of pre-defined assumptions.

This would allow me to initially ignore the problems of each assumption and create a framework that works well in some contexts. It also means making all of these assumptions bold and clear to the user, so that they know they are being creatively limited in some context. As long as they are aware of the limitations, they can choose a different tool, or use this one as a framework to manipulate and build on toward their true work. That is where the true works come from: when people work within a framework, but know which assumptions they are working under so that they can break those rules to accomplish their actual goals.

I think by nature of music, no tool will cover all contexts, nor should they.

There is also one more big problem lurking in the background. I know you are focused on Western music and notation, which does help limit this problem a lot. However, different genres and cultures have different means of understanding harmony. It seems like your training is heavily rooted in the jazz culture of harmonic ideas, which brings with it its own framework for understanding these relationships of diatonic and chromatic harmonies. I don’t think there will ever be one model to rule them all.

I do think you already have a strong understanding of the limitations of each, which gives a good basis for the context in which each tool is most applicable to the particular composer using it.

Every time you remove one of those assumptions, the model complexity explodes. This also breaks each large assumption into many smaller assumptions, each with their own baggage.

  1. This song only uses a single scale
    • Modulations would then be mode shifts
    • How do we consider stable and unstable melodic notes
    • Which harmonic model are we using for accompaniment
  2. This song uses key changes
    • When is a modulation a mode shift or a key change?
    • Should this be modeled locally or globally?
  3. This song uses a chromatic melody
    • Which model of adding chromatic notes are we using?
    • What implications does this have for the relationship between the chromatic note and the diatonic notes?
      • This also brings in assumptions that the chromatic notes must actually be related to diatonic notes
      • This also assumes that we are using the diatonic major/minor scales as a musical foundation
      • If we allow other scales other than diatonic modal keys, how do we then abstract this tool to account for all the scales and their accompanying chromatic counterparts
      • Does this then devolve into a richer model of musical set theory?
