Proposal: Mutable Type

Lately I’ve been thinking a lot about mutability. Mutability in Elm. Is that even possible?

Here is how I imagine mutability could work in Elm. This is just a suggestion; I’m not expecting it to actually be implemented in Elm. It’s just a thought experiment.

My idea would be to have an Address type that can be thought of as a variable in the JavaScript world. Here is some made-up code to showcase how I imagine this Address type would behave:

{-| Swapping the values of two addresses:

    let temp = b;
    b = a;
    a = temp;
-}
swap : Address Int -> Address Int -> Task Error ()
swap a b =
    Address.get b --Task Error Int
        |> Task.andThen
            (\intB ->
                Task.sequence
                    [ b |> Address.equals a --Task Error ()
                    , a |> Address.set intB --Task Error ()
                    ]
            )
        |> Task.map (always ())

What do you think? Is this something that might be worth investigating? I’m not sure how it might be implemented though. I don’t want to suggest adding a new type to elm/core before I’ve explored all other options. Can this be done using ports or custom elements?

Now that elm-conf is cancelled I would like to use that free time to investigate this topic. So if anyone is interested, I’d love to collaborate.


Why is mutability something you want?

The only practical use I have is in-place mutation of data structures. So e.g. in

Array.set 10000 2 myArray

that would modify the underlying array directly, instead of first copying it and then modifying the copy. That would be much faster, but without some way of proving that myArray is not shared (i.e. that no other live references to it exist), mutating it in place is very unsafe. But I don’t think that’s what you’re thinking about.
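To make the tradeoff concrete, here is a sketch in plain JavaScript (not Elm’s actual runtime code; `persistentSet` and `unsafeSet` are made-up names): a persistent set must copy before writing, while an in-place set mutates the array that every other reference still points at.

```javascript
// Persistent: safe when the array is shared, but costs O(n) per write.
function persistentSet(index, value, array) {
  const copy = array.slice(); // copy the whole array first
  copy[index] = value;
  return copy;
}

// In-place: O(1), but every other reference to `array` sees the change.
function unsafeSet(index, value, array) {
  array[index] = value;
  return array;
}

const original = [1, 2, 3];
const updated = persistentSet(1, 99, original);
// `original` is untouched; only `updated` holds the new value.

const aliased = [1, 2, 3];
unsafeSet(1, 99, aliased);
// `aliased` itself changed — anything else holding it sees 99 too.
```

The unsafety is exactly the sharing problem above: `unsafeSet` is only correct if nobody else holds the array.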

Rather it seems like you want mutable references. That looks a lot like Elm 0.15’s Mailboxes or Haskell’s IORef. Looking into those may help you learn what you want to know.

There are lots of designs to look through!

It’s probably worth checking ST which has an interesting approach for keeping mutations “local” to a particular context. (It requires a feature that Elm does not have, and I don’t think that feature is in the “keep it simple” spirit of Elm.)

I recall hearing about transients in Clojure and wondering if it’d be possible to do something like that without the forall s trick. For example, maybe it’d be possible to have an API like:

type Transaction v a

set : Int -> v -> Transaction v ()
get : Int -> Transaction v v
andThen : (a -> Transaction v b) -> Transaction v a -> Transaction v b

run : Transaction v () -> Array v -> Maybe (Array v)

Basically, restrict the API such that a “mutable array” cannot escape the Transaction. I think one could explore this as a pure Elm API to figure out how many scenarios could be covered by something like this. How does append work as a transaction? Map? The actual optimization of transactions (e.g. combining many set calls into a single allocation) could be done after the API itself is shown to be workable.
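One way to see why the mutable array cannot escape is to sketch the same shape in plain JavaScript (the names `set`, `get`, `andThen`, `run` mirror the hypothetical Elm signatures above, but this is not a real library). A transaction is a function from a scratch array to a result; `run` makes one copy, threads it through every step, and only then exposes the result:

```javascript
// Transaction v a  ~  (scratchArray) => a
const set = (i, v) => (arr) => { arr[i] = v; };
const get = (i) => (arr) => arr[i];
// andThen : (a -> Transaction b) -> Transaction a -> Transaction b
const andThen = (f, trans) => (arr) => f(trans(arr))(arr);

// run : Transaction () -> Array -> Array
function run(transaction, array) {
  const scratch = array.slice(); // one copy; all mutation stays local
  transaction(scratch);
  return scratch;                // the scratch array never escapes mid-run
}

// Swap elements 0 and 1, expressed as a transaction:
const swap01 =
  andThen(a =>
  andThen(b =>
  andThen(_ => set(1, a),
          set(0, b)),
  get(1)),
  get(0));

const input = [10, 20, 30];
const result = run(swap01, input); // input itself is untouched
```

Because callers only ever see the result of `run`, the in-place writes inside are unobservable, which is the property a real optimizing implementation would exploit.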

This angle is also interesting in that I think this is something that comes up for people using Elm more often than other cases where mutation is helpful. (E.g. I need mutation for UnionFind in type inference, but that’s a pretty rare scenario!)
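For reference, UnionFind is a good example of why mutation matters: path compression rewrites parent pointers during `find`, which is exactly the kind of in-place update that is awkward to express efficiently in pure Elm. A minimal JavaScript sketch (illustrative, not any particular library):

```javascript
function makeUnionFind(n) {
  // parent[i] === i means i is a root
  const parent = Array.from({ length: n }, (_, i) => i);

  function find(x) {
    if (parent[x] !== x) {
      parent[x] = find(parent[x]); // path compression: mutate in place
    }
    return parent[x];
  }

  function union(a, b) {
    parent[find(a)] = find(b);
  }

  return { find, union };
}

const uf = makeUnionFind(4);
uf.union(0, 1);
uf.union(1, 2);
// 0, 1, 2 are now in one set; 3 is alone.
```

Without the mutating assignment in `find`, the amortized near-constant-time behaviour of the structure is lost.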

If the goal is to explore mutable references with more of a Task API, it’s good to check out IORef and MVar in Haskell. (I have grown to be a fan of MVar over time. The compiler uses them to manage concurrent compilation without extra queueing. It’s pretty cool!)

Anyway, just some ideas of what might be fun to look into!


Is there any reason one might want mutability outside of performance considerations (linear time, less pressure on the garbage collector)?


I’d say yes.
If performance is not the problem, one can always model memory as an array and do the mutable part as operations on the immutable array (similar to the Transaction above).

This is just theoretical; in practice it makes no sense at all. (It’s really, really slow.)

I’ve seen a lot of algorithms that really need mutability (all in the realm of mathematics), UnionFind being one of them.
There was also a contest on Twitter: proving correctness of a functional vs. a procedural program. For this contest a problem was given that essentially was “impossible” to solve without mutation. [citation needed]

If mutability is needed only for performance and highly specialized algorithms, I don’t think it makes sense to add it to the language.

It would, however, make a lot of sense to create an extension of Elm with this added, and use it where mutability is needed. Think of it as a KernelElm, where algorithms based on mutability expressed in KernelElm would be compiled to efficient JS, limiting the actual use of JS even more.

In time, such an extended language could evolve to include low-level facilities that could model resource management and effectively allow the implementation of Elm in KernelElm with no performance loss. But this KernelElm would not be Elm.

The thinking around this is that the things you leave out (e.g. mutability) empower optimization. The presentation Three Things I Wish I Knew When I Started Designing Languages has a wonderful perspective on this. I think mutability is called +1 in that presentation.

Unrelated to the above, have you looked into the relation between Linear Types and mutability?


No, I have not. Looks interesting.

I’m with you on that. Adding anything new to the language would be the last option.

What I’m more interested in is what we can do WITHOUT changing the core language.

That’s what I’m interested in: exploring ideas within the possibilities of Elm.


Really, Elm deep-copies the model on each update? I expected the compiler to optimize that away, or at least to copy on write.

Elm copies a record (but not necessarily its contents) when you use record update syntax, which is usually what the update function does. Most of the time that’s actually fine: models aren’t that big (just a bunch of numbers and pointers, usually), and most Elm apps are constrained by the DOM, not memory allocation or the CPU.

For data structures, functional languages try to be smart and add internal sharing. For instance in

x = 1 :: 2 :: 3 :: []
y = drop 1 x

Elm does not deep-copy the whole x list and then drop the first element. Rather, y will be a pointer to the second element of x, and they both share the tail 2 :: 3 :: [].
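The sharing described above can be sketched with plain JavaScript objects as cons cells (this shows the idea, not Elm’s actual runtime representation):

```javascript
const cons = (head, tail) => ({ head, tail });
const nil = null;

const x = cons(1, cons(2, cons(3, nil))); // x = 1 :: 2 :: 3 :: []
const y = x.tail;                         // y = drop 1 x

// No copying happened: y IS the second cell of x,
// so x and y share the tail 2 :: 3 :: [].
console.log(y.head);       // 2
console.log(y === x.tail); // true — shared, not copied
```

This sharing is safe precisely because the cells are never mutated after construction.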

But such sharing doesn’t work for normal JavaScript arrays. Any modification to one element requires the whole array to be duplicated. Therefore Elm’s Array is really a tree of small JS arrays, so the same sharing trick as above can be used.

But a flat JavaScript array would easily beat Elm’s tree implementation in most benchmarks. Having an Elm API for mutable arrays would be useful, but it has to be safe and fast. Maybe that can be achieved with careful API design.
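Here is a toy sketch of why the tree shape helps: a persistent `set` on a “tree of small arrays” only copies the nodes on the path to the changed leaf, leaving every other chunk shared. (Elm’s real Array uses 32-wide nodes and more levels; this toy uses one level of 4-wide chunks purely to show the idea.)

```javascript
const CHUNK = 4;

// root is an array of leaf chunks, each a JS array of length CHUNK.
function treeSet(index, value, root) {
  const chunkIndex = Math.floor(index / CHUNK);
  const newChunk = root[chunkIndex].slice(); // copy ONE small chunk
  newChunk[index % CHUNK] = value;
  const newRoot = root.slice();              // copy the (small) root
  newRoot[chunkIndex] = newChunk;
  return newRoot;                            // all other chunks shared
}

const root = [[0, 1, 2, 3], [4, 5, 6, 7]];
const updated = treeSet(5, 99, root);

console.log(updated[1][1]);          // 99
console.log(updated[0] === root[0]); // true — untouched chunk is shared
```

A flat array would have to copy all n elements per write; here a write copies only O(chunk size + tree depth) cells.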

Having the compiler optimize this copy-then-update into an update-in-place is hard, because it’s unclear when that optimization is safe. Certain (uniqueness or linear) type systems can do this though, which is something I’m currently looking into.


The more I read into the subject and the more I think about it, the more I feel like this is the way to go. Especially if we rename Transaction to MutArray (because Transactions could apply to a lot of other datatypes as well).

After a day of thinking about it, I feel like I’m still not yet up to speed with where you’re at. (I’m not yet seeing the problem with map. I guess I’ll encounter it as I move along.)

Append is a tricky question. Where is that new array coming from? Would

append : Transaction v (Array v) -> Transaction v ()

do the job? If so, would a function

nestedRun : Transaction w () -> Transaction v (Array w) -> Transaction v (Maybe (Array w))

be useful to have?

Thanks for your long reply. As sparse arrays are supported in JS, I expected JS arrays to behave like JS objects, Elm records or PHP arrays, not like arrays in Java or C, but that seems to be an implementation detail.

This works in Firefox and Chrome on a PC with 16GB, but other ways to copy the array (without contents) I tried did not work:

<!DOCTYPE html>
<script>
let i = (1 << 30) * 64 // 64 Giga

let a = new Array
let b = new Array

a[0] = "Begin"
a[i / 2] = "Mid"
a[i] = "End"

// copy
Object.keys(a).forEach(function (k) { b[k] = a[k] })

console.log(i, a[0], a[i / 2], a[i])
console.log(i, b[0], b[i / 2], b[i])
</script>

// Output:
// 68719476736 Begin Mid End
// 68719476736 Begin Mid End


But I understand that duplicating all keys of such an array will take longer than duplicating the tree used for arrays in Elm.

The browser almost certainly decided to fall back to a hash-table/tree-based implementation (similar to the object fallback when you add/delete properties a lot), since you only set those 3 keys. JavaScript arrays are allowed to be sparse like this, so browsers have to handle edge cases like that…

You can check for yourself in node:

a = []
a[0] = "begin"
a[(1<<30)*64] = "end"
a[(1<<30)*32] = "mid"
process.memoryUsage().heapUsed / 1024 / 1024 // 6.29 mb

This is a really interesting topic, as algorithms that need mutability are an area where Elm can fall flat, and that can come as a surprise well into a project.

I’ve played a bit with the API @evancz proposed above, and made an Ellie with a working implementation (obviously it doesn’t actually mutate anything, but that is a good thing, since it means no language changes are required). You can play with it here:

I implemented quickSort with it just to get a feel how programming in this style feels:

  1. elm-format isn’t great for this as there are many, many nested function calls.
  2. The implementation ended up being a pretty straightforward port from Wikipedia, once I built a couple of library functions like while.
  3. It would be neat to see how this works for more complex algorithms.
  4. To get good performance, I think some clever compiler optimisation to remove all the nested function calls would be needed?
  5. Worth exploring how this could work for mutating multiple arrays at the same time?

But this is really just a small perf optimisation. One use case for mutability I’ve needed is that some algorithms essentially require doubly-linked lists (or similar data structures) for performance. Are there any good resources on that for pure languages?


Mutability would be a big deal for videogames.

  1. Videogames in general have to move a lot of state at high frequency; while optimisations exist, often it is not possible to avoid cloning the whole state hundreds of times per second.

  2. Videogames have a flat hierarchy where everything can mutate everything else; while there are nice ways to express this, these ways are often not compatible with the optimisations in point 1.

  3. Videogames require algorithms (pathfinding, AI) that must run several times per second.
    After all optimisations are done, for my Elm game I had to implement pathfinding in javascript because Elm was simply too slow.

I think the only reason mutability matters is performance. This includes implementation of various algorithms. For example, you could implement QuickSort with immutable arrays but you wouldn’t get any of the speed benefits because the primitive operations have the wrong time complexity.
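The complexity point can be made concrete with a sketch in JavaScript (`persistentSwap` is a made-up name): QuickSort performs O(n log n) element swaps, and while an in-place swap is O(1), a copy-on-write swap copies all n elements, pushing the whole sort toward O(n²).

```javascript
let copiedElements = 0; // counts work done just for immutability

// A swap over a copy-on-write flat array: O(n) per swap.
function persistentSwap(i, j, arr) {
  const copy = arr.slice(); // copy every element before touching two
  copiedElements += arr.length;
  const tmp = copy[i];
  copy[i] = copy[j];
  copy[j] = tmp;
  return copy;
}

let arr = [3, 1, 2];
arr = persistentSwap(0, 1, arr); // ONE swap copied all 3 elements

console.log(arr);            // [1, 3, 2]
console.log(copiedElements); // 3
```

With a tree-shaped array each swap drops to O(log n), which is better but still not the O(1) the in-place algorithm assumes.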

There are definitely times, e.g., in compilers, where we basically code as if we were working in an imperative context using some combination of the state monad, results, etc. Haskell do notation tries to bury that but I’m not convinced that doing faux imperative programming is really a good choice. If you want an imperative programming model, maybe you should use a language that has one because a language that looks like something it isn’t is almost certainly going to be harder to reason about.

Now, that said, performance can be a huge issue and the overhead for immutability can be pretty harsh.

I’ve been reading this week about Linear Haskell, which seems pretty straightforward until one gets to needing support for polymorphic function arrows, at which point my guess is that you have a feature that is only viable for experts. The nice thing about Linear Haskell is that it allows the compiler to check that the mutable data really is being used safely. It’s also designed to be fully compatible with “normal” code without people having to know about it. But the cliff is pretty steep when you actually come to it.


I’m not entirely sure this holds, but I think it holds often enough that the rest of your post isn’t affected. So this is something of a side issue.

In functional programming circles we tend to assume that immutability always leads to fewer bugs. This sounds persuasive because there are certain classes of bugs that occur in impure languages but cannot occur in pure languages. And since any bug that can occur in a pure language can obviously occur in an impure language, it seems to follow that there must always be fewer bugs in programs written in pure languages. So we all think “great, pureness works”. And by and large it does. But just because a bug can occur doesn’t mean it’s as likely to, and I think there are certain classes of bugs that are more likely to occur in pure languages than in impure ones.

An example of such a class of bugs is where the local state can be captured pretty easily. Think of random number generators. In impure languages you can always just call Random(), and you don’t need to worry about maintaining the seed. In a pure language you need to remember to keep track of the seed, so it’s easy enough to make a mistake such as:

update message model =
    case message of
        DiceRoll ->
            let
                ( newNumber, newSeed ) =
                    Random.step (Random.int 1 6) model.seed
            in
            ( { model
                | dice = newNumber

                -- I should be setting the seed here to `newSeed` but I forgot.
              }
            , Cmd.none
            )

In this case, in Elm, if you’re using elm-analyse you’ll get a warning about an unused local variable. But this is just an example of the class of bug that is more likely to occur in a pure language than in an impure one.

I think most of us here wouldn’t give up purity for the sake of this class of bug, but I just thought it worth highlighting in this discussion.


As far as performance for game engines, the Nu game engine might provide some good insights. I don’t know implementation details but it might have some good thoughts about where mutation is and isn’t needed.


I don’t really know Haskell or other FP languages, but there is Immutable.js, which deals with mutations using its “withMutations” method… that looks very much like what @evancz suggested with MVar, and it requires a new type wrapper for all basic types…

That could solve a lot of stuff, but then we are back in a place with a lot of magic, the kind that was dropped together with native modules and effect modules… but maybe (I hope) we could introduce something safer with this batch-mutation stuff… I see a LOT of benefits in it for game and package development, but big risks for the simplicity of Elm itself, as there would be a lot of discussion about performance and about using mutation whenever possible…

That’s a good point. When arguing for FP, I will have to add to the caveats that implicit imperative state is much harder to capture than explicit functional state, and while ease of capture might seem a good thing, it also means you can’t protect it. Capture prevention is essentially what the Linear Haskell designs are focusing on.

I’ve extended the Ellie with the remaining functions from the core library: isEmpty, length, push, append, slice, toArray, map, indexedMap, foldl, foldr, filter.

I implemented mergeSort, again using the implementation presented on Wikipedia. The big problem here was that the code uses two mutable arrays. To solve this I used toArray |> andThen append to have two copies of the same array. I’m not sure if this is the way to go.

Here are some thoughts I had while programming:

  • Would additional functions for sub arrays be useful?
  • Can we pass a reference? How would a reference be modelled? If my method of appending arrays together is the way to go, then it would be better to think of the initial array as a store/as memory, and of append as allocating new memory.
  • What happens if two arrays are generated independently of each other but later need to interact; how can this be modelled? If we append one array to another, how can we ensure that the other array is no longer accessible?

At university we use (Array (a,Int), Int) for linked lists. Doubly-linked lists would be (Array (a,Int,Int), Int).

For example, the list [42,-2,53,-5] would be represented as

( [ (42,1,-1), (-2,3,0), (-5,-1,3), (53,2,1) ]
, 0
)

Here -1 represents the null pointer.
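The encoding above can be sketched in JavaScript to check it round-trips (a hypothetical `toList` walking the `next` pointers; each cell is (value, next, prev), the second component of the pair is the start index, and -1 is null):

```javascript
// Decode an array-encoded doubly-linked list back into a flat list
// by following `next` pointers from the start index.
function toList([cells, start]) {
  const out = [];
  let i = start;
  while (i !== -1) {
    const [value, next] = cells[i]; // prev pointer unused for forward walk
    out.push(value);
    i = next;
  }
  return out;
}

const list = [
  [ [42, 1, -1], [-2, 3, 0], [-5, -1, 3], [53, 2, 1] ],
  0,
];

console.log(toList(list)); // [42, -2, 53, -5]
```

Walking from index 0 visits cells 0 → 1 → 3 → 2, recovering the logical order [42, -2, 53, -5].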