When I start a new project meant for production, I need to evaluate whether to use Elm or something else.
Elm is good and gives a lot, so I am willing to put up with some workarounds and the occasional reimplementation of a smaller JS library.
But what are the real dealbreakers?
What are the things that make me decide “Nope, I can’t use Elm for this”?
The potential “production dealbreakers” I can think of are:
Large codebases: a breaking change in a library means that the whole app must be upgraded in one go; if the codebase is large enough this might not be possible. Beyond a certain size, the compiler may become slow or very memory hungry. Lack of asynchronous code loading. In general, Elm is not battle-tested against such large codebases.
Reliance on particular libraries that can’t be used via ports or that must access the DOM in ways that interfere with Elm’s virtual DOM.
Alpha status: things change and suddenly you might find that a feature that you relied on has been removed or substantially modified.
Slow bug fixing: Elm tries to address issues “in batches” and only when a stable solution has been found. This is good for quality, but increases the risk of stumbling into a bug that will not be fixed for a while, which could be a big problem for production environments.
I have no skin in your game, so take my answers with a grain of salt.
Breaking changes are not unique to Elm. In my experience, Elm upgrades go way smoother than in any other language, because of static typing.
Yeah. I’ve heard there are some problems with browser extensions, too, e.g. Grammarly. I think that’s also the case for other virtual-DOM approaches (e.g. React).
Please keep in mind you can also use webcomponents in your Elm app: you can communicate with them using regular HTML events, attributes, and properties, although you need to serialize everything in the process, so you can’t pass functions, for example.
I’d also like to see someone demonstrate slots with webcomponents. Apart from that, I’ve been using webcomponents with Elm successfully so far.
Evan really tries to update Elm as infrequently as possible for exactly this reason. Also, there are people working on auto-upgrade software. The simplest changes in the upgrade from Elm 0.18 to 0.19 could be automated via elm-format --upgrade. I can see this becoming even more powerful in future updates, if I understand @avh4’s plans for elm-refactor correctly.
I have heard this fear. I’d love others to chime in on this issue. Personally, I (maybe luckily) haven’t been affected by any Elm bugs yet.
I don’t want to invalidate your claims, just add some information.
I have been generating code for AWS APIs, and since AWS has a lot of APIs, I ended up with a lot of Elm code. I should really repeat this experiment, as it has been a while… I found that the 0.19.1 compiler could handle more lines of code than 0.19.0; if I remember correctly, I was able to compile 1 million lines of Elm. But I hit a problem when I tried to recompile: it would run out of memory when loading the previously compiled binaries. I can’t remember at what codebase size I could still recompile without running into that issue.
The compiler can pass command line options through to the Haskell runtime, I think? So it may just be a question of letting the runtime have more heap during a compile.
The ways I think you could do it with Elm are to have multiple separate Elm programs that communicate with each other over ports, or through DOM events (like having a web component written in Elm running inside another Elm program). Maybe that is a good thing, as it forces you to consider the interface carefully — how would you version such a thing, for example?
But generally, I think Elm hits limits when you start trying to think about how to build extensible architectures with Elm.
I ran into an issue yesterday when trying to publish a package: it failed to load the docs of the previous build to run elm diff against. The docs are fine; apparently this is a bug introduced in 0.19.1, where in rare circumstances it fails to decode its own docs. The workaround was to downgrade to 0.19.0. Thankfully, my package was on that version and did not require 0.19.1. It would seem that if it did require 0.19.1, I would be stuck.
I think for any project aiming to use Elm in production, it’s a really good idea to have engineers with a few side projects under their belt, so they have confidence in everything, know where to get help, and so on.
The quality of 0.19.1 is very high, so it’s a low-probability but potentially high-impact thing.
Reasons to use Elm: its type system will make you design better code; its libraries are well designed; and it works amazingly well for refactoring large projects.
Reasons not to use Elm in production (you more or less covered them but I want to emphasize their potential effects):
Relatedly, Elm 0.19 made this more of an issue by making it harder to go through the backdoor. As Paul Biggar put it, Elm wants to dictate where you take your technical debt and that may be a problem for commercial projects and in particular for startup projects.
Elm major releases have a history of changes that become significant issues for at least some projects. The death of FRP. The effective death of effects managers — once hailed as the future. Etc. On the plus side, major releases are rare.
On the minus side, minor releases are rare. This has meant that bugs — including crashing bugs — have lingered for long periods. Fortunately, Elm isn’t riddled with bugs, but they do exist. The defense of those delays has been a combination of a desire not to push out new major releases (good) together with an unwillingness to take time away from major releases to do minor releases. What this has meant historically is that bugs may not be fixed until the next major release and that may come with its own set of headaches. The backdoor lockdowns have made it harder to make one’s own fixes — e.g., AFAIK bugs in the virtual DOM cannot be addressed by just forking the virtual DOM code.
So, from a production standpoint, if it’s your project and you go in aware of these issues, there is a lot you can get out of it. On the other hand, if you are being employed to make this choice, it might be wise to note that the Elm community routinely falls back to pointing to its alpha status when issues are raised. Do you want to advocate for using alpha-stage technology?
There are some outliers in the data, like this one, where certain files take longer for some reason. It seemed related to GC pauses (i.e. heap getting full) but I wasn’t able to figure out the root cause with that person without access to their code.
That said, I think the time is still quite good compared to other compilers. I definitely have slower Haskell builds with far fewer lines of code!
Anyway, I recommend looking through that repo to get an idea of the current situation. The general-case build times seem to be really good, but I wanted to figure out what was going on with the outliers before publishing something about it. Also, happy to talk with people who have outlier build times to figure out what is going on!
P.S. I have not heard of object files getting so big that they wouldn’t fit on the heap! Not til @rupert mentioned it here! I believe adding something like +RTS -H512m -RTS will let you change the initial heap size, as he says. More details on that here, and lots of other flags to try. Independently, I have some ideas on how to reduce the size of those files on disk and in the heap, because that’ll make things faster in general, but I think it won’t be until at least 0.20 before I revisit that, which will probably be one or two years out.
I should definitely repeat my experiment and report back in that case. The thing this prevents me from doing is putting all AWS stubs into one huge package. Maybe that isn’t a bad thing: one package for each API, or group of closely related APIs, is probably better — or just let end users generate the stubs they need for each application instead of publishing them as packages.
The ways I think you could do it with Elm, are to have multiple separate Elm programs that communicate with each other over ports, or through DOM events (like having a web component written in Elm running inside another Elm program).
We do this for a project that has yet to see production. It works, but to some extent it feels like going back to OOP, albeit on a higher level.
I think Elm hits limits when you start trying to think about how to build extensible architectures with Elm.
I repeated the experiment, trying to compile as many Elm files as I could successfully generate. Things have moved on since I last did this, so this time I only had about 260K lines of code to work with.
It worked just fine with 0.19.0 and 0.19.1, and was also able to recompile after a small change too, without hitting the out of memory issue I saw previously.
So I can at least verify that it is happy with about 9 MB of source code and 260K lines of code. That should keep your devs busy for a while…
Our app at work is around 100k lines of Elm. Long compile times were a problem on 0.18 (Info on it), but with 0.19 compile speeds improved massively: a full build takes 3-4 seconds now, and incremental builds take 1-2s.
Async code loading hasn’t been a big blocker. Our Elm code is relatively small, all things considered; it’s the other libraries we use that really add to the bundle size. Some of that can be mitigated by using custom elements and only loading the dependencies they need when they are mounted to the DOM.
On breaking changes and refactoring, this is where Elm really shines. We’ve gone through some major refactorings as the codebase grew and requirements changed, and no other language I have worked with can touch Elm when it comes to refactoring: just make changes in the types and follow the compiler errors, and once you have fixed them all, things usually work the way you want.
Custom elements are the main way around this, and we make extensive use of them: we have D3 wrapped in a webcomponent, plus Google Places, cropping libraries, HLS video, Quill.js, and lots of other stuff; our current app has around 80 custom elements. Aside from being a way to play nice with the virtual DOM, it also leads to better code, I think: at the spot where you instantiate the custom element, all of its inputs and outputs are defined right there. With ports, I don’t like the kind of “spooky action at a distance” that comes from needing subscriptions for communication back from JS.
The 0.18 to 0.19 upgrade was pretty large: many things changed, and the blocking of native/kernel code hit a lot of projects that relied on it. We had been careful to avoid using it, though I still believe an effects manager relating effects to time would be useful (think throttling and debouncing). For projects that made heavy use of native/kernel code, I could see not wanting to upgrade; it would be a heavy lift to refactor it out.
Most of the issues we found could be fixed in user-land. If you end up needing to fork a package that uses kernel code, it can be a real pain, especially when working with a team, because you have to get around the compiler restrictions for yourself, your team, your CI servers, etc. Luckily, we haven’t run into a situation that required it yet.
None of these are dealbreakers because, as you mention, they can be more or less fixed in user-land.
However, I think they point to a deeper problem with Elm, which is how extremely centralised core development is. I slammed my head hard against this problem when I tried to do some non-trivial WebGL with it, and it wasn’t good. https://github.com/elm/compiler/issues/2056
This is also why I loathe the term “bus problem”: besides being unnecessarily morbid, IMHO it focuses on the wrong part of the problem.
But this is not the thread to discuss this; I’ll post something when I’m confident I can turn my opinions into something constructive.