Why couldn't Elm (or an Elm-like language) work on the back end?


SAFE and Elmish.Bridge look very cool! :open_mouth:

At my previous job we used Meteor, which is all about reactivity, pub/sub and seamless client/server integration, so kind of similar to Elmish.Bridge in that regard, although using JavaScript. At my current job we have a classic Java SQL REST backend, and I’m pretty sure that in the same amount of time it’s taken me and three backend developers to implement a new feature, I could have done it all myself in Meteor, with a more coherent and flexible API, and no stale client data issues (mostly anyway).

Meteor is not very SAFE though, so a statically typed pure functional alternative definitely seems more reliable.

handleRequest : ClientRequest Value -> Task (ServerError Value) (ServerResponse Value)
handleRequest request =
	case request.url of
		"" ->
			ok (Json.Encode.string "hello world")
		name ->
			ok (Json.Encode.string ("hello " ++ name))

At Nomalab we’ve been using Elm on the backend for 2.5 years and I personally find it quite beautiful! But it slowly became a pain because we had to develop kernel stuff for the DB, logging, crypto, an automatic JSON serialization/deserialization library… We hit a wall with 0.19. Many mistakes were made in the process anyway; in particular we made it hard to debug. The new backend guy felt alienated (knowing Scala and Haskell), so we’ve decided to slowly ditch Elm from our backend. End of story o/

But this experience allowed me to explore some aspects of what could be done with Elm, right now. First, it’s stateless by design: processing a request maps quite well onto the concept of Elm’s Task with chaining; the downside being that you can’t parallelize Tasks, or develop custom kernel Tasks. That’s not a huge deal though, since most of the time each operation depends on the previous one, and it makes it much easier to recover if things go badly. The other aspect is that as of today most services (DB, logging, emailing, pub/sub…) are reachable over HTTP, so we already have everything we need to build some kind of smart proxy.
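That sequential Task style might look something like this. This is only a hedged sketch: `Db.fetchUser`, `Log.info`, `authenticate`, and the surrounding types are hypothetical names for illustration, not a real API.

```elm
import Task exposing (Task)

-- Hypothetical sketch of sequential request handling: each step
-- depends on the result of the previous one, and any failure
-- short-circuits the chain, which makes recovery straightforward.
-- Db.fetchUser, Log.info, and authenticate are assumed names.
handleRequest : Request -> Task ServerError Response
handleRequest request =
    authenticate request
        |> Task.andThen (\user -> Db.fetchUser user.id)
        |> Task.andThen
            (\profile ->
                Log.info "profile loaded"
                    |> Task.map (\_ -> profileResponse profile)
            )
```

The sequencing is the point: since `Task.andThen` forces each effect to wait for the previous one, you lose parallelism, but you always know exactly which step failed.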

What’s missing to make it even better?

  • a logging API (in the meantime you can send logs back to JS in the response)
  • a fast & solid crypto API (in the meantime there are some libs available; beware)
  • an official Http shim (the xhr npm package supports GET file:// !)

It’s also serverless compliant (being stateless), and I’ve been very keen on Cloudflare’s Workers, which are the browser’s ServiceWorker API running on the edge of Cloudflare’s network. If you don’t know any of this, I invite you to check it out, but basically ServiceWorkers in your browser are shared across all tabs and intercept HTTP requests to let you manage cache or auth, among other things. On Cloudflare, it’s basically a programmable (a function) cache and proxy service (think auth, cache, HTML rendering…). I’m wondering if this is on the community’s radar?


I feel like some things that would be nice to address in the roadmap explanation, as an incentive to want Elm on the back end, would include:

  • I want a server-side language with a sufficiently powerful and friendly compiler that if my code compiles, it’s got exceptionally strong guarantees to be free from code-induced runtime exceptions.

    I feel like what to do about unavoidable runtime exceptions for state held over long periods of time lies more with the kernel, so maybe an approach like Erlang’s would help for the kernel itself at least: both for handling runtime exceptions and for concurrency.

  • I want a compiler both strict and minimal enough to help make impossible or undesirable states unrepresentable.

  • I want a purely functional, strongly side-effect-controlling language on the back end, and I want the compiler to support concentrating all side effects (including talking to network clients, talking to disk persistence, managing concurrency, etc.) into plugins at the kernel, so that those kernel plugins can be drained of as many moving parts as humanly possible. I want to be able to write the majority of my business logic in a realm where I am guaranteed that every function will return the same output for any given set of inputs every single time: a property which greatly improves caching, lazy evaluation, and the effectiveness of testing.

I’m not aware of any other languages that can offer all three of those guarantees simultaneously (especially the one about YAGNI minimalism), so that represents the Elm-shaped hole that I see in the server-side arena. :slight_smile:
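In Elm terms, that third point is the familiar “pure core, effectful shell” split. A minimal sketch, with all names hypothetical:

```elm
-- Pure business logic: the same inputs always produce the same
-- output, so this function is trivially cacheable and testable
-- in isolation, with no mocks needed.
applyDiscount : Customer -> Order -> Order
applyDiscount customer order =
    if customer.isReturning then
        { order | total = order.total * 0.9 }

    else
        order
```

The effectful parts (DB reads, network calls) would stay at the edges, fetching plain values and feeding them into pure functions like this one.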


Can Debug.log suffice for server-side logging?

If I correctly interpret “sending back to JS in the response” to mean having the server-side logs show up in the browser’s console – that’s kinda cool, but only for development?

Do you mean https://www.npmjs.com/package/xhr? How do you hook it up with Elm? I’m using w3c-xmlhttprequest but it doesn’t handle errors well and crashes the entire Node process (e.g. when the network is unavailable):

// expose the XMLHttpRequest implementation globally so Elm's Http can find it
const { XMLHttpRequest, OVERRIDE_PROTECTION_DESCRIPTOR } = require('w3c-xmlhttprequest/lib/constants')
global.XMLHttpRequest = XMLHttpRequest


Since 0.19 Debug.log is not available in optimized code, and it prints a label, a colon, and then Elm-formatted objects. My ops team just wants plain JSON on each line so they can read it in Python. And when I say send it back to JS, I mean that my responses come in from a port, then go back out through another port. I’m using the latter to also send logs. With that strategy though, if for any reason the request handling crashes there are no logs, just the stack trace.
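That two-port setup can be sketched like this; it’s only an illustration, and the port names and JSON shape are invented for the example:

```elm
port module Server exposing (log, logLine, respond)

import Json.Encode as JE


-- Two outgoing ports: one for responses, one for structured logs.
port respond : JE.Value -> Cmd msg


port logLine : JE.Value -> Cmd msg


-- Emit one JSON object per log line, so ops tooling can parse
-- stdout line by line instead of dealing with Debug.log's
-- "label: value" format.
log : String -> List ( String, JE.Value ) -> Cmd msg
log level fields =
    logLine (JE.object (( "level", JE.string level ) :: fields))
```

On the JS side, the `logLine` subscription would just `console.log(JSON.stringify(value))`. The caveat from above still applies: if the handler crashes before reaching the port, no log line is emitted.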

As for the xhr library, I’m in fact using https://www.npmjs.com/package/xmlhttprequest (my mistake, sorry). I just put it as a global to make it available to Elm. AFAIK, I haven’t had any runtime error from this library in 2 years.

I have an old repo for a proof of concept, I’ll update it today to show what I mean exactly.


I’d somehow thought “optimised code” merely referred to the size of the compiled JS and didn’t think it’d matter much on the server side :joy:

Thanks I’ll try out that other xhr :bowing_man:‍♂:bowing_man:‍♂


I’d like to add my 2 cents to a few sub-topics discussed.

I think I know what you are referring to. Haskell web server code relies on the IO monad (i.e. the imperative paradigm) a lot, because IO is what you are doing when you model your solution as a program/executable, in which case you can indeed drop connections, etc.

The world isn’t functional, so you need some abstraction to model things in a functional way. In Haskell you have imperative code (IO monads) evaluating pure functions (sort of). In Elm the entire imperative part is compressed to TEA. I don’t think either abstraction makes a lot of sense for the backend, but functional programming does.

Function-based serverless architectures (AWS Lambda, Google Cloud Functions…) essentially model a backend as a bunch of Request -> IO Response functions, similar to Haskell, but with fewer options in the ‘IO monad’, so each handler is independent of the others. This lets the system scale horizontally and allows/forces the programmer not to care how, when, and where functions are evaluated. In other words, the ‘functional bit’ (even though it’s internally imperative) is evaluated by a network. This seems like a saner route than TEA for a backend abstraction, as the latter handles messages sequentially.

I would like to point out that with stateless request handling, (de)allocation can be done really fast: keep growing the stack while handling the request and throw it all away when you’re finished. This achieves a lot of what you would get by mutating arrays, as you can just reuse the same stack over and over. I vaguely remember this being called tournament allocation, but I have not been able to find any references to it, so I’m probably wrong. You might even be able to get rid of that minimal allocation logic and use constant memory addresses instead if your functions have no recursion. So immutability does not have to be a bottleneck in the right circumstances (statelessness, non-recursion, …).


I gave some thought to this topic recently. I think the simplest difference between front-end and back-end is the number of users you deal with. No matter how complex your front-end Elm code is, there is only one user running an instance of your Elm app. The amount of state for that one user and the types of concurrency are manageable with a single in-memory model. Also, there is always a clear state for your app that maps to the UI at any given moment.

On the backend, your server typically has to deal with requests from many different users, and keeping a single model with request and response details doesn’t make much sense, nor is it practical from a security or performance standpoint.

I think programmers often abstract things too far and try to apply the same architecture to every problem and domain. One of the selling points of Elm is the intense focus on the specific domain of building web UIs. Even within that restricted domain, there is much work to be done to improve debugging, testing, and operational tools. Although there is certainly value in exploring Elm-like approaches to the backend, it would require a very large amount of work and the result would likely be quite different than what Elm is today.


using global.XMLHttpRequest = require('xmlhttprequest').XMLHttpRequest with a standard example usage in Http.get

GotText (Ok str) ->
    let
        _ =
            Debug.log "ok" str
    in
    ( model, Cmd.none )

prints ok: <internals> ??


I haven’t tried with the latest Http; it seems to me that it tries to give you a JSON object (hence the internal value).


I think a lot of opinions and ideas have been contributed in this thread. It’s going off topic, which seems to indicate that we should close it.

It would be super cool if people explore this area experimentally and post any new findings!


Maybe I should note that I have been working on a personal project in Haskell and Elm, and I’ve been writing the Haskell part as closely to TEA and Elm style as possible (partially because I don’t know the true Haskell way, and partially to experiment and see what kinds of problems I will run into). Here’s my update function, for example: https://github.com/Chadtech/Radler-ui/blob/master/engine-src/Update.hs#L30

But anyway, this project has slightly increased my confidence that something TEA-ish could be done on the backend, in an Elm equivalent. For example, I don’t yet have a reason why this API and code wouldn’t work:

import Server exposing (Request, Url)
import Json.Decode as JD
import Json.Encode as JE
import Route exposing (Route)

main : Program JD.Value Model Msg
main =
    Server.run
        { update = update
        , init = init
        , onRequest = RequestReceived
        }

type Msg
    = RequestReceived Request Url

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        RequestReceived request url ->
            case Route.fromUrl url of
                Just Route.Echo ->
                    ( model
                    , Server.respond
                        { statusCode = 200
                        , body = request.body
                        }
                    )

                Nothing ->
                    ( model
                    , Server.respond
                        { statusCode = 404
                        , body =
                            JE.object
                                [ ( "error", JE.string "Page does not exist" ) ]
                        }
                    )

I think it depends on your target. HTTP can be more complex than that: just for the request, you can do header parsing in Elm or rely on Node’s; body parsing is async in raw Node, so should it be in a standard module as well? You can stream responses in Node, so suddenly you need slightly more complex I/O. Nothing impossible here, but from proof of concept to prod there are tons of small decisions that will have as many opinions. If you want a standard HTTP server that pleases everyone, it needs to be quite generic; if you want a simple JSON-RPC-like mechanism, you can be more opinionated and pick conventions.


This topic was automatically closed 8 hours after the last reply. New replies are no longer allowed.


I lowered the “close automatically” time so that any last comments could get in for posters who were wrapping something up here.

If you would like to continue a specific thread of discussion in this thread, please make a new post about that specific topic. Just reference this thread and I believe links will show up to the new conversations at the bottom here!

(I personally think threads tend to lose focus after about 10 or 15 posts, and participants can help keep things focused without intervention by starting new threads about their specific subconversations if it feels like there are two or more different conversations interleaved in one thread. E.g. is this for the OP to read? Or is this something you’d like to say independent of the OP? Etc. I hope that makes sense!)