Are SPAs particularly well suited to statically typed languages, compared to software systems in general?

Front-end web is the least predictable platform. Not only do you have to deal with just as many business-requirement changes, but you also have changes coming at you from the runtimes in all directions: from Firefox, Chrome, Safari, and oh-by-the-way-we-have-older-users-who-say-they-need-Internet-Explorer.

(And I’m afraid I can’t help but say it: I think that the fact that Elm works so well in the browser despite all this is a strong indictment of the premise. If front-end web is such a quick-changing environment, and languages like Elm and TS are known for making those changes easier to implement without causing runtime exceptions, then it follows that complex systems are ill-suited to dynamic typing.)


Rich makes a big thing about Maybe, and I agree that the number of Maybes in the code will increase year by year.

But if you do extensions right, it will mostly mean more Maybe fields added to records. And you can do a cleanup every third year, when you no longer need to support some old APIs that didn’t send these fields. And when you remove the Maybes, the compiler will tell you when you are done.
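As a sketch of that pattern (in TypeScript rather than Elm, with invented names; an optional field plays the role of a Maybe-typed record field):

```typescript
// Hypothetical record type. Older APIs don't send `email`, so for now
// it is optional — the TypeScript analogue of a Maybe field in Elm.
interface User {
  name: string;
  email?: string;
}

// Callers must handle the missing case explicitly:
function emailOrDefault(u: User): string {
  return u.email ?? "unknown@example.com";
}

// Later, when the old APIs are retired, change `email?: string` to
// `email: string` — the compiler then points at every construction
// site that still omits the field, telling you when the cleanup is done.

const legacyUser: User = { name: "Ada" };
console.log(emailOrDefault(legacyUser));
```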

I have been doing functional programming since 1983, both statically typed languages and dynamically typed ones like Erlang and Clojure. I have systems written in Erlang, Clojure, Elm, F#, and OCaml in production, mostly web backends.

The disadvantage with Erlang and Clojure is that I personally waste time while developing the initial versions, and that fixing minor things in software I haven’t touched in 5-10 years is a nightmare. Therefore I still use Erlang R18 for some production systems, since even the slightest changes are risky. The same is true for my Clojure system, where the latest JVM changes affected the language and libraries.

Clojure was written to develop the database Datomic, and if you compare the activity around Datomic to that around ScyllaDB (companies of similar size originally), ScyllaDB seems much more successful. Clojure is good for fast initial versions created by super-talented developers, but even super-talented developers will have problems using Clojure in the long run. That might explain the difference in success between these two companies.


Thanks @eleanor, you sure seem to have captured the general sentiment here.

Well, the dynamic typing crowd will of course answer: “No runtime exceptions, but at what cost?”. If my understanding is correct, they would be perfectly happy to trade some runtime safety for speed of development (i.e. less ‘fighting the compiler’) and shorter feedback loops (REPL-driven design).

(I’m afraid it’s almost impossible to refute the premise. Or to prove it, for that matter. We would have to carefully craft empirical studies comparing both approaches, while controlling for confounders like developer talent. It would take not one, but many studies like that to make sure the conclusions are reproducible. But that is a totally different discussion.)

Not trying to troll here. I’m just a neutral bystander, genuinely impressed by the accomplishments of both the Elm and the Clojure community. Both languages seem to bring exceptional joy to their users and the fact that great products are built with either of them is a testament to the genius of both Hickey and @evancz.

Anyway, one way to reconcile your answer with Hickey’s premise would be to acknowledge that, in the end, building SPAs, like any other software system, boils down to personal preference. You have these two approaches, with totally different tradeoffs, successfully used by different enthusiastic people to build great things. Just pick the one that best fits your personality.

Not exactly groundbreaking, and to be honest, this feels a bit underwhelming to me personally, because I don’t have a strong preference (that’s probably partly why I started this topic). But it is, of course, objectively a perfectly valid conclusion. When you think about it, so many things are driven by personal taste. Why wouldn’t your choice of programming language be one of them?


A SPA is effectively a closed system. Web frameworks — which is essentially the role Elm plays if you squint hard enough — serve to isolate the browser issues. And even where they don’t, it’s a relatively small set of gradually changing APIs that affect presentation implementation, not app functionality. Server APIs may change, but the burden of the open system is on the server side, which needs to support older clients. The SPA team only has to worry about one version of the server API, assuming that the same company owns the server. That leaves changes driven by wanting to add new functionality to the app, and that’s pretty much an issue for any software system.


It is also an issue of expressiveness. There’s a spectrum of what can be expressed by a type system. There is also a cost to expressing something in a way that satisfies the compiler, and you pay this cost every time you read that code. And you read code more often than you change it.

On topic: Elm tries to create a closed-system box, where you leave all real-world issues outside. Given the nature of web apps this is quite nifty, but it would easily break down in bigger systems.


I wonder if maybe this is the wrong approach for these kinds of discussions. In general it’s not terribly difficult to build any particular project with any particular language, framework, typing discipline, etc. Sure, Elm may not be suited to servers or CLIs, but it can be done. To me the more interesting question is how the code base holds up 5 or 10 years down the road. Is it still easy to build? Is it still easy to onboard? Is it still easy to add features? In my experience, this is where most programming tools don’t focus.

So to maybe rephrase: are SPAs, and other monolithic projects, particularly well suited to statically typed languages compared to software systems in general? To which I would answer: it depends on the static type system, but generally I’d say yes. An HM-style type system combined with Elm’s custom types is a substantially different experience from, say, writing Java, despite both being statically typed.


I think the issue is more broadly philosophical, but the way I would approach it is that it’s analogous (or perhaps even isomorphic) to the question of whether it is appropriate to use mathematics to model a particular domain. This can go from a fairly enthusiastic yes in the case of, say, physics, to a “maybe, if you are exceedingly careful and you count statistics as math” in the case of something like psychology, to a no in the case of something like art.

I say this corresponds because types are logical formulae and programs are their proofs; therefore types are going to be more useful for programming domains where using logical formulae would be a natural way of talking about the domain.

Most logics are defined more by what they cannot say; similarly, a type system’s use is in restricting the kinds of programs that can be expressed. Ideally, a type system would only prevent “bad” programs from existing, while allowing every possible “good” program to be expressed, but as usual it’s difficult to pin down what “good” and “bad” exactly mean in this context. For instance, Elm prevents programs from crashing in most contexts (I think that’s “good”), but also prevents programs from being faster in some contexts by disallowing mutation (which may well be “bad”).

So one way of deciding if a type system is suitable for a particular domain is to check whether the things it forbids you from saying are not things you need to say, and ideally whether it forbids the things you don’t want to say.

The costs of getting this wrong are different in the two cases: if you need to express something the type system forbids, then you will need to figure out a suitable escape hatch, and that can have some important consequences (for instance, if you need a mutable algorithm in Elm, you will need to implement it in JavaScript, then figure out the architecture around it to be able to use ports, then figure out whether the encoding/decoding needed at the boundaries doesn’t nullify any performance benefit you would gain, etc.). In general it will lead to much more complex code. On the other hand, if the type system allows something that shouldn’t be allowed, this typically manifests itself as bugs. That has the knock-on effect of making you less confident in refactoring, as well as needing to rely on other correctness mechanisms such as testing.

Dynamic languages ensure that you never need to pay the first kind of cost by letting you pay the second kind in full.


There’s a hidden assumption in that reply which I think isn’t justified: That runtime safety implies slower development speed and, even worse, longer feedback loops.

I get that this is a very nuanced topic but at least the above seems to me to be the opposite of the truth in my experience with some languages, especially Elm.

IMO a better reply would be to point out the cost of the limitations such languages impose, not their supposed longer feedback loops or slower development speed.


Thanks @gampleman, nicely put. But the question remains: where do you think SPAs fit in? To use your analogy, do they lend themselves particularly well to mathematical modeling? Or does it, in turn, depend on the business domain of the SPA?

I don’t think this is a question of what type of program is suitable for what. I think this is simply an instance of people using cleverly hidden flaws of argumentation to prove their own biases right. There are only two types of problems: those which can be solved in multiple ways and those which can’t be solved in any.

I also think Rich is well aware of some of the flaws in his own arguments. This is the part of informatics that is closer to engineering than to a hard science like math: the chase for the best solution with the best set of trade-offs. It’s valid, it’s just not that definitive. Just a few examples:

The solution to strengthening a promise vs. relaxing requirements is obvious: you just don’t break the existing function if you don’t want to break callers. Instead you implement a new function newFoo : Maybe Bool -> Foo and reimplement the old one, originalFoo : Bool -> Foo, in terms of it as originalFoo = newFoo << Just.
Much like the solution in a language without Maybe, this means callers might have leftover unnecessary checks, but at least it’s explicit about the fact that they do.
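That wrapping pattern can be sketched in TypeScript too (Foo, newFoo, and originalFoo are the hypothetical names from the example above; boolean | undefined plays the role of Maybe Bool):

```typescript
// Hypothetical result type from the example above.
type Foo = { label: string };

// The strengthened function: it accepts the relaxed requirement
// (Maybe Bool becomes boolean | undefined).
function newFoo(flag: boolean | undefined): Foo {
  return { label: flag === undefined ? "unspecified" : String(flag) };
}

// The original function keeps its old, stricter signature and is
// reimplemented in terms of the new one, so existing callers don't
// break. This corresponds to Elm's originalFoo = newFoo << Just.
function originalFoo(flag: boolean): Foo {
  return newFoo(flag);
}
```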

Another example might be Maybe Sheep. The problem here is that what he suggests is Sheep, probably some record/hashmap of data. That is in fact not a sheep; a sheep is not a bunch of addresses in memory filled with 0s and 1s. So the argument about “reality” is completely off the table. Maybe Sheep means “I maybe have data that describe properties of an entity called sheep”, which is IMO much better than saying “my data about an entity called sheep reference the same object as my data about a car or any other thing that does not exist” (because that’s exactly what null/nil is).

That doesn’t mean he doesn’t have any good points; he has plenty. But he is making them in service of his own case, and perhaps more important than the points themselves is understanding the position from which he is making them. But to think he has proven the Maybe type a bad idea? I don’t think that’s the case at all.

I think you’re missing the point. It is not Maybe vs. littering code with null checks. He proposes a real solution: spec. And funnily enough, in the Elm community we rather agree on a similar solution for information modelling: extensible records.

Information is open, and defining closed types complicates things. Functional programming folks have tried putting Maybes, Eithers and other functors inside data aggregates (in Haskell you can parametrize a type over a functor, so this is easy). But that leads to a proliferation of types due to nominal typing, and that proliferation of types leads to an explosion of functions needed to convert between them. Which is not essential complexity!

If we need to handle posts from the server and posts inside a user form, we need similar types: an aggregate of author, content and creation date. But posts from the server have an id from the database; posts in the form don’t, because they haven’t been saved yet. One could add an id field with type Maybe Uuid.

A better solution is named extensible records: one with author, content and creation date, and another, more specific one that adds the id. This is clojure.spec in the Elm/PureScript/TypeScript world. It is still open: no Maybe re-wrapping, and simpler functions that work across different types.
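A rough TypeScript rendering of that idea (PostFields and SavedPost are illustrative names), using intersection types where Elm would use extensible records:

```typescript
// The open base aggregate: author, content, creation date.
type PostFields = {
  author: string;
  content: string;
  createdAt: Date;
};

// A more specific shape that adds the database id — no Maybe wrapping.
type SavedPost = PostFields & { id: string };

// Functions written against the base shape work on both variants:
function summary(post: PostFields): string {
  return `${post.author}: ${post.content.slice(0, 20)}`;
}

const draft: PostFields = {
  author: "Ann",
  content: "Hello",
  createdAt: new Date(),
};
const saved: SavedPost = { ...draft, id: "42" };

console.log(summary(draft));
console.log(summary(saved)); // same function, more specific record
```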

Please also note that Rich talks about information. That’s a big part of our applications, but we also need data that corresponds to computation artefacts: for example the result of a division, or the status of a balanced-tree node. There Maybes, Eithers and variants are great. And Clojure also has a “dynamic” version of variants that is rather closed: deftype/defrecord.

A cute thing: TypeScript is more flexible than Elm here, and it is easier to express clojure.spec-like selection with ad-hoc type expressions like Pick, Omit, etc. We had a macro for Elm that generated code from similar definitions.
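For instance, a hypothetical sketch of what that ad-hoc selection looks like (Post and PostForm are invented names):

```typescript
// One source-of-truth type for a post.
type Post = {
  id: string;
  author: string;
  content: string;
  createdAt: Date;
};

// Derive the form's shape ad hoc instead of redeclaring it:
type PostForm = Omit<Post, "id" | "createdAt">;
// Equivalent here: type PostForm = Pick<Post, "author" | "content">;

const form: PostForm = { author: "Ann", content: "Hello" };

// Promote a form value to a full Post by supplying the rest.
function save(form: PostForm): Post {
  return { ...form, id: "generated-id", createdAt: new Date() };
}

console.log(save(form).id);
```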



Is this thread about polymorphism and the limitations imposed on it by strict typing?

Would it therefore be fair to say that the permissiveness of types (and coercion?) in dynamic languages allows greater polymorphism?

Does a greater degree of polymorphism in dynamic languages give us the impression that they can be faster and easier to develop in than some static type systems?

A recent continuation of a 2016 discussion about Type Classes in the F# community included a listing of their pros and cons by the creator of the language.

Would it be correct to say that the F# discussion was also concerned about using type classes to provide polymorphism?

If we were to talk to someone who applied Domain-Driven Design extensively using algebraic data types, would they prefer the benefits of entity creation over polymorphism?

Would a discipline like DDD prefer well-established ADT domain entities over clever, concise polymorphism? What would the Domain say?

In Elm can we have our cake and eat it, too?


I think SPAs are not any better suited to typed languages than software in general.

Looking at Elm, one thing that is hard to do with it is a micro front-end architecture: one where the application is composed of many independent sub-systems that dynamically plug together, so that at a later time third parties can add new components without rebuilding the rest of the application. We don’t have interfaces in Elm (but can emulate them with records of functions), and we don’t have dynamic code loading. We do face some challenges trying to use Elm to build systems like this.
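The record-of-functions emulation mentioned above looks roughly like this (a sketch in TypeScript rather than Elm, with invented component names):

```typescript
// A "record of functions" standing in for an interface: each pluggable
// component supplies its own implementations.
type Component = {
  name: string;
  render: (state: number) => string;
  update: (state: number) => number;
};

// Hypothetical components plugged into a host application.
const counter: Component = {
  name: "counter",
  render: (n) => `count: ${n}`,
  update: (n) => n + 1,
};

const doubler: Component = {
  name: "doubler",
  render: (n) => `value: ${n}`,
  update: (n) => n * 2,
};

// The host only knows the record shape, not the concrete components.
function step(c: Component, state: number): string {
  return c.render(c.update(state));
}

console.log(step(counter, 1)); // "count: 2"
console.log(step(doubler, 3)); // "value: 6"
```

In Elm the shape would be a record whose fields are functions, passed to the host in the same way; what neither version gives you is loading such a component dynamically after the application is built.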

Of course, not all typed languages are quite so restrictive as Elm. But I give the above example to show that there can be a desire to have extensible architectures in web applications, and that strong and closed type systems face the same difficulties there as with extensible architectures elsewhere.

The way I see it, the benefits of strong typing outweigh the disadvantages. Do they really slow down development? I am not convinced. When I took over as custodian of elm-serverless there was a code generator with it, written in Javascript. I wanted to work on this and needed to understand the data model it was generating from, except… nowhere in the code was that data model declared, I just had to read the generator code and figure out what the implicit data model was. That was such a high barrier to understanding that code that I rewrote the whole thing in Elm, with an explicit data model - after that my development work could proceed much faster. Types aid understanding and that saves me time in the long run.

Types also catch bugs sooner in the development cycle, at compile time instead of run time. And there is an argument to be made that the later on a bug is found, the more it costs to fix, both in terms of time and money.

Spec (Clojure - spec Guide) looks interesting. Strikes me that it is on the wrong side of the “Parse don’t validate” argument though?


Yes, this is a good example of something that works quite well in a dynamically typed language and something that doesn’t in a statically typed language.

A different distinction I sometimes notice is the amount of meta-programming you need to do. Dynamically typed languages often make this painless enough that you don’t even realise you’re meta-programming. A while back I wrote a blog post about this.

I think this is roughly why writing a generic forms library in Elm is difficult.

So for example, imagine you want to write an admin interface for a web application, where you mostly want to allow the admin to change any of the data, in any way they want. So you’re going to end up with a form to edit a particular database record. If you want to do this for a single web application it’s quite doable, but if you wish to write a generic admin interface, one that takes in the database schema (or some description of the data) and presents the user with a reasonable form for that kind of data, that’s quite hard in a statically typed language. You end up partially writing a dynamically typed language interpreter, an application of Greenspun’s tenth rule.
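To make that concrete, here is a toy sketch (TypeScript, with invented names): a value-level schema is interpreted at runtime to produce form controls, which is exactly the mini-interpreter flavour described above:

```typescript
// A tiny runtime description of a record's fields — in effect a
// value-level "type" that the statically typed program interprets.
type FieldSchema =
  | { kind: "text"; name: string }
  | { kind: "number"; name: string }
  | { kind: "checkbox"; name: string };

type Schema = FieldSchema[];

// Interpret the schema into form controls (here, plain HTML strings).
function renderForm(schema: Schema): string[] {
  return schema.map((field) => {
    switch (field.kind) {
      case "text":
        return `<input type="text" name="${field.name}">`;
      case "number":
        return `<input type="number" name="${field.name}">`;
      case "checkbox":
        return `<input type="checkbox" name="${field.name}">`;
    }
  });
}

const userSchema: Schema = [
  { kind: "text", name: "author" },
  { kind: "checkbox", name: "published" },
];

console.log(renderForm(userSchema).join("\n"));
```

Note that the static types only describe the schema language itself; the shape of the data being edited exists purely as runtime values, which is the dynamic-language-interpreter pattern in miniature.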

So are SPAs more likely or less likely to require the kind of generic programming that would be easier/best done in a dynamically typed language? I don’t know either way, but my feeling, for what it’s worth, is that SPAs aren’t especially likely or unlikely to require that kind of generic/meta-programming.


As I said, this is not a divide between dynamic and static typing. You can have highly polymorphic code in a static language.

It is about expressivity. You need to be able to express your domain in your language. If the language needs boilerplate or otherwise hinders this ability, your development will suffer. Dynamic languages are more flexible, so it is easier to express your domain.

I can understand every single word here, yet the meaning of the sentence eludes me. I don’t see how polymorphism can be chosen over algebraic data types or vice versa; they are orthogonal.

Not really. s/conform does the parsing part. Of course, just like in every other language, you need to be careful to parse data at the correct boundaries.

Wow, things in programming get real trippy real fast.

It seems that Rich is trying to solve a very old problem: the definition of, access to, and use of named values and their collections, through the use of schemas (if I understand correctly).

Is Clojure’s spec more an evolution of the language’s own maturity, improving explicitness in a dynamic language while further improving flexibility?

Considering that Elm is designed for SPAs and the whole environment is optimised for usability, it’s definitely worth a try.

If you want to see how well the algebraic type system can be used for modelling domains check out Scott Wlaschin’s Domain modelling made functional talk. It translates very well to Elm.

One could argue that this old problem does not have a solution yet. Static languages are preoccupied with type theory and try to shoehorn reality into types. Clojure takes inspiration from research around information and semantics.

Those are very basic examples. They do not show how you should tackle a real domain that consists of open information and has an inherent lifecycle. And most importantly, information comes in the form of aggregates, often recursively. That’s why it is very hard to find good examples in articles and talks: real examples are big and messy.

Talks presenting functional programming and DDD also fail to mention what communication across boundaries looks like. What happens when you need to put all those fancy phantom types on the wire?


There are quite a few examples in talks about fancy types across boundaries: Lamdera, generating types from Haskell Servant types, GraphQL and its many generators, protobufs and their various generators, and more.


Interestingly I just found this in Scott’s preface to his book “Domain modelling made functional”:

"This book focuses on the ‘mainstream’ way of doing domain modeling, by defining data structures and the functions that act on them, but other approaches might be more applicable in some situations. I’ll mention two of them here in case you want to explore them further:

If the domain revolves around semistructured data, then the kinds of rigid models discussed in this book are not suitable and a better approach would be to use flexible structures such as maps (also known as dictionaries) to store key-value pairs. The Clojure community has many good practices here."

Even complex SPAs would likely be considered mainstream by most, including Scott.

The other thing about the philosophical approach of DDD is that it approaches software design from the business first, at a very fundamental level. I recently read how a dynamic-language programmer appreciated being able to dive straight into coding without needing to think about and define types at that early stage. That is definitely not a DDD approach, and I think this is where the rigidity of concrete algebraic types is not a disadvantage but an advantage. The ease with which code can be safely and quickly refactored in Elm further reduces the seeming disadvantage of the rigidity of an algebraic type system.

One question is whether something like a map-based system could be used with an algebraic type system, and how complementary they would be. I guess record aliases are a kind of map, but perhaps a map-based modelling system is so fundamental to a language as to be entirely different from, and incompatible with, the idea of maps with types (either containing types or being contained by types). I’m wondering if typeclasses are an attempt to solve some of the rigidity of algebraic types, albeit in a very different way.

The thing that Rich seemed to also indicate was that maps handle highly nested data quite easily.

It’s a fascinating topic.
