EDIT: I’ve misunderstood parts of the paper; please don’t miss the second post where I try to correct myself.
I’ve recently found an interesting paper called Let Should Not Be Generalised. It argues that Hindley-Milner languages don’t really need to implement let-polymorphism.
For the uninitiated, let-polymorphism allows you to write:
foo : (List Int, List Bool)
foo =
    let
        makeList : a -> List a
        makeList x = [x]
    in
    (makeList 1, makeList True)
Without let polymorphism you’d get a compiler error along the lines of “makeList has already been used with Int, so you can’t use it with a Bool!”
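As I understand it (and see the EDIT above, I may be off here), the proposal mainly bites when the local binding has no annotation; the annotated version above would presumably keep its polymorphic type. A sketch of the failing case (fooNoSig is just a made-up name):

fooNoSig : (List Int, List Bool)
fooNoSig =
    let
        -- no annotation: without generalisation, the type of makeList
        -- gets pinned by its first use
        makeList x = [x]
    in
    -- makeList 1 pins makeList to Int -> List Int, so makeList True
    -- would then trigger exactly the kind of error quoted above
    (makeList 1, makeList True)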
What do you (perhaps @evancz in particular) think about the paper’s proposal to ditch let polymorphism? Is it essential for some parts of the Elm ecosystem? Or could we simplify the language?
I’m still reading the paper, but I was surprised to read that the Haskell ecosystem doesn’t really need let-polymorphism (at least not as much as previously thought):
We show that generalisation for local let bindings is seldom used; that is, if they are never generalised, few programs fail to typecheck (Section 4). We base this claim on compiling hundreds of public-domain Haskell packages, containing hundreds of thousands of lines of code. Furthermore, those programs that do fail are readily fixed by adding a type signature.
Given that I have yet to implement let-polymorphism in elm-in-elm, I’m somewhat invested in this possibility. And removing features from Elm does have some precedent already.
I appreciate that from a compiler-implementor perspective this might sound like a good idea.
However, I am a little confused about the benefit from a language-design perspective, specifically with regard to consistency and type inference:
It seems oddly inconsistent if let bindings behave differently from top-level bindings (although that is unfortunately already somewhat the case).
More seriously, I think that in Elm you shouldn’t change behaviour by adding a type annotation (apart from getting better error messages, and even that difference should be eliminated whenever possible).
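To make both concerns concrete, here is a sketch (with made-up names, and assuming the proposal only drops generalisation for local bindings): the same unannotated definition would stay polymorphic at the top level but not inside a let, and adding back a signature would make the local version compile again.

-- top level: still generalised, usable at several types
makeList x = [x]

usesTopLevel : (List Int, List Bool)
usesTopLevel =
    (makeList 1, makeList True)

-- local: the unannotated binding would be monomorphic, so this
-- version would stop compiling until you add back the signature
-- makeListLocal : a -> List a
usesLocal : (List Int, List Bool)
usesLocal =
    let
        makeListLocal x = [x]
    in
    (makeListLocal 1, makeListLocal True)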
But perhaps I am also misunderstanding what you are suggesting…
This is a fun, older paper in a long line of Haskell type inference papers. Many of the core challenges addressed in that line of work have to do with type classes and GADTs (generalized algebraic datatypes, where constructors can give narrower types than usual, allowing for stronger invariants). I’m not 100% up to date, but I know Haskell’s type inference continues to evolve as they add increasingly fancy type-level programming.
Like @gampleman, I wonder what the gain would be here. It might simplify the implementation, but at the cost of heavy refactoring. If Elm were going to get full support for typeclasses, then it might be worth it. If Elm were going to get multi-argument typeclasses with constraints/functional dependencies, it would definitely be worth considering.
Finally, I don’t see any “changing behavior” here, though: what changes is whether or not the program compiles, not how it runs. It’s already the case that adding type annotations might change the compiler’s behavior: writing a monomorphic type for a polymorphic function might break polymorphic usages, as in the sketch below. (Apologies if compiler behavior is all you meant; this may be a distinction only us semantics weenies care about.)
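For instance, this already fails in today’s Elm, no proposal needed; a monomorphic signature on a local binding restricts every use of it (broken is a made-up name):

broken : (List Int, List Bool)
broken =
    let
        -- monomorphic annotation: makeList now only accepts Int
        makeList : Int -> List Int
        makeList x = [x]
    in
    -- the second use no longer typechecks: makeList expects an Int
    -- but is given a Bool
    (makeList 1, makeList True)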