Where is Elm going; where is Evan going

Unfortunately I don’t know; I just learned about it in the Elm Weekly newsletter.

I started a discussion specifically on that meetup event:

https://discourse.elm-lang.org/t/evans-presentation-at-swedish-meetup/9642

1 Like

I’m trying to establish Elm at work. Unfortunately, the mere existence of a parallel fork like Zokka will make all my colleagues extremely skeptical, as it indicates the main project will not accept simple bugfixes. This is a catastrophic situation. If the goal is to make more people use Elm, the main compiler repo needs to merge those bugfixes.

I think defining a narrow domain for the language was the key reason Evan was able to keep it so simple and elegant for this specific use case. Although I understand the desire, elevating Elm to a general-purpose language seems risky for that reason. PureScript went the general-purpose route, and now it can’t do TEA as elegantly as Elm.

To me, it seems a better strategy to make Elm a better Elm: first solve the actual real-world pains it still has, and only later expand the target domain beyond the web. I’d be interested in contributing to such a language, but not to a general-purpose version of it.

I think adding Abilities would be the one big feature everyone wants in the current Elm, wouldn’t it? What are the biggest pain points, and do we have data on that? Polls?

The ad-hoc polymorphism from earlier was also a workaround I tried. This approach puts functions into records, which seems fine initially. Tragically, it means we can’t compare two records of this type anymore, nor any type containing such a record. Even more tragically, equality checks still compile, but result in A RUNTIME ERROR, even though the compiler should easily be able to catch this at compile time!
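For anyone who has not hit this yet, here is a minimal sketch of the failure mode (all names hypothetical):

type alias Adder =
    { label : String
    , add : Int -> Int -- a function stored in the record
    }

increment : Adder
increment =
    { label = "increment", add = \n -> n + 1 }

-- This type-checks, but evaluating it throws a runtime exception,
-- because (==) cannot compare functions:
sameAdder : Bool
sameAdder =
    increment == increment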

It will be hard to convince my coworkers of this, if the basic promise of ‘no runtime errors’ is broken this quickly. Does Zokka fix this?

6 Likes

TL;DR: it’s very easy to ship many-thousand-line PRs multiple times per week or month without breaking your app.


I can’t speak to all possible errors, but I’ve been working at Vendr for over 1.5 years now, on over 600k LOC, and I’ve yet to see a runtime error. There is something we hit once that is runtime-error adjacent, but it’s caused by using web components in a hacky way and not by Elm itself. And since moving to native browser modals, I might be able to clean up the related hacky fix.

For context, we’re deploying multiple times, per person, per day without runtime errors. I’ve also merged 2 giant PRs, +4,211 −4,501 (moving all 600k LOC to native modals) and +5,710 −6,286 (the final PR to move from elm-graphql to elm-gql), and both have gone flawlessly. We’ve also had quite a few other PRs of this size since I’ve been at the company; one I can recall offhand migrated our entire UI theming and styles.

I’m not suggesting that every PR should be this big; rather, that you can do this and your app will keep running fine. I’m saying that you can move fast and not break things™, something I’ve never been able to do at any other company I’ve worked at.


edit:
Because nothing, not even Elm, is a silver bullet. The PR mentioned above to remove elm-graphql almost shipped a runtime bug. There was an elm-gql bug (now fixed) that incorrectly decoded some GQL enums. We only caught it because, this morning before I merged my PR, someone else wrote an E2E test that happened to catch 1 of 4 edge cases.

6 Likes

Thanks for your insights. I’m already a convinced Elmer, but now I’ll have a nice reference to show my colleagues if they raise this issue. I have rarely encountered it for real in my personal projects, but I thought it would happen more often in daily use. Do you use the ad-hoc polymorphism pattern at work, or do you try to avoid it?

Regarding the record equality runtime error check: no, it does not, because that would require a change to the semantics of the language, and Zokka aims to remain 100% compatible with Elm, so it will (at least for now) not be making changes like this to the language itself.

One possible way in which this could be fixed would simply be that any equality check where one or both sides is a function returns False, just like divide by zero is zero: a wrong answer rather than a runtime error. At least this solution would not require changes to the type checking and semantics of equality, so it would be less of a change to incorporate into existing code bases.
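For comparison, Elm already takes the “wrong answer rather than a crash” route for integer division today; in elm repl:

> 1 // 0
0 : Int

Under the proposal (hypothetical, not current Elm), an expression like (\n -> n + 1) == (\n -> n + 1) would likewise evaluate to False instead of throwing.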

1 Like

@rupert,

I just discovered that Mojo (a fast Python compiler that is, somewhat, backward compatible with Python) uses MLIR and, even though it is at the beta development stage, is very fast and can compete with C/C++/Nim/Rust/etc. However, Mojo is very much an imperative language, so one would have to check out the Lean4 implementation to see how well MLIR works for a functional language…

That is correct. Chris Lattner, who is the main person behind LLVM, is also the driving force for MLIR and Mojo. A lot of people are excited about Mojo because Python is such an ugly mess: Mojo brings stronger typing and high performance without bolting on performance libraries written in other languages. Its main intended use case is AI programming, so MLIR opens a gateway into high-performance accelerated compute infrastructure. But it is also a widening of the LLVM compiler infrastructure to better support different compiler front ends, as it reduces the need for every compiler to invent its own high-level IR. You can build your high-level IR in MLIR instead, and there are many “dialects” already available to take advantage of, which save effort and come with many optimisations that you can use without re-inventing them.

I am doing some Python work at the moment. I now dream of something ML/Elm-like that would give me access to the same space - that is, a strongly typed pure functional language, but with vectors/matrices/tensors all compiling down to an accelerated runtime.

Not that I’m aware of. We mostly write pretty boring Elm. Not trying to sell the product, but vendr.com has a free sign-up now, so you can see the app in action. It’s not quite like reading the code, but you can at least see what we’re building and how it compares.

3 Likes

I am doing some Python work at the moment. I now dream of something ML/Elm-like that would give me access to the same space - that is, a strongly typed pure functional language, but with vectors/matrices/tensors all compiling down to an accelerated runtime.

I’ve been dreaming of this for years ^^. I’m hoping Roc can eventually fill that niche, since it has the right bases: a pure-functional core with easy low-level C-like interop.

Meanwhile, you can have a look at Futhark: https://futhark-lang.org/

And my moonshot is hoping that Victor Taelin’s madness (affectionately speaking) with interaction nets ends up being the perfect acceleration platform for purely functional languages.

1 Like

Can you say more about this:

Victor Taelin’s madness (affectionately speaking) with interaction nets ends up being the perfect acceleration platform for purely functional languages.

I think he is referring to this: GitHub - HigherOrderCO/HVM: A massively parallel, optimal functional runtime in Rust

1 Like

Indeed @jxxcarlson , I was referring to his work with HVM and its ecosystem.

I think because MLIR and Mojo are being sponsored by the company Modular, Chris Lattner obviously has a huge reputation because of LLVM, and AI compute is undergoing a sort-of Cambrian explosion right now, there is likely to be sizable industry interest in MLIR and Mojo. Compute used to be x86; then we got GPUs, but Nvidia was smart and built the CUDA infrastructure, so the choice of hardware remained fairly narrow. Now there are lots of different AI chips, and each manufacturer must produce their own compiler tooling. It really makes sense for an open-source, common infrastructure to come into being, as it will save chip makers the cost of re-inventing things so much. You can see a network effect at play here; it’s like a missing piece of the puzzle that brings a lot of things together. It is exciting for someone writing a new compiler front end, because it will enable you to run the same code on so many platforms.

Would it be possible to write an ‘hvm’ dialect for MLIR, perhaps?

HVM looks really promising. Will be interesting to follow.

@rupert,

I had a browse of the GitHub repo for the Lean4 project, looking at the source code and the current documentation, and saw some interesting things:

  1. The language is “self-hosting”, as it is written mostly in Lean with some C++ for the required low-level code.
  2. I couldn’t locate where MLIR code is generated by the compiler, but perhaps someone else with more knowledge knows where that is done.
  3. The generation of the Abstract Syntax Tree (AST) is quite different, in that it depends on the language’s ability to reason and prove things in order to generate it, so it doesn’t much resemble “conventional” language compilers such as Elm’s.
  4. Its priorities seem to be optimizations through proofs, such as eliding bounds checks by proving they are never necessary (because an equivalent check is done by the code or at an outer scope), using dependent types to prove when mutation can be done in place, using dependent types to prove when memory structures can be freed and/or reference counted, etc.
  5. General optimization does not seem to be a priority; it seemingly depends on MLIR for that.
  6. Data types seem to be added to the language not to make it general purpose, but as necessary to make it able to compile itself: the native Int type is an “infinite”-precision type (a BigNum), the Nat type is an unsigned version of Int, and the only native fixed-precision types are unsigned ones; these may have been added just so the compiler can compile itself efficiently…

So while Lean4 is an interesting project and definitely an FP language, there doesn’t seem to be much to be learned that can be applied to an ElmPlus language, as there is no way we would want to add the complexities of theorem proving to it…

I have been playing with the latest 0.7 version of Mojo, and it does indeed do as advertised: it generates code that runs at least as fast as Rust at its best, and faster than Rust when Rust just blows it for some algorithms. LLVM, as used by Rust, can generally be better than GCC at auto-vectorization and the efficient use of SIMD, but is generally slightly worse outside these domains. Mojo’s use of MLIR seems to keep it from “blowing up” in Rust’s worst cases and to generate fairly fast code, although outside of vectorization opportunities it is sometimes a few percent slower than the code generated by GCC.

In the cases where Mojo shines, as in highly vectorized algorithms, it is faster than the same algorithm in Rust…

However, although it greatly improves on its Python roots in speed, static type checking, etc., Mojo is very much an imperative language and (currently) has quite a bit of trouble even expressing pure functional algorithms with persistent data types; this is perhaps largely because it doesn’t allow recursion at the local scope level…
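For contrast, this kind of locally scoped recursion is routine in Elm; a trivial sketch (hypothetical function):

sumTo : Int -> Int
sumTo n =
    let
        -- let-bound helper functions in Elm may recurse freely
        go i acc =
            if i > n then
                acc

            else
                go (i + 1) (acc + i)
    in
    go 1 0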

I wanted to explore Roc, so I “stole” the F# plugin and adapted it to Roc. But I quickly lost interest in Roc when I saw that currying by default was unavailable, on the grounds that currying is confusing to beginners.

Beginner speaking here:
I did find functions applied with an incomplete set of arguments the single most confusing part of reading other people’s code. I have grasped the concept now and find it tempting to apply currying wherever I can, because it feels cool to write more concise code. On the other hand, I don’t see any real benefit of currying other than writing harder-to-read, more concise, feel-cool-about-myself code. Maybe I just haven’t found a suitable use case for currying yet?

On the other hand, I do find it useful to use point-free notation when applying functions to fold or map. But that doesn’t depend on currying, as far as I can tell.
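For example (a small sketch, hypothetical values): a named one-argument function works point-free without any currying, while the common operator-section form is partial application, i.e. currying at work:

-- no currying needed: String.toUpper is already a one-argument function
shouted : List String
shouted =
    List.map String.toUpper [ "a", "b" ]

-- (+) 1 is a partially applied two-argument function, i.e. currying
bumped : List Int
bumped =
    List.map ((+) 1) [ 1, 2, 3 ]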

@benjamin-thomas Why do you like currying so much, that you cannot imagine a language being good without it? What is it actually useful for and not just nice to have?

3 Likes

It’s really not easy to find… but I did some detective work on the GitHub account of one of the authors of that “Lambda the Ultimate SSA” paper and found it here:

It’s definitely in the domain of “nice to have”, in the same sense that having type safety is also nice to have: you can still write programs without it.


EDIT:

Re-reading myself, I realize that I made a mistake in my initial demonstration. I actually find it a little difficult to explain exactly what I like about automatic currying.

In some sense it’s a question of ergonomics. Since in FP the fundamental building blocks are functions, I feel it’s important that we can chain them, combine them, and change them as easily as possible.

Let’s think about it like this: what would an Elm decoder look like without automatic currying? You would have to expand every partial application into an anonymous function (or be forced to create a named function instead).

import Json.Decode as Decode exposing (Decoder, field, maybe, string)

type alias User =
    { firstName : String, middleName : Maybe String, lastName : String }

userDecoder : Decoder User
userDecoder =
    Decode.map3 (\fn md ln -> User fn md ln)
        (field "first_name" string)
        (field "middle_name" (maybe string))
        (field "last_name" string)

vs

userDecoder : Decoder User
userDecoder =
    Decode.map3 User
        (field "first_name" string)
        (field "middle_name" (maybe string))
        (field "last_name" string)

There are many times when the names of a function’s arguments are not that interesting and just add visual noise. There is no value in having to read \fn md ln here, for instance. This could be worked around by creating a specialized constructor function, as sketched below, but that’s just extra work for the programmer and, again, adds no real value.
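That specialized constructor would look something like this (hypothetical name), existing only so the decoder can avoid the lambda:

userFromParts : String -> Maybe String -> String -> User
userFromParts firstName middleName lastName =
    User firstName middleName lastName

-- ...so the decoder becomes: Decode.map3 userFromParts ...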

Also, returning partially applied functions by default is nice because we can easily spot and re-use common bits to create more specialized versions. And the cost of creating those specialized versions is quite low in comparison to defining fully written-out functions.

I’d have to think of a good example for that but I hope that’s clear enough.


Okay so I think I’ve got an okay example: let’s say I want to unit test a bunch of functions.

For the sake of argument, as a first step, let’s say I want to test the behaviour of List.reverse.

But I find the API of elm-test a bit verbose for my use case; I’d rather just give an input list and an expected output list. So I’m thinking of something like this:

[ test_
   [ 1, 2, 3 ]
   [ 3, 2, 1 ]
, test_
   [ 1, 2, 3, 4 ]
   [ 4, 3, 2, 1 ]
]

Now, to make that work, I’ll define this general function:

import Expect
import Test exposing (Test, describe, test)

makeTest description testedFunction expectation input expected n =
    test (description ++ " #" ++ String.fromInt n) <|
        \() ->
            testedFunction input |> expectation expected

Then my test suite would look like this (I could still simplify things further, but we don’t care):

suite : Test
suite =
    let
        test_ =
            makeTest "reverse" List.reverse Expect.equal
    in
    describe "suite" <|
        List.indexedMap (\n f -> f n)
            [ test_
                [ 1, 2, 3 ]
                [ 3, 2, 1 ]
            , test_
                [ 1, 2, 3, 4 ]
                [ 4, 3, 2, 1 ]
            ]

Thanks to currying, to define the (partially applied) test_ function we only give it the arguments that are important in this context:

test_ =
  makeTest "reverse" List.reverse Expect.equal

And it can become a reusable building block.

suite : Test
suite =
    let
        test_ =
            makeTest "length" List.length Expect.equal
    in
    describe "suite" <|
        List.indexedMap (\n f -> f n)
            [ test_
                [ 1, 2, 3 ]
                3
            ]

suite : Test
suite =
    let
        test_ =
            makeTest "add" (\( x, y ) -> x + y) Expect.equal
    in
    describe "suite" <|
        List.indexedMap (\n f -> f n)
            [ test_
                ( 1, 2 )
                3
            ]

Etc.

Without currying, we would have to do this:

test_ input expected n =
    makeTest "reverse" List.reverse Expect.equal input expected n

And repeat input expected n, which we are not interested in, every time.

There are probably a bunch of ways to go about this, but currying makes creating reusable building blocks like this quite easy (and quite nice).

I hope that’s clear enough now :slight_smile:

1 Like

Thanks for that.

I didn’t find any time to explore the compiler implementation until now, but I don’t know that there is that much from Lean4 that one can use in implementing an ElmPlus compiler, even though both are functional programming languages. It seems that the whole Lean4 compilation strategy (which is written in Lean) is built around having theorem proofs in place based on dependent types, which ElmPlus will not have…

It did show me how much easier it is to use the Multi-Level Intermediate Representation (MLIR) than LLVM-IR, in that compiling to a higher, “meta” level seems to mean one doesn’t have to concern oneself with all the details of allocas and the placement of phis needed for properly formed SSA code, as that level of the infrastructure seems to take care of it. This means that MLIR would be an option for a code generator if one wanted to go that route, and it probably contributed to how fast Mojo has been able to develop…

Mojo’s speed advantage over Rust can perhaps be attributed to the fact that all of Mojo’s data types are specified as SIMD types, which contributes to how easily Mojo code can be optimized into SIMD operations, at which LLVM is already better than compilers such as GCC…