I was curious how this implementation compares to the old pipeline implementation in terms of error messages, so I put together this Ellie app (https://ellie-app.com/3yvjBn8YMVQa1) to compare the same mistake made in both APIs. To test this, I made equivalent decoders with the new and old APIs, made sure they worked, and then in both I deliberately tried to decode the email using Decode.int.
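Reconstructed from the error snippets below (so possibly differing from the exact Ellie code), the two decoders look roughly like this; in both, int is deliberately used for the email field where string belongs:

decoder : Decoder User
decoder =
    require "id" int <| \id ->
    require "email" int <| \email ->
    default "name" string "Guest" <| \name ->
    succeed (User { id = id, email = email, name = name, selected = False })

decoder : Decoder UserModel
decoder =
    succeed UserModel
        |> JDP.required "id" int
        |> JDP.required "email" int
        |> JDP.optional "name" string "Guest"
        |> JDP.hardcoded False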
Here are the error messages side by side:
Experimental API
I cannot send this through the (<|) pipe:
34|> require "email" int <| \email ->
35|> default "name" string "Guest" <| \name ->
36|> succeed (User { id = id, email = email, name = name, selected = False })
The argument is:
String -> Decoder User
But (<|) is piping it a function that expects:
Int -> Decoder b
Hint: Want to convert a String into an Int? Use the String.toInt function!
Old API
This function cannot handle the argument sent through the (|>) pipe:
34| succeed UserModel
35| |> JDP.required "id" int
36| |> JDP.required "email" int
       ^^^^^^^^^^^^^^^^^^^^^^^^
The argument is:
Decoder (Int -> b)
But (|>) is piping it a function that expects:
Decoder (String -> String -> Bool -> UserModel)
Not that different I suppose, but it looks like the old API has the advantage of pointing to the line where the mistake was made. I bet the error message of this new API could be improved if the code didn't have lambdas, but I'm not sure how it could be written with comparable clarity.
I’m really surprised that’s your impression! To me these are night-and-day different.
The first one says “You’re passing a String -> Decoder User function to <| but it expects an Int -> Decoder b function instead.” That tells me the \email -> function is taking a String where an Int is expected. I understand all of that!
The second one says “You’re passing a Decoder (Int -> b) when you should be passing a Decoder (String -> String -> Bool -> UserModel) instead.” In the previous example I know what a String -> Decoder User function is, and I can see at a glance that the \email -> anonymous function fits that description. In this one, which of these expressions is the Decoder (Int -> b) exactly? Which one is the Decoder (String -> String -> Bool -> UserModel)? What is the root mismatch here, Int ≠ String or Int ≠ String -> String -> Bool?
Granted, both errors are helpful in that they draw attention to the broken require. (The former has it on the first line of the error message; the latter has it on the last line.) It’s also certainly more helpful that the second one underlines the problematic thing.
But I can only read one of these errors and actually understand what the types in the message are telling me, and I wrote both of the libraries in question!
Maybe I could share a bit about how I would read this, just to share my perspective, not because I think it's conclusive about which one is the better error message (I have no clue which one is better!). I also don't know how much of my perspective is just me and my life experience, and how much of it is 'this is fundamentally how all people think and read'.
So if I were confronted with this experimental API's error, I would discount the value of the String -> Decoder User vs. Int -> Decoder b part. This is because I know that at least one of the functions really is supposed to be a String -> Decoder User, and there are multiple <| pipes in that snippet. It could be pointing to either side of either <|, which covers most of the code snippet. If it were pointing to a particular line, maybe I could focus on that one, but I didn't actually read the error as pointing to the top line. It never occurred to me that the compiler made the problem line the top line of the snippet (that's interesting; I will have to keep an eye out for that moving forward).
I would also doubt that the compiler is really pointing to the problem spot in the code, since my expectation is that, in really dense and abstract code, the mistake can sometimes be in a different spot than where the compiler calculates the type mismatch to be.
Put this all together, and it's not a bad experience, but it is a complicated enough experience that my go-to would be to just eyeball the code until the problem surrenders itself, which for me puts it at about the same level as the old API.
So that version is how elm-format formats it today, and since elm-test makes widespread use of that style, presumably it would continue to be supported. I’ve seen decoders that are so long, they would get pretty hard to read if they were all indented that way, so I definitely prefer the non-indented version myself!
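For anyone comparing, here is a toy two-field decoder in both styles (a sketch, assuming the proposal's require and a two-argument User constructor):

-- indented, roughly as elm-format nests the lambdas today
decoder =
    require "id" int <|
        \id ->
            require "email" string <|
                \email ->
                    succeed (User id email)

-- non-indented, keeping every continuation at the same level
decoder =
    require "id" int <| \id ->
    require "email" string <| \email ->
    succeed (User id email)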
Do you think if both styles worked, you’d choose the indented style in practice?
We had this problem with large records; it wasn't nice.
At first the non-indented version is a lot more difficult to parse; there is a lot happening in a packed space.
Then after a few looks it's not too bad, and the benefit of not having runaway indentation is huge. Still not a great API in Elm, unfortunately.
Just for fun and exploration, if you are willing to start with a default record, you could use pipes like:
decoder =
    succeed newUser
        |> decodeNext (field "first" string) (\r v -> { r | first = v })
        |> decodeNext (field "middle" string) (\r v -> { r | middle = v })
If Elm had automatic setter functions, this could be quite nice.
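decodeNext is not an existing library function; a minimal sketch of how it could be defined, assuming only Json.Decode.map2:

import Json.Decode exposing (Decoder, map2)

-- run the record-so-far decoder and the field decoder,
-- then fold the new value into the record with the setter
decodeNext : Decoder a -> (b -> a -> b) -> Decoder b -> Decoder b
decodeNext fieldDecoder setter decoderSoFar =
    map2 setter decoderSoFar fieldDecoder

Putting the accumulated decoder last is what lets it slot straight into a |> pipeline as above.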
As feedback from beginners was explicitly requested:
I understood the example code right away and had an “aaaahh” moment because:
This style of writing decoders resembles “normal” elm code much more than the current syntax. It was intuitively obvious that I now had access to all previously decoded values, just like I do while pattern matching.
I have written a small Elm app for internal tooling so far, but did not understand the current syntax well enough (even after three tries) to be able to decode a type where one value depends on two others (see the sketch after this list for that case).
I don’t really mind the additional lines of code, as I think it fits with the rest of Elm, which seems to favor being explicit over being concise. Also, I am sure elmjutsu and the other awesome plugins would be able to autocomplete a reasonable amount of code.
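To make the "one value depends on two others" case concrete, here is a minimal sketch, assuming the proposal's require and made-up field names:

decoder : Decoder { start : Int, end : Int }
decoder =
    require "start" int <| \start ->
    require "length" int <| \length ->
    -- end is computed from two previously decoded values
    succeed { start = start, end = start + length }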
Hi there,
I like the new proposal and the benefits it brings. For discussion I just wanted to bring up that shadowing might be an issue here, just because (assuming a large data structure of, say, 10 fields) you end up with a lot of nesting where each outer scope “pollutes” the inner ones. While the theoretical likelihood of name clashes rises with each field, I could very well see that this isn't an issue in practice, because:
Field names will be different so name clashes are unlikely and
The elm compiler won’t let you shadow anyway.
However, with a lot of let bindings in there, name clashes could come up unexpectedly for intermediate results (e.g. unwrapping Maybes or so) and might confuse beginners.
So maybe something to include in the discussion. Another thing that would be interesting is how Haskell behaves with do notation. Are the scopes somehow cleared there (which would make do more than syntactic sugar), or does it have the same behavior and act like nested functions? In the latter case, maybe shadowing is not a big deal in practice.
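To illustrate the kind of clash described, a sketch with made-up names (assuming the proposal's require; in Elm this is a hard compile error rather than a silent shadow):

decoder : Decoder { total : Int, items : List String }
decoder =
    require "total" int <| \total ->
    require "items" (list string) <| \items ->
        let
            -- COMPILE ERROR in Elm: this `total` shadows the one
            -- bound by the outer lambda
            total =
                List.length items
        in
        succeed { total = total, items = items }

As for Haskell: do notation desugars to nested lambdas, so rebinding a name there simply shadows the earlier binding (GHC at most warns with -Wname-shadowing), which suggests shadowing is indeed not a big deal in practice.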
To me the continuation style looks much easier to use and understand, especially when using optional values.
I also think that verbosity is not a bad thing here, when it makes the code so much better.
(I’m a complete beginner in Elm; I just learned about the language a few days ago and have now been reading a lot about it. I have been programming as a hobby for 25+ years, but almost nothing in a functional style.)