Best way to write intensely monadic code in Elm

What is the best way to write code that involves long chains of .map and .andThen with anonymous lambdas in Elm? Such code might arise when you are doing a lot of IO with tasks, or composing lots of functions that produce a Result or Maybe.

Elm does not have ‘do’ notation like Haskell, so I look instead for a good idiomatic Elm style of coding that is explicit and readable.

There is a previous discussion on this topic to be found here:

The OP there gives this as an example of code that is starting to get hard to read in Elm:

extractTag : String -> Result (List Parser.DeadEnd) ( Tag, String )
extractTag str =
    Parser.run openTagParser str
        |> Result.andThen 
            (\( name, attributes, afterOpenTag ) ->
                Parser.run (insideStringToParser name) ("<" ++ name ++ ">" ++ afterOpenTag)
                    |> Result.map (\( inside, rest ) ->
                        ( { name = name, attributes = attributes, inside = inside }, rest )
                    )
            )

The problem with this code is that, since values bound by the outer lambdas are needed deeper within the inner lambdas, the .andThen and .map calls end up nested inside each other. When elm-format is applied to this, it creates a deeply nested structure that pushes everything further to the right.

Code I am currently working with is pushing things right off the RHS of the page, and that is just unworkable in Elm.

Here is my attempt to write this more cleanly:

{-| Extracts the first tag after the opening tag.
-}
extractTag : String -> Result (List Parser.DeadEnd) ( Tag, String )
extractTag str =
    Parser.run openTagParser str
        |> Result.andThen extractTagInner
        |> Result.map toTagAndRemainder


{-| Extracts a tag and attributes from inside a set of tag markers.
-}
extractTagInner :
    { name : String, attributes : String, afterOpenTag : String }
    -> Result (List Parser.DeadEnd) { name : String, attributes : String, inside : Bool, rest : String }
extractTagInner { name, attributes, afterOpenTag } =
    Parser.run (insideStringToParser name) ("<" ++ name ++ ">" ++ afterOpenTag)
        |> Result.map
            (\{ inside, rest } ->
                { name = name
                , attributes = attributes
                , inside = inside
                , rest = rest
                }
            )


{-| Converts the results of parsing a tag into a tuple of tag and remaining unparsed text.
-}
toTagAndRemainder :
    { name : String, attributes : String, inside : Bool, rest : String }
    -> ( Tag, String )
toTagAndRemainder { name, attributes, inside, rest } =
    ( { name = name
      , attributes = attributes
      , inside = inside
      }
    , rest
    )

It is more work to do it this way, but things I like about it are:

  • There is a single pipeline which does not nest to ever deeper levels at the top.
  • The functions being pipelined have descriptive names, making the entire pipeline readable as a sequence - what does this do? Oh, it opens a tag, extracts the inner tag, and yields a tag and remainder, just like the pipeline says.
  • When params must pass through, I pass them through as record fields, so that they are named.

In general, when I have something like this:

thing
    |> Thing.andThen (\outer -> f outer |> ThingCons
        |> Thing.andThen (\inner -> g outer inner |> ThingCons)
    )

I am turning it into something like this:

thing
    |> Thing.andThen (\outer -> { inner = f outer, outer = outer } |> ThingCons)
    |> Thing.andThen (\{ inner, outer } -> g outer inner |> ThingCons)

And then breaking out the anonymous lambdas into named top-level functions, so that I can have good names and somewhere to put a comment. It is quite a verbose style, so I wonder what anyone thinks about that?
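
To illustrate, here is a made-up end-to-end version of that shape with the lambdas pulled out and named, using Maybe in place of the pseudocode Thing above (every name in this sketch is invented for illustration):

flattened : Maybe Int -> Maybe String
flattened thing =
    thing
        |> Maybe.andThen pairWithInner
        |> Maybe.andThen combine


{-| Derives the inner value from the outer one, keeping both under names. -}
pairWithInner : Int -> Maybe { inner : Int, outer : Int }
pairWithInner outer =
    Just { inner = outer * 2, outer = outer }


{-| Combines the named intermediate values into the final result. -}
combine : { inner : Int, outer : Int } -> Maybe String
combine { inner, outer } =
    Just (String.fromInt outer ++ "/" ++ String.fromInt inner)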

There is this package, which lets you collect the passthrough variables into nested tuples. It seems aimed at letting you continue to work with anonymous lambda functions but break up the nesting into a flatter pipeline:

For example:

computeAnswer : Maybe Int
computeAnswer =
    getA
        |> Maybe.andCollect getB
        |> Maybe.andCollect (\( a, b ) -> getC a b)
        |> Maybe.andThen (\( ( a, b ), c ) -> solve a b c)

What is your approach to doing this?


So here is a structure that I have been playing around with today:

type Procedure s x a
    = State (s -> ( s, T s x a ))

type T s x a
    = PTask (Task.Task x (Procedure s x a))
    | POk a
    | PErr x

The idea is that this combines 3 things together: Result, Task, and the state monad. It has the usual combinators for map, andThen, andMap and onError. It has constructors to build one from a pure value, an error or a task. It also has the combinators from the state monad for get, put and modify - I will put the code up somewhere when it's ready.
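
To give a feel for how this hangs together, here is a rough sketch of what some of those combinators could look like for this type (my illustration only - the actual implementation may differ):

pure : a -> Procedure s x a
pure a =
    State (\s -> ( s, POk a ))


err : x -> Procedure s x a
err e =
    State (\s -> ( s, PErr e ))


task : Task.Task x a -> Procedure s x a
task t =
    -- Wrap the task so that, when it resolves, its value is lifted back into a Procedure.
    State (\s -> ( s, PTask (Task.map pure t) ))


get : Procedure s x s
get =
    State (\s -> ( s, POk s ))


modify : (s -> s) -> Procedure s x ()
modify fn =
    State (\s -> ( fn s, POk () ))


andThen : (a -> Procedure s x b) -> Procedure s x a -> Procedure s x b
andThen fn (State step) =
    State
        (\s ->
            case step s of
                ( s1, POk a ) ->
                    let
                        (State next) =
                            fn a
                    in
                    next s1

                ( s1, PErr e ) ->
                    -- Errors short-circuit, skipping the rest of the chain.
                    ( s1, PErr e )

                ( s1, PTask t ) ->
                    -- Defer: continue the chain once the task has produced a Procedure.
                    ( s1, PTask (Task.map (andThen fn) t) )
        )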

Since it is a state monad, there is no need to pass the state around explicitly down a pipeline. That can certainly help and removes a common need for each step to return something like a (state, val) pair.

The idea is that this can be used to write procedural programs that can, at any time, request some IO be done (as a Task), and can also at any time raise an error. Since Task x a resolves to Result x a, these capabilities are quite well matched. It could of course all be done with Task.fail and Task.succeed, but that requires an unnecessary trip around the Elm runtime, so I added the POk and PErr cases to short-circuit that.

Now if I have code that is pure, but running inside a Procedure, I can make it impure by adding on a task wherever I like. Say I have a Task that logs to the console, now I can add logging statements quite freely within the program.

The idea is that this is the “railway” to run programs which can perform IO and fail (and recover) with errors. For example, a cli tool that is going to read and write some files, interact with the user on the console, and that may find errors within the files on which it operates.

If this was Haskell there would be no need to combine these monadic things together as a custom type, since the compiler knows they all share the same typeclass and gives me syntax to work with that. But in Elm, there is something quite satisfying also about building my own structure with a certain style of program in mind.

It needs to support more than one error at a time too, since I will probably run programs with this kind of overall shape:

Procedure Model (Nonempty.List Error) ()

Here is a little example of using it, which I have been running just to check that the flow runs as expected, errors recover correctly, and so on:

type alias Model =
    { messages : List String }

example : Procedure Model String ()
example =
    task (Task.succeed "success1")
        |> andThen push
        |> andThen (\_ -> err "error")
        |> andThen push
        |> onError recover
        |> andThen (\_ -> pure "success2")
        |> andThen push
        |> andThen (\_ -> task (Task.succeed "task"))
        |> andThen push
        |> andThen (\_ -> task (Task.fail "failed task"))
        |> andThen push
        |> onError recover
        |> andThen (\_ -> get)
        |> map (\s -> Debug.log "state" s)
        |> andThen (\_ -> modify (\state -> { state | messages = List.reverse state.messages }))


push : String -> Procedure Model String ()
push msg =
    modify (\state -> { state | messages = msg :: state.messages })


recover : String -> Procedure Model String ()
recover msg =
    pure ("recovered " ++ msg) |> andThen push


main =
    program { messages = [ "initial" ] } example

This is the update loop, the evaluator that runs these as programs:

evalTasks : Procedure s x a -> s -> ( s, Cmd (Procedure s x a) )
evalTasks (State io) state =
    case io state of
        ( innerS, PTask t ) ->
            ( innerS
            , Task.attempt
                (\r ->
                    case r of
                        Ok x ->
                            x

                        Err e ->
                            err e
                )
                t
            )

        ( innerS, POk x ) ->
            ( innerS
            , Cmd.none
            )

        ( innerS, PErr e ) ->
            ( innerS
            , Cmd.none
            )

This evaluates Tasks until it runs out of them, at which point it yields Cmd.none and the program is complete.

I will probably change this a bit, since I cannot check == Cmd.none at runtime in Elm, and I want a more explicit way to signal program termination.
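
For anyone wondering how evalTasks gets driven, here is a minimal sketch of how a program function like the one used in the example above could be wired up with Platform.worker (my guess at the wiring, not the actual code):

program : s -> Procedure s x a -> Program () s (Procedure s x a)
program initialState proc =
    Platform.worker
        { init = \_ -> evalTasks proc initialState

        -- Each resolved task arrives as the next Procedure to evaluate.
        , update = evalTasks
        , subscriptions = \_ -> Sub.none
        }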


This is a very interesting topic. Your approach looks pretty good to me, but it does kind of rely on having full control over the code. The way you refactored extractTag was kind of attempting to do so without changing openTagParser or insideStringToParser.

Personally I think the original code has three problems:

  1. insideStringToParser requires that you change the input string, and so
  2. the type manipulations are occurring within the Result monad as opposed to the Parser monad.
  3. The parsers are simply returning the wrong types. The insideStringToParser name parser should return a ‘Tag’, not just the inside, which then has to be converted into a Tag using the externally available name and attributes.

So your refactor is kind of done within the constraints of “don’t change the code we haven’t seen” and attempting to make “the entire pipeline readable”. Except we end up with this:

Parser.run openTagParser str
        |> Result.andThen extractTagInner
        |> Result.map toTagAndRemainder

Okay, but toTagAndRemainder is really just shuffling types around, so it feels wrong to have that at the same level as openTagParser and extractTagInner. So I feel the refactor achieved the sub-goals but lost sight of the original goal, which is to have clean/readable/maintainable code.

So, I guess the lesson here is, if you’re having trouble with monadic code being shuffled to the right, then you likely need to step back and give yourself a better abstraction, such as your Procedure and T types, rather than try to contort the original with extra definitions with the goal of getting to a single readable pipeline. It sort of has a feel of people attempting to get definitions down to their “point-free” normal form, and losing sight of the fact that the actual goal is readable code.

I realise that your main topic here is monadic-heavy code in general, rather than extractTag in particular, so I hope I haven’t gone off on an unnecessary tangent here.

Yes, I agree. I kept the original parser code as it was as much as I could, because I was really looking for something to demonstrate my refactoring into a pipeline, and wanted to try it on something a bit messy that I did not write myself. But I had the same thoughts - I would probably write it quite differently. Starting out with the right kind of approach in mind would probably have influenced the way I wrote that code for the better, I think.

Here is an example of some procedural code that is entirely my own work, and I think has come out quite well in terms of readability/maintainability: elm-realtime/packages/functions/src/EventLog/SaveChannel.elm at master · rupertlssmith/elm-realtime · GitHub

I get really frustrated when asking any community “How would you do X?” and get back responses of the form “Don’t do X.”

So let me first say I personally like

  • Your accumulating record approach. I do not love the tuple approach. While the tuple approach is clever and succinct, I really prefer records to tuples on the basis that names are good and nested tuples are noisy and hard for me (personally) to read.
  • Named functions which can be defined in a let binding if you want to preserve locality and avoid polluting the module top-level.

Next, I beg your :folded_hands: forgiveness as I give an answer of the form “You probably don’t need monadic binds as much as you think, so this may not be that big of a problem”.

Before working professionally in a language with do notation, I was one of the people that silently (or maybe not silently :laughing:) wished Elm offered it. I have come to deeply dislike do notation after spending

  • 6 months writing Scala 30% of the time
  • 18 months writing Haskell 20% of the time (Elm the other 80%)
  • 24 months writing PureScript 95% of the time and Haskell 5%.

do was abused more often than not at the three companies where I worked. Other engineers with far more professional Haskell / PureScript experience likely have a more mature perspective, but I think that my grievances speak directly to your question.

andThen, or the monadic bind, in my opinion should only be used to express that one computation depends on the results of a prior computation. If there is NO dependency then, while do can be used for convenience, it is not truthful / declarative.

In surveying over 300K lines of PureScript / Haskell code across 4 large applications, I believe the most genuinely needed monadic binds I ever saw in any properly sized function was 4, and 1 on average! In other cases,

  1. No Monad was even necessary because it collapsed to identity!!!
  2. Functor was the real operation.
  3. Applicative was the real operation.
  4. The engineer did not understand the State monad or the State monad transformer.
  5. The function was too large.

I have some PureScript/Haskell pseudocode below. I intentionally use parentheses rather than $ (which is like <|) and # (which is like |>), and use map rather than <$> and <#>, to try to make it a little more readable for Elm developers. Apologies for the mishmash of code style (for anyone looking to potentially hire me in the future as a Haskell/PureScript engineer: this is not how I write Haskell/PureScript. :grinning_face:).

1. No Monadic Bind Needed

noWorkDone = do
  x <- someResult
  pure x

-- the above is basically like the following in Elm
noWorkDone = 
  someResult |> Result.andThen Ok 

-- so... um... this doesn't do anything! It unwraps the result and then re-wraps it 
-- without doing any work
noWorkDone = someResult 

2. Functor Masquerading as Monad

actuallyMap = do
  x <- someResult
  pure $ f x

-- in Elm this mistake would be written 
actuallyMap = 
  someResult |> Result.andThen (\r -> Ok (f r))

-- which can be simplified to just map
actuallyMap = 
  Result.map f someResult 

3. Applicative Masquerading as Monad

actuallyApplicative = do
  a <- aResult
  b <- bResult
  c <- cResult
  pure { a, b, c }

-- the above can be simplified in PureScript
actuallyApplicative =
  lift3 (\a b c -> { a, b, c }) aResult bResult cResult

-- or in Elm
actuallyApplicative = 
  Result.map3 (\a b c -> { a = a, b = b, c = c }) aResult bResult cResult

3.a Combinations of Functor and Applicative

The function below looks like it is using 3 monadic binds. But it is really only using one monadic bind.

mostlyApplicative = do
  a <- aResult
  b <- bResult
  c <- f a b
  pure { a, b, c }

-- re-written in PureScript
mostlyApplicative = do
  Tuple a b <- lift2 Tuple aResult bResult
  map (\c -> { a, b, c }) (f a b)  

-- in Elm... this is messy and should be re-framed but I am just trying to make
-- a point of showing only one real `andThen` needed
mostlyApplicative = 
  Result.map2 Tuple.pair aResult bResult
     |> Result.andThen (\(a, b) -> Result.map (\c -> { a = a, b = b, c = c }) (f a b))

4. Misunderstanding State

mostlyState = do
  textFromUser <- readLine
  currentState <- State.get
  let newText = currentState.accumulatingText <> textFromUser
  State.put (currentState { accumulatingText = newText })

-- the code above performs a get only to then call put which makes the update appear
-- effectful/monadic when the state update is really pretty pure. There is only one 
-- monadic bind. 
mostlyState = do
  textFromUser <- readLine
  State.modify (\state -> state { accumulatingText = state.accumulatingText <> textFromUser })

Now to Your Sample

I cannot be certain from your code example of the exact semantics and intention of the code. I also cannot tell which Parser library you are using, so I cannot make a recommendation for what you should do. However, making some serious assumptions, I might try to “stay inside the Parser monad” (rather than breaking out of the monad by running it), and then I might write this with a single Parser.andThen, because the way I have written it there is only one data dependency: the closeTagParser needs to know the open tag's name to validate a matching close.

import Parser exposing ((|.), (|=), Parser)
import Parser.Extras as Extras

type alias Tag =
    { openTag : OpenTag, contents : List Content }

type Content
    = TaggedContent Tag | TextContent String

type alias OpenTag =
    { name : String, attributes : List ( String, String ) }

openTagParser : Parser OpenTag
openTagParser = Debug.todo "Not implemented"

-- Parse a close tag, failing if the tag's name does not match the open tag
closeTagParser : OpenTag -> Parser ()
closeTagParser { name } =
    Parser.symbol "</" |. Parser.token name |. Parser.symbol ">"

contentParser : Parser Content
contentParser = Debug.todo "Not implemented"

tagParser : Parser Tag
tagParser =
    openTagParser
        |> Parser.andThen
            (\openTag ->
                Parser.succeed (\contents -> { openTag = openTag, contents = contents })
                    |= Extras.many contentParser
                    |. closeTagParser openTag
            )

So I think that if you only use andThen when there is a genuine data dependency and if you keep your functions “right-sized” then you will avoid most problematic nesting. That said

  • right-sized is an obnoxious value judgment.
  • what constitutes problematic nesting is a matter of taste. I actually didn’t think your original example looked bad, so…
  • there are obviously going to be cases where this is not true and where there are genuinely five or more monadic binds in a function. I suspect that you will be happy with your suggested approach of
    • Decomposing with named functions (and defining them in the let of a function to prevent module top-level scope pollution).
    • Using your record approach to accumulate interim results.

That was fascinating, thank you. I suppose the trouble in Haskell is that do notation is convenient, and you can treat it a bit like a let..in block to define a bunch of intermediate variables that you later use.

For example, I wrote this:

let
        camera =
            Animate.value model.camera

        frame =
            Animate.value model.frame

        insertAt =
            Camera2d.pointToScene camera frame pos
in

Instead of this:

let 
        insertAt =
            Camera2d.pointToScene
                (Animate.value model.camera)
                (Animate.value model.frame)
                pos
in

Just because the extra names help document things and also break down the steps in a way that is easier for me to think about.

Since we do not have that syntactic sugar of do notation in Elm, would I be right to suggest that there is less temptation to use an andThen when a map would do?

I agree that was fascinating. I feel like a review of 300k lines of PureScript/Haskell deserves at least a full blog post (were there any other interesting things you found?). I say a blog post because this is a pretty good point in favour of not adding monadic-style syntax to Elm, and it would be good to be able to point people to it.

I think that if there were an elm-review style project for purescript/haskell, you could code up all 4 of your patterns as review rules (and even have automated fixes for them).

The one I’m a little unsure about is 3. Here I find the do notation version at least as readable as the others. In 3a I frankly find the middle version (“re-written in PureScript”) pretty unreadable (though I do think it’s instantly recognisable as ‘type-munging’ code, for which you tend to really only have to look at the type signature). However, I think in most actual cases of this pattern there is a bit of actual logic as well, and that can get lost amongst all the “type munging”. Anyway, in 3a I find that the core thing to understand about the code is that c is basically the result of applying f to a and b (albeit then extracting the result), and I find that much easier to see/understand in the do notation version than in either of the other two. Though I guess if you have more realistic names than c and f it’s perhaps clearer, e.g. if c is actually distance, f is getCartesianDistance, and perhaps a is startPoint and b is endPoint, or something like that.

A nicer way to write 3a in Elm would be:

mostlyApplicative f aResult bResult =
    pure (\a b -> { a = a, b = b, c = f a b })
        |> andMap aResult
        |> andMap bResult

That does look nicer, but it is not quite correct, because f a b returns a Result, not the type of c directly.


Code here: elm-imperative/src/Imp.elm at master · the-sett/elm-imperative · GitHub

Renamed it to Imp as it's shorter.

Things I am still thinking about:

  • Might shift it to use elm-procedure's Procedure instead of Task, or maybe make two versions, one for Task and one for Procedure. The reason is that Elm only has so many Tasks and users cannot write new ones, without hacking kernel code anyway. elm-procedure provides a way of effectively writing new tasks with ports, so I can actually write some useful IO this way - I prefer it to the HTTP task hack.

  • Operations on multiple Imps may produce multiple errors, and generally I want to collect multiple errors rather than fail on the first one. Maybe all that is needed is a fold-like function to do the accumulation added to all operations over >1 Imp.

Instead of:
sequence : List (Imp s x a) -> Imp s x (List a)

With error folding:
sequence : (x -> y -> y) -> List (Imp s x a) -> Imp s y (List a)

This will enable me to write programs with type: Imp Model (Nonempty.List Error) ()

Now I am scratching my head trying to think of a neat way to write it, but yes, the original is about as clean as it gets.


I wouldn’t write a blog post because

  • I don’t feel qualified to be a source that anyone cites or references. :laughing:
  • My review of the Haskell codebases
    • Was incredibly informal. I spent a total of about 4 hours across multiple days (months apart) randomly sampling various instances of do notation and studying how many of them were entirely applicative (BTW, PureScript has ado for that), just functors, etc.
    • To be considered citation-worthy would probably require me sharing the origin of that code to give it credibility, and that would be rude.

Also, I am certain Evan is familiar with all of this and carefully considered the tradeoffs. You don’t need higher-kindedness or Monads to support do notation, and Evan knows that. LiveScript, Scala, .NET LINQ expressions, and PureScript feature a kind of do notation that is just syntactic sugar with no strict coupling to monads or a requirement for higher-kindedness. I’m pretty sure Roc did at some point as well. Elm could pretty easily support do notation like this:

example = do.Result 
  a <- aResult
  b <- bResult
  c <- cResult
  Ok { a = a, b = b, c = c }

where the construct is do.MODULE_NAME_OR_ALIAS and the module must publish an andThen function of the form andThen : (a -> TYPE b) -> TYPE a -> TYPE b. This is similar to how .NET LINQ expressions actually work with SelectMany (see the .NET library Sprache, which allows parsing with LINQ expressions even though most people think LINQ expressions are only for enumerables).

I am pretty sure that Evan explicitly chose not to include do notation. I could be wrong. I am often wrong. :laughing:

The central thrust of my post, and I think I buried the lede there, is

  • Many things that look monadic are actually either doing nothing, mapping / <$> / <#> (Functor), or applying / andMapping / <*> / mapN / liftN (Applicative).
  • I have observed significant amounts of code that appears to fixate on Monadic operations (bind, andThen, >>=, =<<, etc.) as soon as there is one monadic operation in the data pipeline.
  • Therefore, I was trying to point out that if you are worried about the nesting incurred in “intensely monadic code”, you may find there are many cases where fewer monadic operations are required than it seems.

So I felt qualified to share the re-writing rules as one way to relieve pressure on monadic nesting. BTW, a lot of the re-writing comes either directly from the Monad laws or can be derived easily from those laws.
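
For anyone curious where those rewrites come from, here are the two identity laws stated for Result in Elm - a sketch of my own for illustration; in each function the two expressions in the pair are always equal:

-- Right identity: binding Ok does nothing. This is exactly the "noWorkDone" case above.
rightIdentity : Result x a -> ( Result x a, Result x a )
rightIdentity result =
    ( result |> Result.andThen Ok
    , result
    )


-- Left identity: wrapping a value and immediately binding f is just f applied to the value.
leftIdentity : (a -> Result x b) -> a -> ( Result x b, Result x b )
leftIdentity f x =
    ( Ok x |> Result.andThen f
    , f x
    )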

The rest of what I wrote was cranky old man “get off my yard” style where I took issue with what I personally believe to be do abuse.

I am not the only person who feels that way but I am probably one of the least qualified to have an opinion. See also

Here I find the do notation version at least as readable as the others.

Absolutely! do is easier for everyone to read at first glance. It is a really great syntactic construct for certain cases, and I would be lying if I said that I haven’t reached for it a lot. As you mention, linter rules / static code analysis can also catch the problematic cases. So one could argue that it is like fire: dangerous but useful enough to warrant use with caution. I just hold a different opinion that it is a foot gun and you absolutely will shoot yourself with it.

do notation really seduces engineers into thinking imperatively and I just haven’t seen a lot of discipline actually applied in the wild because we all have deadlines. First off, if you are running in IO (Haskell) or Effect (PureScript) or Aff (PureScript) then the code in those monads / monad stacks is pretty much imperative. Full stop. Now, if you run in a free monad DSL or in a tagless final style the code can be abstracted away from the concrete effects (and unit testable) BUT with do notation engineers still fall back into the habit of structuring their functions like imperative statement lists; incurring a lot of the badness we use functional programming to escape. Sure. You still get a FAR more descriptive type system. You get a sound type system. But your code gets hard to test and reason about.

I’ve seen way way way too many functions of the form

doThing :: forall m. 
     PersonRepository m 
  => MonadThrow MyError m 
  => MonadState MyState m 
  => m Unit

The function doThing above declares that it may use a PersonRepository to load / save person data, it may throw errors of type MyError, it may read or update the state MyState… but it takes no arguments and it returns… :drum: Unit (() in Haskell and Elm)!!! The relationship between explicit function inputs and explicit function outputs is part of what makes functional code so transparent. However, monad stacks and do notation can be used to push these data relationships into the background in the name of convenience / ergonomics which sacrifices transparency (BTW, JavaScript is also crazy convenient — I’m not sayin’, I’m just sayin’). So now people can write code like

biggerThing :: forall m.
     PersonRepository m 
  => MonadThrow MyError m 
  => MonadState MyState m
  => TextInterface m 
  => m Unit  
biggerThing = do
  doThing
  doOtherThing
  x <- anotherThing
  writeString x

The code above could throw errors, update the state, access a repository, or read and write text to a virtual interface, and so on. It is absolutely functional, deterministic, and it can be tested. One can implement versions of the type classes in some concrete monad (ex. MyTestMonad) that is pure and allows mocking data and all that. However

  • the tests are harder to write because that concrete test monad is a pain in the backside to author.
  • the data dependencies have been pushed into the background. We no longer have explicit function inputs and outputs. We aren’t explicitly dealing with Result but rather just knowing that the monadic computation may fail when eventually run. We aren’t explicitly seeing a state get updated but rather knowing implicitly that the state may be updated.

I much prefer placing as much logic as possible into pure functions that are easier to test and reason about and then lifting those functions into effects with the applicative. Kind of functional core / imperative shell style. Sometimes stuff is monadic and you cannot avoid it. But I like to try and I find do makes it too easy to fail.

Yikes! Sorry for the rant!!!


This is certainly true - and I think you make a good argument for Elm not having do notation. Even if I take code that uses too many andThen and refactor it to remove many of them, I am still potentially left with code that has too many anonymous lambdas in it, and I just don’t find that readable. I try to use lambdas sparingly.

The particular code I find unreadable looks like this:

thing
    |> andThen (\a ->
        ...
            |> andThen (\b -> 
                ...
                    |> andThen (\c -> 
                        ...
                            |> andThen (\d -> 
                                ...
                                    |> andThen (\e -> 
                                        ...
                                           |> andThen (\f -> 
                                               ... and so on right off the rhs of a decently sized code editor which never helps does it?

Which doesn’t seem quite so bad in do notation, because at least it doesn’t flow off to the right. To become reasonable code, in my view, it needs flattening out, as much as it needs named functions and the unnecessary andThens reduced out.

I am not overly upset by the occasional unnecessary andThen, but I don’t think Elm incentivises that the way do notation does, so new code I write will tend to avoid it and use the available combinators more precisely.

Thanks for this detailed reply. I definitely think you have moved me (quite significantly) more to the “do notation is not worth it” camp.

With respect to the deeply nested code, I think we’re converging on the opinion that it’s something like a code smell, and you perhaps wish to back off a little and re-consider the surrounding architecture that has led you down this path,
and/or consider whether you need quite as many monadic operations. I can easily see that you may have wrapped up some function in Result.map, for example, that could be called directly.

In the mostly applicative case

mostlyApplicative = 
  Result.map2 Tuple.pair aResult bResult
     |> Result.andThen (\(a, b) -> Result.map (\c -> { a = a, b = b, c = c }) (f a b))

The problem is mostly that the computation both uses a and b and returns them as part of the result as well. Otherwise it would be relatively easy to chain the operations and just return the result of f a b. If you really must keep that, I think you’re as well making an auxiliary function:

mostlyApplicative = 
  let
     combine (a, b) =
        f a b
           |> Result.map (\c -> { a = a, b = b, c = c })
  in
  Result.map2 Tuple.pair aResult bResult
     |> Result.andThen combine

But again, reconsider: does the result really need a and b separately from c? If so, would it make sense for f to return the full record?
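
If f could be changed that way (purely hypothetically), the whole thing collapses to a single andThen, reusing the aResult, bResult and f placeholders from the examples above:

-- Hypothetical alternative where f builds the whole record itself,
-- returning the full { a, b, c } record wrapped in a Result.
mostlyApplicative =
    Result.map2 Tuple.pair aResult bResult
        |> Result.andThen (\( a, b ) -> f a b)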

For those wondering, a quick reminder about the do notation :wink: GitHub - laurentpayot/purescript-for-elm-developers: PureScript crash course targeted at Elm developers

Here is my code that merges in the ideas from elm-procedure. It's still a work in progress and I need to write some tests to check that it works fully as expected. The last experiment was called ‘Imp’ and this one is called ‘Proc’. The Imp code only works over Task, but the new code lets you write task-ports using a pair of Cmd+Sub for the request and response. It was not at all straightforward to wrap elm-procedure like I did with Task in Imp; it required fully integrating the two ideas together.

With this you can write procedural code that threads state throughout, gives you success and error railways, and lets you run built-in Tasks or write your own new tasks using ports. Since any new task you write is also a Proc, all of these capabilities are available to help write those tasks, so if necessary any degree of complexity can be built around a set of ports and wrapped up and presented as a task. Think of it as Elm effects modules in user space.
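
For readers who have not seen the technique, the raw plumbing behind a task-port is just an ordinary outgoing/incoming port pair, roughly like this sketch (the port names and payload types are hypothetical; Proc and elm-procedure wrap this request/response pairing so that it can be chained like a Task):

port module LogPort exposing (logRequest, logResponse)

import Json.Decode as Decode
import Json.Encode as Encode


-- Outgoing: ask the JavaScript side to perform some IO (here, writing a log line).
port logRequest : Encode.Value -> Cmd msg


-- Incoming: the JavaScript side replies when the IO has completed.
port logResponse : (Decode.Value -> msg) -> Sub msg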

Next I am going to look at elm-concurrent and see if there are ideas there that I need to consider when executing many tasks simultaneously. Also build up some more examples, and think about whether helper functions should be provided for building loops and so on.

I didn’t see anything called elm-concurrent, but I think elm-concurrent-task should also be on your radar, if that’s not what you were referring to.

I’m curious as to how this compares to, say, elm-pages scripts, partly because elm-concurrent-task is based on that. I haven’t written particularly large scripts in it, but the small ones I write (less than 500 LOC) are usually fairly straightforward and not terribly nested.

Yeah, that is what I meant, elm-concurrent-task. I am a bit confused why it needs its own re-implementation of all the Elm tasks, since Elm Tasks can already run concurrently. The non-standard port definitions and use of the “port funnel” technique make it unappealing to me, but I see it does some funky things to avoid overflowing the stack. So I am curious whether it can all be done with regular Elm Tasks and elm-procedure-like task ports with Cmd+Sub, but still be possible to be highly concurrent.