The third post in the series is ready, this time finally tackling Monads!
I really appreciate how friendly and welcoming the Elm community always is. Thank you all in advance for your positive and constructive feedback, and please let me know if something is unclear or if you would have expressed it in a better way!
I've really enjoyed reading all 3 of these articles, they're very clear and build up the concepts gradually, thanks for writing them. I've read about most of the concepts before, but they haven't stuck because I didn't understand why these properties were important. The Elm examples gave me a bit more context, so hopefully they'll stick better this time.
I have an esoteric question regarding the Monad definition: you need >> to define the monad, but if >> can be defined in terms of >>=, why do you need to define >> at all? Why do we even care about Mr. Pointy, isn't it just a specific case of bind?
Actually, that is a great question! As you mentioned, once we define >>= we get >> for free, so why should we care? Well, we could say it is just a utility function for when we want to perform some computation and discard its result, for example in the following code:
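Something along these lines (a minimal sketch, with putStrLn playing the role of console.log):

logTwice :: IO ()
logTwice =
  putStrLn "first message" >>
  putStrLn "second message"

-- the same thing written with >>= just ignores the argument explicitly:
-- putStrLn "first message" >>= \_ -> putStrLn "second message"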
We have not yet covered the IO monad in the series, but what this is doing is performing side effects (this is like a console.log in Haskell) one after the other and discarding their values, because we do not care about them; we just want to perform the action of logging to the console.
Hope it is clearer now, but do not worry: the important part about Monads is just >>=!
Can't wait for more! I'd love to see something related to state handling. State, Writer and Reader monads in Haskell seem over-engineered sometimes.
Sometimes, a specialized >> can have better performance characteristics than bind with a function that ignores its argument. If you don't use the results of an action, you can run those actions in parallel instead of sequencing them!
Also, there was this idea that, using class constraints, we could reuse *> from Applicative. Some nasty bugs happened with legacy code and the decision was reverted. We didn't have the Applicative class at the start!
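For context, a rough sketch of the two definitions in question (abridged, not a verbatim copy of the class in base):

class Applicative m => Monad m where
  (>>=) :: m a -> (a -> m b) -> m b
  (>>)  :: m a -> m b -> m b
  m >> k = m >>= \_ -> k   -- current default: bind with a constant function
  -- the reverted idea was roughly:  (>>) = (*>)

A specialised instance can still override >> directly when that is cheaper than going through >>=.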
Sure, and in Elm it seems so intuitive to me. I mean, I love Haskell as much as Elm, but I think it has too many ways of doing things, and for beginners (like me) implementing state can become tedious…
My point is, I'm not brilliant, I like to work with things that are easy to reason about, and I think something like Elm state handling would fit very well in Haskell. But if this way to do it is not the norm, then I guess it's because it's not needed, or it's unviable in Haskell, so who knows…
Wow, it's just there for *gulp*… practical reasons? This is a revelation to me; I always assume with Haskell that everything is there for some complex reason I don't understand, probably something to do with type theory. This, combined with the "pure" vs "return" history and the simple explanation of forall in the article linked in the post (both of which had confused me), has taken some of Haskell's mystique away.
In fact, as Haskell was born as a research language, it did not even have a way to perform side effects at the beginning! The IO monad was "discovered/invented" later to be able to do useful stuff with Haskell, and the hierarchy described in this series of blog posts wasn't like that from the beginning!
That's why we sometimes have duplication or redundant functions, and why, because of backwards compatibility, the String type sucks, for example! Definitely not mysterious, but a language in constant evolution!
The only thing the Mr. Pointy operator does is sequencing two actions while discarding any result value of the first action.
I think there is a subtle error in that statement.
If m is a Monad then the statement follows, but from the type of >> alone you cannot make that statement. The implementer of >>= has to ensure that the instance satisfies the Monad laws so that >> will work in the way we want it to work. All this to say that the Monad laws are a vital part of it all.
The reason I stress this is that I've seen people write andThen functions in Elm that don't satisfy the monad laws. In practice it means you can't make certain transformations on your code, because the reasoning will be flawed.
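As a made-up illustration (the Counted type and its functions are hypothetical, just to show the shape of the problem), here is an andThen-style function that does extra bookkeeping and thereby breaks left identity, so inlining a "return" is no longer a safe refactoring:

newtype Counted a = Counted (Int, a) deriving Show

pureC :: a -> Counted a
pureC x = Counted (0, x)

andThenC :: Counted a -> (a -> Counted b) -> Counted b
andThenC (Counted (n, x)) f =
  let Counted (m, y) = f x
  in Counted (n + m + 1, y)   -- the extra "+ 1" is what breaks the laws

-- Left identity would require:  pureC x `andThenC` f  ==  f x
-- but here:
--   pureC 5 `andThenC` (\x -> pureC (x * 2))  ==>  Counted (1, 10)
--   (\x -> pureC (x * 2)) 5                   ==>  Counted (0, 10)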
Thanks for your input! Actually, that line is almost identical to this comment in the implementation in GHC.Base:
-- | Sequentially compose two actions, discarding any value produced
-- by the first, like sequencing operators (such as the semicolon)
-- in imperative languages.
Just a bit of warning about the "parallel" part: you don't want this with monads at all. Indeed, monads are one way of making sure that you "sequence" the operations (order matters). Take the example here: you don't want to run both prints in parallel (you really want only one thing writing to stdout at a time), and you want to keep the order.
It might be a bit hidden, but this sequencing is built into the monad via the type of >>=: with generic types, in a >>= b >>= c you can only evaluate b (with its side effects) once you have got a's result, and similarly with c.
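A tiny sketch of that dependency, using plain getLine/putStrLn:

greet :: IO ()
greet =
  getLine >>= \name ->           -- getLine has to run first to produce name
  putStrLn ("Hello, " ++ name)   -- only then can this next action be built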
Semigroups/Monoids are where you'll want to look for parallelism opportunities, as these are pure but still come with an associativity law, so you can rearrange
a <> b <> c <> d
into
(a <> b) <> (c <> d)
and do a <> b and c <> d in parallel before combining the results of those
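A rough sketch of what that could look like (assuming the parallel package's Control.Parallel; whether the sparks actually pay off depends on how much work <> does and how far it evaluates):

import Control.Parallel (par, pseq)

parMconcat :: Monoid m => [m] -> m
parMconcat []  = mempty
parMconcat [x] = x
parMconcat xs  =
  let (ls, rs) = splitAt (length xs `div` 2) xs
      l = parMconcat ls
      r = parMconcat rs
  in l `par` (r `pseq` (l <> r))   -- spark both halves, then combine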
That depends on the monad instance. There are no monad laws about the >> operator, because it is really not an operator of the categorical monad. So it is really a choice dictated by the semantics of the specific monad instance.
Some instances treat >> as a shortcut to Applicative's *>. It's kinda iffy because of all that backwards compatibility and the lack of an Applicative constraint on the Monad class.
Your example is absolutely valid. But that's a result of IO's semantics. It's not a coincidence that IO has no Applicative instance.
What? I think we are talking about different things here. Assuming you are talking about Haskell's IO, then this is wrong (see here for example), and it has been wrong for quite some time (Monad should always have Applicative as its superclass, this is straight from the math, and Haskell fixed this a while back).
PS: I've never heard of >> as anything other than sequencing. Here is the description for it from Haskell's Prelude:
Sequentially compose two actions, discarding any value produced by the first, like sequencing operators (such as the semicolon) in imperative languages.
Yup! I am wrong on this one; my mind jumped around between several things and I lost context. Sorry for the confusion.
On the bigger picture, I was thinking about the old proposal where >> could be implemented by default as *> from Applicative, and about the possibilities for static analysis: you could "parallelise" or "skip" some effects if you can ensure that they're not needed. I can't share any production code here, so let's say I step back from the argument and return with better examples.