Wondering about Typeclasses

I understand that omitting typeclasses from Elm was intentional and agree that it vastly simplifies the resulting code. However, sometimes I am annoyed by things like this:

Every module seems to export its own map function! Coming from Haskell, I see this as completely excessive compared to Haskell’s generic fmap.
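For example, mapping the same function over three different containers means reaching for three differently qualified functions (a small illustration using elm/core types):

double : Int -> Int
double n = n * 2

doubledList : List Int
doubledList = List.map double [ 1, 2, 3 ]

doubledMaybe : Maybe Int
doubledMaybe = Maybe.map double (Just 1)

doubledResult : Result String Int
doubledResult = Result.map double (Ok 1)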

This really bothers me because it’s such an integral part of functional programming and it’s handled so (IMO) badly.

Of course, the Elm compiler could add another “magic type variable” (which I think is more confusing than type constraints, and less versatile) for functors… but why do that when they could instead allow us to define our own?

The arguments against type constraints seem mostly to appeal to simplicity, but that seems pretty weak to me. Who can honestly argue that tens of map implementations are simpler than one fmap that just works?

Anyway, I’m curious mainly why people in the community are against (or for) type constraints. Don’t get me wrong—I don’t want to make Elm into Haskell. However, this fixture of functional programming (from my viewpoint) being completely absent shocked me when I was first learning Elm, so I’m wondering why the decision was made to not include it.

7 Likes

I can’t speak for anyone but myself. I find the pattern of Type.map to be a lot better for refactoring than a generic fmap. If I decide to change from a List to a Set or Dict then not only is the syntax very clear but so is the error message. The same applies to Html and Svg and other view related types.

This also extends to not having a way to implement a typeclass for types you create. Instead we’re guided towards implementing our own Type.map. Similar to how we’re guided towards implementing our own Type.compare and Type.equal.

I’d also argue that there isn’t a single implementation of fmap, just a single definition. In Elm you have

map : (a -> b) -> MyType a -> MyType b
map fn myType = ...

while in Haskell you have

instance Functor MyType where
  fmap fn myType = ...

(please pardon my incomplete Haskell syntax). So, as we can see, the difference isn’t in the quantity of implementations but in the syntax for writing them and the syntax for using them.

But again, this is just my opinion from my experience of the 2 languages.

3 Likes

I don’t think Elm particularly needs typeclasses but I don’t agree with this reasoning.

The general idea is to make refactoring easier. You’re free to swap types as your program and requirements evolve, and as long as they implement fmap, your logic can stay essentially the same.

Changing from a List to a Set right now involves not only changing your Model and other types but also every single function that used List functions like map or filter. With typeclasses you can fearlessly refactor and play around with different data structures with very little busywork.

All errors suck until the author(s) invest the time into making them nice. Elm did this in other areas; there’s no theoretical reason why error messages for typeclasses couldn’t be equally nice.

I’m not sure what idea you’re expressing here: if we had a functor typeclass, you would still have to define map yourself; there’s no default implementation for something like that.

If anything, typeclasses guide you towards following established patterns by enforcing naming. There’s nothing stopping someone from implementing a Type.transform function that is operationally equivalent to Type.map but has a different name. This harms DX because instead of caring about what the type can do, we have to care about what the type names things.
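For instance, nothing prevents a package from shipping something like this (a hypothetical Stack type, purely for illustration):

type Stack a
    = Stack (List a)

-- Operationally this is Stack.map; it just isn’t called that.
transform : (a -> b) -> Stack a -> Stack b
transform fn (Stack items) =
    Stack (List.map fn items)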

And that’s not to say that if you think Type.transform reads better for the API you’re writing, you then can’t use typeclasses: by implementing Functor you can keep your API-specific terminology while also letting developers use Functor.map as the global terminology for this operation.

8 Likes

fmap is simpler once you understand it. Not that it’s particularly complex, but Elm tries very hard to be beginner-friendly: even if it takes more work, there is less thinking involved in risk-free refactoring than in using moderately advanced tools to avoid refactoring in the first place.

I don’t think “people in the community are against type classes (or other forms of polymorphism in general)” is the correct interpretation; it’s mostly the author of the language who doesn’t like them. I’ve read a few blog posts and watched a lot of talks from Evan, but I don’t think he ever gave a real explanation beyond “you don’t really need them”.

Since he has a very precise plan for the language and is not interested in feedback, over the years the developers that really wanted some kind of typeclass system were either alienated from the community or simply accepted that things are not going to change.

The typeclasses discussion on Elm has gone on for as long as Elm has been around (~9yrs), lots of angles have been covered and there is no indication this is going to change. You can find all the discussions by just searching for “typeclasses” on https://groups.google.com/g/elm-discuss/ or google searching for “elm type classes”.

People coming from Haskell are always surprised by a lack of type classes and want them added, and people coming from JS are relieved they can pick up the language without this barrier to entry.

The core team has shown they’re rather sick of repeatedly discussing it.

3 Likes

That sounds like you’ve written something like map f myList and you’ve triggered available code intentions for map? Are you annoyed by the long list of options to choose from? If so – the convention in Elm is to qualify imported functions, making it List.map f myList, which would avoid the “long list of import suggestions” problem.

4 Likes

For ease of refactoring I’m also thinking of things like reading code and knowing what data structures I’m working with. For me, seeing Dict.map or List.map is explicit about what I’m doing while fmap just tells me I’m mapping over something. I’m specifically thinking of code like

someData
    |> Dict.map someFn
    |> Dict.toList
    |> List.map otherFn
    |> List.head
    |> Maybe.map otherOtherFn

which in Haskell would read kind of like

someData
    |> fmap someFn
    |> Dict.toList
    |> fmap otherFn
    |> List.head
    |> fmap otherOtherFn

assuming you used |> in Haskell. The Elm code, for me, is clearer about what’s happening. In Elm I know from the first function that I’m working with a Dict, while in Haskell I don’t know until Dict.toList is called. I’m also thinking about things like text-to-speech for blind programmers: they won’t know the type of data structure they’re working with until the Dict.toList either.

There’s also nothing stopping me from writing a typeclass in Haskell with a method called ftransform that’s equivalent to fmap.

I do get that a lot of the error message stuff is about time and not necessarily about typeclasses or any other feature. I do wonder whether it’s as easy to create the same quality of error messages. That I can’t answer.

But this is all very subjective in regards to how I experience coding.

Edit:
@pd-andy to your point about people implementing transform instead of map, I have come across that before; for example, List.Nonempty - elm-nonempty-list 4.2.0 has fromElement : a -> Nonempty a when most other data structure packages have singleton : a -> DataStructure a.

2 Likes

That is not “what I’m doing” but “how I’m doing it”. For the application it is irrelevant whether you map over a list or a dict: you map over a collection of objects. Implementation details leak out and muddle the intent.

To be more on topic: type classes provide very little benefit without HKT, and we can safely assume that no one is interested in implementing HKT in Elm.

I’ve lost count of how many times I’ve had to write sequence/traverse-like functions for my custom types, because Elm does not have any mechanism for this kind of polymorphism.
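As a sketch of what that looks like in practice (the name is mine), here is one such hand-written traversal; a Result version, a Dict version, or one for a custom tree type each has to be written out separately, because there is no Traversable to abstract over them:

-- Traverse a List with a Maybe-producing function, collecting the results.
traverseMaybe : (a -> Maybe b) -> List a -> Maybe (List b)
traverseMaybe toMaybe list =
    List.foldr (\x acc -> Maybe.map2 (::) (toMaybe x) acc) (Just []) list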

2 Likes

I have to agree here. I think people arrive expecting the kind of polymorphism they get in Haskell and assume that “just implementing typeclasses” would give Elm code the same expressiveness. But Haskell has a lot more machinery to pull this style of programming off, HKT and GADTs probably being the most commonly missed pieces. I even think that if you implemented those, you could get quite far in Elm with a scrap-your-typeclasses style without bothering with a language-level typeclass implementation.
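(For anyone unfamiliar with the term, “scrap your typeclasses” roughly means passing the would-be instance around as an ordinary value. A rough sketch of that style in today’s Elm, with made-up names:)

-- A typeclass instance becomes a plain record of functions that the
-- caller passes in explicitly.
type alias Eq a =
    { equal : a -> a -> Bool }

member : Eq a -> a -> List a -> Bool
member eq x list =
    List.any (eq.equal x) list

caseInsensitive : Eq String
caseInsensitive =
    { equal = \a b -> String.toLower a == String.toLower b }

So member caseInsensitive "FOO" [ "foo", "bar" ] is True. Without HKT this only goes so far: you can pass an Eq or an Ord around like this, but you can’t write the analogous record for Functor, because its type mentions f a for an unapplied f.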

That being said, you would be increasing the type-level complexity significantly. You then potentially (as in, perhaps not, but significant effort would be required not to) start trading away some of the desirable features that Elm has: super fast compilation, excellent error messages, full type inference, etc.

3 Likes

Yes, you’re correct.

It’s not so much that I see a long list of imports; rather that all those functions exist, and more importantly need to exist, because of the lack of typeclasses.

I really appreciate this. It gives a reason that I hadn’t really thought of. So thanks for that.

I don’t understand what you mean. Though Elm doesn’t have the syntax for expressing HKTs, it certainly has them—just look at any type with a type parameter. As for GADTs, I’d love to hear where these are required. As a Haskell programmer I’ve never had to use them.

Not really. You need to be able to pass a not-fully-applied type (of kind * -> *) as a type parameter. That is impossible in Elm: you can’t write a type such as List Maybe (note the missing type parameter for Maybe).

GADTs are not required by any means. They are useful for encoding information more strictly in the types.

When specifically? Sorry, I’m having trouble imagining a situation where that would be necessary. Wouldn’t you just do List (Maybe a)?

Edit: i.e., as opposed to type A = List Maybe, you would write type A a = List (Maybe a).

Found another: String.join is functionally equivalent to List.intercalate. This arises from the fact that Strings are not List Char (as in Haskell) but rather their own type, necessitating the duplication. Of course strings are their own object in JS, but they act enough like arrays that Elm could have pretended they are lists.

I agree that the Type.operation syntax is nice, except when you want to add a function to that module… whereas in Haskell I would just make an instance Typeclass Type for whatever typeclass I need a method from (map, filter, etc).

Edit: A concrete example would be when I had to define any and all for Arrays and ended up naming them arrayAny and arrayAll.
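Roughly like this (a sketch, using Array.foldl from elm/core, since the Array module has no any/all of its own):

import Array exposing (Array)

arrayAny : (a -> Bool) -> Array a -> Bool
arrayAny isOkay array =
    Array.foldl (\x acc -> acc || isOkay x) False array

arrayAll : (a -> Bool) -> Array a -> Bool
arrayAll isOkay array =
    Array.foldl (\x acc -> acc && isOkay x) True array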

When you’re creating a type class instance for your type, at least. It is instance Functor Maybe where, not instance Functor Maybe a where, after all.

1 Like

Yeah, that’s a good example. Thanks. However, I don’t see how kind is relevant: regardless of the number of type parameters a type has, your instance must handle all of them.

This github issue provides more info.

The way I understand it, once you add a feature like this, it will be there for the long run. This is not some peripheral feature that can be easily ignored; this is the kind of feature that makes or breaks a language.

It is the kind of feature that makes the language more complex to learn. It also increases the complexity of the implementation and affects the compiler’s performance profile.

If you want to better understand why it was not included, you need to look at the downsides more carefully. If you only look at the good things that it enables you are not seeing the full picture.

6 Likes

You absolutely need higher-kinded types if you want type classes to be useful at all; otherwise you’re stuck with Ord and Semigroup, and you cannot define Functor or Monad (for example).

Let’s look at the simplified definition of functor:

class Functor f where
    fmap :: (a -> b) -> f a -> f b

f is a higher-kinded type variable: its kind is (* -> *), and it means we can, for example, define a Functor instance for List that is valid for all inhabitants of List. It’s an essential property because we need the ability to express the f a -> f b part of map, that is, “the ‘outer’ type is unchanged”.

2 Likes

A large portion of the discussion assumes that if we had Functors, you suddenly could not or would not write List.map or Maybe.map anymore.

This is entirely not the case. Where parts of your code are explicitly dependent on the data structure you use, it makes complete sense to use the qualified versions of such functions because they communicate an important detail of the implementation:

someData
    |> Dict.map someFn
    |> Dict.toList
    |> List.map otherFn
    |> List.head
    |> Maybe.map otherOtherFn

Rewriting this to use <$> or fmap as @wolfadex exemplifies is certainly possible (and perhaps idiomatic elsewhere, but we are discussing Elm; type classes don’t erase Elm’s idioms), but we would lose a lot of important context around the implementation of this transformation.

What type classes chiefly provide is a way to abstractly express some computation where the implementation is not a primary concern.

Consider:

parseFloat : String -> Maybe Float
parseFloat s = ....

foo = parseFloat ".14" |> Maybe.map ((*) 2)

Let’s say we want to upgrade our parseFloat function to

parseFloat : String -> Result ParseError Float

We’re now forced to modify the code inside foo, updating Maybe.map to Result.map. This sort of busywork is not time meaningfully spent, nor has it changed how we understand foo in any significant way. foo’s operation remains the same regardless of the type returned by parseFloat, be it Maybe, Result, or some custom type.
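Spelled out, foo has to change from Maybe.map to Result.map even though it expresses the exact same transformation:

foo = parseFloat ".14" |> Result.map ((*) 2)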

It may seem like a minor thing, but remember that Elm, and functional programming in general, are examples of declarative programming. The goal is to concern ourselves with implementation details as little as possible so we can focus on what our code does, not how it does it.

If foo were instead defined as:

foo = parseFloat ".14" |> Functor.map ((*) 2)

This communicates a very different idea. Now as a reader of this function we know that the specific data structure used in this algorithm is inconsequential, and we can focus our attention on the map operation itself. Of course, such an example is trivial, but you can see how such an abstraction of types can make code easier to understand in scenarios where the type is not the main concern.

6 Likes