Domain Driven Type Narrowing

One thing that nags me about extensible records in their current implementation is that a record field name is not tied to a module, the way that a (potentially opaque) type constructor is, and thus lacks semantic import:

module Widget exposing (Widget, Doodad)

type Widget
    = GoodWidget ...
    | BadWidget ...

type alias Doodad =
    { length : Float
    , ...
    }

In this example, everything about “Widget”, “GoodWidget”, and “BadWidget” is implementation controlled and possibly made/kept opaque by the Widget module. Its context and semantics can be well defined, and third parties have no footing with which to make incorrect assumptions about these semantics: the compiler can be made to defend them as the API evolves.

Doodad is controlled by Widget in principle, but the “length” field name (and any other field names) are not. This nags me because a Doodad’s “length” may mean something very different from a song’s length (different implied units and measurement dimensions), for example.

Thus having a function like:

doSomethingWithLength : { a | length : Float } -> { a | length : Float }
doSomethingWithLength input =
    { input | length = input.length + 2 }

can easily be made to confuse two semantic concepts of length that may have good reason to be distinct, lends itself to intruding on implementation details, and IMHO can lead to many kinds of sloppy habits.

While I agree that flat models have much to be said for their convenience, I also like the semantic certainty of types that can be made opaque. In particular, I feel that any function which needs to operate upon “part of” a larger value may be best expressed as operating upon a type (and thus possibly being a function from that type’s module, or a helper module thereupon); said type can then be an element of the larger aggregate object (which implies depth instead of flatness).

The justification here being that a function implies a behavior and a certain expectation upon the nature of its inputs, which in turn is best modeled as a semantic fact. “I make things longer” (by way of altering a float labeled “length” in a record somewhere) does a poor job of encapsulating this idea, because it washes out all context of unit, dimension, why the thing needs to be longer, how much longer, what happens if our needs change, and what happens if our understanding of the thing with length evolves over time (perhaps now it also has a width? Or a tempo?).

It feels like extensible records also do a lot to encourage the leaky abstraction of primitive typing. Primitive types are (over)broad and thus more easily lend themselves to reuse and partial access through different functions that try to be everything to everybody.

Now I don’t mean to demonize extensible records or ask that they go away, but I feel it is worth a warning not to let that pattern go to one’s head. In particular, if one needs to use a function on a portion of a data object, I think it’s always worth at least considering making that portion into its own distinct type with explicit semantics.

2 Likes

Not convinced. I think the problem is that you defined length as a Float when you are wanting a DoodadLength. If you did that, the extensible records in the doSomethingWithLength function signature would not present such an issue.

doSomethingWithLength : { a | length : DoodadLength } -> { a | length : DoodadLength }
doSomethingWithLength input =
    { input | length = Doodad.plus input.length (Doodad.from 2) }
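
For completeness, here is a minimal sketch of the opaque module that snippet assumes (Doodad, DoodadLength, plus, and from are hypothetical names from the example, not a published API):

module Doodad exposing (DoodadLength, from, plus)

-- Opaque: the Float inside is not exposed, so all arithmetic must
-- go through this module.
type DoodadLength
    = DoodadLength Float

from : Float -> DoodadLength
from =
    DoodadLength

plus : DoodadLength -> DoodadLength -> DoodadLength
plus (DoodadLength a) (DoodadLength b) =
    DoodadLength (a + b)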

I wonder if so-called Hexagonal Architecture could provide some inspiration here:

The idea is that in the core of your software system you have the domain model, and around it other modules that do whatever is needed to interface that to the environment. So in a back-end service you might have a module to interface onto the database, another to interface onto the HTTP API, another to hook into the enterprise event bus, and so on. But in the middle you will have the domain model, and all data coming into that will be parsed (not validated) into a valid domain model that rules out illegal states.

I feel sure you could build an Elm application following this pattern, with modules to interact with the domain as a UI, and others to talk to back-end APIs and so on.

One question is to what extent would the core be a pure representation of the domain? For example the remote data pattern might intrude into it, where you need to represent that some part of the domain model has not yet been loaded from the back-end.

1 Like

… but in that circumstance one would be better served by having the DoodadLength be the input, instead of the parent object which happens to have a length property. That would also be easier to justify, given the semantic import of DoodadLength and the lack of semantic import of “some record that has a length property”.

Since we are talking about Doodads we are already getting far away from talking about much that is even sensible, so we probably should not split hairs over this example. Richard gives some more realistic examples of extensible records in his talk.

What I was trying to point out is that records and custom types are orthogonal concepts that you can combine; it’s not a question of choosing one or the other.

1 Like

In saying this I mean that I agree entirely, and that after watching Richard’s talk about Custom Types, Extensible Records, and narrowing the focus of function signatures, I understood the significance of these techniques for supporting Domain modelling as first class - but was no closer to knowing how to put it all together.

If data modelling is the expression of domain modelling, and data modelling also informs the domain model, this quote really helps to frame both and present the ‘why’ of Elm’s unique data modelling challenges.

Given our desire to design with TEA always in mind, as fundamental to data modelling in Elm, how can we mature that which Richard gave us a glimpse of in his talk Scaling Elm Apps - and, understand its implementation within the context of, and with a greater emphasis on, domain modelling? Does this make sense?

Domain modelling (specifically, DDD) gave me a way to semantically reason about data modelling, and frames my approach to programming. Programming isn’t a bottom up thing for me, which may be somewhat unusual.

It’s why Parsing data into a semantically (meaningfully) significant domain entity was such a revelation:

Which led me to read Alexis King’s Parse, Don’t Validate (I need to reread that again!).

Parsing data into Typed domain entities just sets us up for narrowing function signatures to focus usage of those Typed entities. Richard presents this in his talk as a way to simplify scaling, but for me it supports the continued expression of domain modelling.

If we wanted to do the opposite of developing in an anaemic way, extending our domain entities into function signatures certainly helps.

Coming back to Extensible Records (Extensible Alias Types?), I’m still not sure about their use as a mainstream pattern to assist with narrowing types.

In Richard’s talk he seems to use them to avoid nesting records and a reliance on ‘annoying’ dot notation.

Either I have to watch it a few more times or it really is missing some crucial insights.

It’s not really clear if the model is flattened, if Extensible Records are a bit like lenses that make access to nesting simpler and more direct, or if Extensible Records enable the data modelling benefits of nesting without actually nesting the Model.

Yes, I think maybe you somehow got the idea that extensible records should be used to narrow the inputs to functions, or narrow what data is valid in a particular domain. Option 1 in @joelq’s explanation:

  1. A “narrower type” restricts potential inputs to a function

They are not for this, and I think they do not really have much (any?) use in domain modelling as such. Use custom types for this purpose.

It’s a bit more than just avoiding annoying notation. He is promoting the idea of moving away from the OO idea of encapsulation, where data (and state) and code always exist together in objects (or classes) as the fundamental unit of program design. We don’t have to follow that pattern in Elm: the state in an Elm application is held in a single Model, and you can split that up independently of how you split up the code.

He is also talking mostly about using extensible records with the Models of Elm applications specifically. Not for designing Elm packages where encapsulation may be more desirable (to hide implementation details, allowing the API to remain the same while the implementation is changed).

This desire for encapsulation leads to writing Elm applications that try to place state in nested Models and then only make changes to the state through update functions.

Suppose I am writing a mapping application, and the map data is split into two layers. On the bottom layer I have the physical map, and on a layer on top I have a drawing/annotation layer. The map data and the drawings are good candidates for domain modelling - they are the domain of this application, and they need to be structured to work in specific ways that will enable the users of the application to work with them easily, accurately, and to automate complex tasks over them as the application becomes more powerful and hence valuable to its users.

We start out with everything in one big file. But the file quickly gets very long and then we need to think: how can I scale out this application as it grows? The temptation might be to split up the map layer and drawing layer into their own (Model, Msg, init, update, view, subscriptions) as OO-like components.

That is, we would turn this:

module Top exposing (..)

type alias Main =
    { -- There are lots of map fields, and growing.
      mapData : PolygonMesh -- Domain object
    , northEastCorner : LongLat -- Domain object
    , southWestCorner : LongLat -- Domain object

    -- ... and a tonne more map fields.
    -- And there are lots of drawing fields, and growing.
    , drawing : Array Polygon -- Domain object
    , drawingOwner : Username -- Domain object
    , version : Version -- Domain object

    -- ... and a tonne more drawing fields.
    }

into this:

module Top exposing (..)

import Drawing
import Map

type alias Main =
    { mapLayer : Map.Model
    , drawingLayer : Drawing.Model
    }

That will work and may even be a perfectly suited way of scaling this particular application.

The problem comes when the map and drawing layers start interacting. Maybe if some new data is added to the map layer, it needs to add some things to the drawing layer as a result? Perhaps features of the drawing layer can be pushed down to the map layer, like a planned building is constructed and now becomes a permanent feature of the map? and so on.

I find it is often the case that if we split the model up like this prematurely, driven by OO thinking, it makes things more complicated than they need to be in the long run. Specifically, the need to handle interaction between the layers - to achieve this we might add ‘out messages’ to their update functions, and then have a top-level update function that routes all the out messages where they need to go. It’s messy, there is a lot of boilerplate involved, and we only did it because we insisted on the idea of encapsulation from the start.

If we keep the model flat, but restrict the implementation of functions that operate over it using extensible records, we can take any slice of the model that we need to work with. Now the updates to propagate changes to both layers can be simple functions over whatever mix of fields is needed to achieve that.

As the code is getting too big for one file, I can now split it out into 3 files: one for code that updates the map, one for code that updates the drawing, and one for code that makes updates across the layers. This structure is simpler and more robust as the application evolves.
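
A rough sketch of what that can look like (the field and type names echo the hypothetical model above):

-- Only sees the map slice of the flat model:
setMapData : PolygonMesh -> { a | mapData : PolygonMesh } -> { a | mapData : PolygonMesh }
setMapData newMesh model =
    { model | mapData = newMesh }

-- A cross-layer update sees fields from both layers, and nothing
-- more (mergeIntoMesh is another hypothetical domain function):
commitDrawingToMap :
    { a | mapData : PolygonMesh, drawing : Array Polygon }
    -> { a | mapData : PolygonMesh, drawing : Array Polygon }
commitDrawingToMap model =
    { model
        | mapData = mergeIntoMesh model.drawing model.mapData
        , drawing = Array.empty
    }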

Yeah, it is a bit like a lens that selects a subset of fields from a record. But it’s a built-in feature of the language, so it saves you having to write a load of lens get and set functions to do it - if you implemented lenses like this, for example: Monocle.Lens - elm-monocle 2.2.0
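
For comparison, here is what the hand-written equivalent might look like, using the same { get, set } record shape as Monocle.Lens:

type alias Lens a b =
    { get : a -> b
    , set : b -> a -> a
    }

-- The extensible record { a | length : Float } stands for "any
-- record with a length field" - one line of built-in syntax
-- replaces this boilerplate.
lengthLens : Lens { a | length : Float } Float
lengthLens =
    { get = .length
    , set = \value record -> { record | length = value }
    }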

1 Like

In my own code, I don’t design Elm programs with TEA as the fundamental building block. Instead I design a series of domain types and functions (a coworker described this as the “library” approach to building an app). TEA is just a layer to map interactions with the outside world to my domain operations.

Consider this example from one of my old gamejam entries:

TEA logic is contained in Main.elm. The only thing update does is call domain functions from the Game module in response to external events. All the actual game logic is in Game.

You don’t have to use it with TEA. Instead you might have a deterministic script that starts with an initial game value and applies a hard-coded set of domain operations on it. You could even have an AI that calls a bunch of domain operations on a game until it wins or loses.
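
For instance, a hypothetical sketch (all function names invented for illustration, not the actual Game API):

-- Driving the domain with a hard-coded script instead of TEA:
replay : Game
replay =
    Game.start
        |> Game.movePlayer Left
        |> Game.movePlayer Up
        |> Game.endTurn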

7 Likes

Would I be right in guessing that these Game functions all operate over extensible records that provide the appropriate subset of model fields that they need (and nothing more)?

Another interesting aspect of this is that you do not have a Game.update function, but instead have specific functions for the updates that Game needs to make. I think it is a common mistake to assume that an update function is always needed, and that it must take this kind of shape:

module NotSoNicelyDesignedGame exposing (..)

type Msg
    = ...

update : Msg -> Game a -> ( Game a, Cmd Msg )
update =
    ...

That is, the assumption that we must always encode parameters to updates as some kind of Msg. Again, this leads to thinking in terms of out messages, and routing messages between modules, as per the actor pattern.

The way you have done it is a much more natural fit to Elm.

Perhaps we could say that the benefit of extensible records in domain modelling is that they allow large domains to be split up easily, and on demand, without having to figure out in advance or commit to what chunks to split the domain up into - re-chunking is always possible as an application grows.

2 Likes

@rupert @joelq

I’m also very interested in the use of Extensible Records in this code. Most look like one-liners touching a single field.

Talking about TEA, how does the Game architecture/design compare with Richard’s revised Real World app? Are you both doing something similar? What would you call it, or how would you describe it?

How can we use Modules, Custom Types, Record Types (Type Aliases), Extensible Records, dot notation, architecture/structure, and so on, to optimally model and use domain entities (the user related data) in Elm?

What does Domain Driven Type Design look like in Elm when we put it all together under a common language used to describe it and a language which emphasises the domain-first aspect as overarching all the techniques and mechanisms used in Elm to accomplish this?

I think this last question arises because it seems that 99% of Elm people who talk about or mention domain modelling do so in a way that presents the practice as secondary to the mechanics of bottom-up, non-domain (perhaps Type Driven) reasoning.

In terms of describing domain modelling in Elm using DDD there is likely going to be a lot of that prescription which is definitely not applicable to Elm, because of scale or the incompatible OOP’ness of classic DDD.

I guess the best translation of DDD into a functional language and at smaller scale would be provided by Scott Wlaschin.

Maybe the Elmish community has some good material that presents his functional DDD in an Elm-like architecture.

Weird, I realise that I’ve been using Extensible Records as a way to update values in a record but not to narrow.

import Task
import Time

type ButtonStatus
    = EnabledButton
    | DisabledButton

type TaskStartTime
    = TaskStartTime Time.Posix

type Msg
    = ButtonClickedMsg
    | TaskStartTimeMsg Time.Posix

type alias Model =
    { taskStartTime : TaskStartTime
    , buttonStatus : ButtonStatus
    }

getTaskStartTime : Cmd Msg
getTaskStartTime =
    Task.perform TaskStartTimeMsg Time.now

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        ButtonClickedMsg ->
            ( { model
                | buttonStatus = DisabledButton
              }
            , getTaskStartTime
            )

        TaskStartTimeMsg posixTime ->
            ( { model
                | taskStartTime = TaskStartTime posixTime
              }
            , Cmd.none
            )

There are no extensible records in the example code you just posted. Only regular records. The record update syntax ({ model | field = value }) looks a lot like the extensible record syntax ({ a | field : Type }), but it operates on ordinary records.

Edit: I suppose the type of \model -> { model | field = value } is { model | field : a } -> { model | field : a }, so there are extensible records from that perspective.
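
For example, wrapping the update syntax in a reusable function does give it an extensible record type (setLength is just an illustrative name):

-- The inferred type is extensible: any record with a length field.
setLength : Float -> { a | length : Float } -> { a | length : Float }
setLength value record =
    { record | length = value }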

2 Likes

I’m not using extensible records in this project (source code). Game is a big custom type handling the different states a game can be in:

type Game
    = Intro
    | Playing State
    | Lost State LossReason
    | Won State

Most of the data from an active game is part of a State record. This can be modified when in the Playing state and is frozen in a Lost or Won state.

type alias State =
    { river : River
    , twinPosition : Coordinate.World
    , yDirection : YDirection
    , scores : List Feet
    }

A key insight I had was that I generally had two kinds of actions:

  1. State -> State which only changed the current playing state (e.g. moveTwinsDownstream)
  2. State -> Game which transitioned from one game status to another in response to a playing state change (e.g. checkLoseCondition)

I created Game.map and Game.andThen helpers to compose these two types of actions. In retrospect, I probably shouldn’t have used those names :sweat_smile:
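
A sketch of what such helpers might look like (reconstructed from the Game type above, not the project’s actual code):

-- Apply a State -> State action, but only while the game is in play:
map : (State -> State) -> Game -> Game
map f game =
    case game of
        Playing state ->
            Playing (f state)

        _ ->
            game

-- Apply a State -> Game action that may transition the game status:
andThen : (State -> Game) -> Game -> Game
andThen f game =
    case game of
        Playing state ->
            f state

        _ ->
            game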

I wrote a discourse thread about learnings from this project.

1 Like

In general, I’ve found that extensible records are a niche tool and not particularly useful when it comes to modeling in Elm.

I wrote an article about different ways we can use Elm’s types to help create a richer model on gamejam projects. It includes an example where I initially used extensible records but then pivoted to a different design that was more flexible for my needs.

2 Likes

Extensible records are not so likely to help with domain modelling - here is why I think that.

Records are products, in the mathematical sense of cross products of sets. So if the type Int denotes the possible integers in Elm, of which there are 2^53, then the record { x : Int, y : Int } denotes the cross product of a pair of integers, of which there are 2^(53+53) = 2^106 in Elm. Records make for big sets of possible states.

The techniques for domain modelling as described by Scott Wlaschin’s book “Domain Modeling Made Functional: Tackle Software Complexity with Domain-Driven Design and F#” generally focus on tightening up the domain, to only admit some minimal set of possible states. So we could have type alias UserRating = Int, but if users are only allowed to give 1 to 5 stars as a rating, I can avoid bad data by narrowing this down to some domain model that only admits 5 possible values, cutting the state space down from 2^53 to 5. Domain modelling is often about keeping the state space small.
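
A sketch of that narrowing (the module layout is mine):

module UserRating exposing (UserRating(..), fromInt)

-- Exactly 5 values inhabit this type, down from 2^53:
type UserRating
    = OneStar
    | TwoStars
    | ThreeStars
    | FourStars
    | FiveStars

-- Parse a raw Int into the narrow type, rejecting everything else:
fromInt : Int -> Maybe UserRating
fromInt n =
    case n of
        1 -> Just OneStar
        2 -> Just TwoStars
        3 -> Just ThreeStars
        4 -> Just FourStars
        5 -> Just FiveStars
        _ -> Nothing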

Where do we often get large records/state spaces? In applications. This is because applications deal with a lot more than just the relatively clean domain model; they deal with a lot of other state too: one or more backend APIs, authentication tokens, user profiles, timeouts, view state (popups, menu states, page state, navigation), configuration, unsanitized inputs, and so on. So we tend to find that application state grows, and extensible records can often provide a way of slicing that up so that we can deal with it in more manageable chunks, and write functions over it that only see as much as they need to see. So it’s a great tool for managing application sprawl.

I guess there are domains with large flat state spaces too? So extensible records might help there, but it seems less likely.

2 Likes

The sense I got from watching Richard describe Extensible Records was that they can help support data modelling in the way most suited to Elm, which appears to mean without being forced to nest data, and (if I might add) to nicely complement domain modelling.

This support for domain modelling, while not using Extensible Records directly for data modelling, is what I have been wanting to explore in this thread.

Recently @joelq tweeted about the Phantom Builder Pattern and I was curious about it as another possible way to naturally support domain modelling in Elm.

It was mentioned that Extensible Records were used in the Phantom Builder Pattern, though I’m not sure if they are used specifically for supporting narrowing of Types in the way Richard presents.

Clearly, patterns and reasoning supporting both data and domain modelling in Elm are evolving.

Good point - and Richard used them quite a lot in the rtfeldman/elm-css package. The phantom type is used as an extra restriction on top of the base type; I think in that sense it counts as narrowing (of type 1 by @joelq’s definition, restricting potential inputs to a function). So there is a way to benefit from them in domain modelling.

It’s been a while since I read Scott Wlaschin’s Domain Modelling in F# book - does he describe the phantom builder pattern with extensible record types there? Or some equivalent in F#?

1 Like

Good question - I cannot tell. I’ll have to learn all about the builder pattern in Elm before I can answer that question.

Going back to @nickwalt’s earlier question about a more DDD perspective on “narrower types” in Elm, here are 4 concepts I use all the time. Some of them are well-known guidelines viewed from a more domain-oriented point of view.

Avoiding primitives

Using narrower types in our functions usually means using more domain-specific types and avoiding primitives (OO would call this avoiding “primitive obsession”). One example from my own code is that I avoid using raw numbers and instead create custom types such as Dollar to represent different quantities. I have a whole talk on the topic.
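
A minimal sketch of such a wrapper (illustrative, not the exact code from the talk):

module Dollar exposing (Dollar, fromCents, toCents)

-- Opaque wrapper: a Dollar can no longer be mixed up with any
-- other number in the program.
type Dollar
    = Dollar Int

fromCents : Int -> Dollar
fromCents =
    Dollar

toCents : Dollar -> Int
toCents (Dollar cents) =
    cents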

Restricting operations

Wrapping numbers in custom types is often seen as inconvenient because then you have to re-implement arithmetic operations for that type. This can actually be a feature because in many domains not all arithmetic operations make sense.

For example, multiplying dollars by each other might be an invalid operation, while multiplying distances could be valid (to produce an area). By creating a custom type, you are able to have a “narrower” set of functions available, all of which match the operations valid in your domain.
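
Continuing the hypothetical Dollar module above, the exposed operations can be exactly those the domain permits:

-- Adding money to money makes sense:
add : Dollar -> Dollar -> Dollar
add (Dollar a) (Dollar b) =
    Dollar (a + b)

-- Scaling money by a plain count also makes sense:
scaleBy : Int -> Dollar -> Dollar
scaleBy factor (Dollar cents) =
    Dollar (factor * cents)

-- There is deliberately no multiply : Dollar -> Dollar -> Dollar;
-- "dollars squared" means nothing in this domain, so the narrowed
-- API simply cannot express it.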

Your modeling should match your domain

“Making impossible states impossible” is a common refrain in the Elm community. People often think of it in terms of correctness. I tend to view it as making sure your model (in the DDD sense, i.e. the types) accurately describes your domain.

You can think of your domain and types as sets. If your problem domain has 5 possible values but the types you use to model it have 10, then your model is not a very accurate representation of reality.

Ideally, your domain and the types you use to model it are the same set. A type system like Elm’s or F#’s, which allows you to both AND and OR types together, is really powerful because it lets you reach this goal much more easily than a traditional system that only allows ANDing types.
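
In Elm terms (a small illustrative example): records AND fields together, while custom types OR alternatives together:

-- AND (product): a Point has an x AND a y.
type alias Point =
    { x : Float
    , y : Float
    }

-- OR (sum): a Shape is a Circle OR a Rectangle.
type Shape
    = Circle Float
    | Rectangle Float Float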

Conversions to narrow types

One of the big realizations I had when reading Parse, don’t validate is that parsing isn’t just about strings. Instead, it’s about turning a broader type into a narrower type (with potential errors, since not all values in the broad type can be converted). Parsing functions usually have a signature like:

parse : broaderType -> Result Error narrowerType

This is probably the most concise way of expressing the concept of “type narrowing” as a function signature. This narrowing generally happens near the boundaries of sub-systems (DDD would call these bounded contexts).

One could even transform data in several passes, with the parsed output at each step becoming the raw input of the next step. The types get narrower and narrower at every step and the pipeline acts as a funnel.

For example, a String could get turned into a Json.Decode.Value, which is a narrower type. This in turn might be turned into a UserSubmission value which is narrower still, and finally this could be converted into a User that is narrowest of all.
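
A sketch of two steps of that funnel (the types and validity checks are invented for illustration):

import Json.Decode as Decode

-- Broad: whatever shape the decoder accepts.
type alias UserSubmission =
    { name : String
    , age : Int
    }

-- Narrow and opaque: only validated values can exist.
type User
    = User { name : String, age : Int }

-- Step 1: String -> UserSubmission (going through Json.Decode.Value
-- internally).
decodeSubmission : String -> Result Decode.Error UserSubmission
decodeSubmission =
    Decode.decodeString
        (Decode.map2 UserSubmission
            (Decode.field "name" Decode.string)
            (Decode.field "age" Decode.int)
        )

-- Step 2: UserSubmission -> User, rejecting values that decode but
-- are not valid in the domain.
toUser : UserSubmission -> Result String User
toUser submission =
    if submission.name /= "" && submission.age >= 0 then
        Ok (User submission)

    else
        Err "invalid submission"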

(Image: “funnel-of-parsing” diagram - the types get narrower at each step.)

16 Likes

I’ve expanded some of the ideas above into a series of full-length articles:

13 Likes