Domain Driven Type Narrowing

Ever since I learned about Domain Driven Design from Scott Wlaschin, as he applies it to type-driven design in ML languages (specifically F#), and about type narrowing from Richard Feldman in his talk Scaling Elm Apps, I’ve wanted to learn more about combining these two approaches into a first-class discipline for developing apps in Elm.

Unfortunately, Richard’s talk only introduces type narrowing and briefly explores the potential for Extensible Records to make it viable - and, strangely, the Elm community hasn’t done much to develop the idea beyond what is presented in the talk.

Domain Driven Design is typically seen as a highly prescriptive structural discipline that is only used at scale, and then only in the context of microservices.

Yet, Scott Wlaschin makes a compelling case for applying the essence of DDD at a significantly smaller scale and, it seems to me, ideal for use in the frontend (particularly applicable to Elm’s SPA remote data and Type focus).

But even here Scott only introduces DDD and it hasn’t been picked up by the Elm community.

So, for those of us that want to reason about Elm code using these two very complementary approaches there just hasn’t been anything to dive into.

I’ve read a few posts and articles that touch on one or the other, but nothing comprehensive and nothing combining the two. On the topic of narrowing types there isn’t a consensus on using Extensible Records as a way to facilitate access. The discussions generally present conflicting opinions, and participants tend to lose sight of what Richard sees as valuable in their use.

It is highly frustrating that type narrowing remains an outlier in Elm, and that it has never been understood or explored as a companion to any form of Domain Driven Design beyond its use to simplify scaling.


If you are an Elm developer who practices type narrowing from the ground up, and/or uses DDD from the ground up, and you have good resources and/or people you follow who have helped you develop these skills and this reasoning, please advise in a comment.

I am also interested in paying for 1-1 instruction on diving into Type narrowing, Extensible Records, and related data modelling.

Do you have any links to these talks, blog posts, etc you’re referring to? I’m interested but not sure I fully understand the context.

Yes, for sure.

Scaling Elm Apps (Richard Feldman):

Domain Driven Design using Types in functional languages (Scott Wlaschin):

Designing with capabilities is also a great talk by Scott:

I’ve skimmed through the talk, and it seems the Elm community is already doing all of those things mentioned by the first F# talk.

Yet, Scott Wlaschin makes a compelling case for applying the essence of DDD at a significantly smaller scale and, it seems to me, ideal for use in the frontend (particularly applicable to Elm’s SPA remote data and Type focus).

I’m not particularly versed in DDD, so I don’t know where the boundaries of this are, where any further research might lead, or what the open questions to explore are. But it seems to me that as a community we’re by and large already doing the things mentioned in the F# talk in Elm “by default”.

Type narrowing with extensible records is nice! Not to mention probably the only usage of extensible records that I’d recommend (read: use extensible records in function arguments, but perhaps not when designing your types - keep those “concrete”). Perhaps that gives me a partial answer to your note:

The thing is, when you start doing stuff like this for your core entities

type alias WithName r = { r | name : Name }
type alias WithAge r = { r | age : Age }
type alias MyUser = WithName (WithAge { email : Email })

instead of just doing

type alias MyUser =
  { name : Name
  , age : Age
  , email : Email
  }
I am not sure it gives you any benefits. I’ve always rather preferred to keep extensible records out of my core types, if only for code readability’s sake. Sorry, I don’t have good words to describe why it’s problematic though.

My main question is: do you have some more specific questions with which we could explore this DDD + types space further?


I also wonder why it is problematic? It is completely valid Elm after all.

I think one problem is that looking at MyUser does not tell you what a user is; you have to go and look at 2 more definitions to understand the whole picture. Writing all the fields out explicitly seems like extra work, but it is also more directly obvious what a user is that way.

I also think that if you have WithName a as a separate concept, it is because you are going to write some functions over names:

capitalizeName : WithName r -> WithName r

If you later change the definition of what a name is, on either User or WithName, the compiler will keep you right if this breaks anything:

type alias MyUser =
  { name : (Name, Name) -- Changed to first name + last name
  , age : Age
  , email : Email
  }
This code will now fail to type check:

myUser : MyUser

capitalizeName myUser

So I think keeping extensible records out of your data modelling still gives you the benefits of type checking, without the indirection that comes from trying to create clever hierarchies of types as in object-oriented programming.
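To make this concrete, here is a sketch of that preferred style (field types simplified to String/Int so the snippet stands alone): the data model stays a concrete alias, while the extensible record appears only in the function signature.

```elm
module User exposing (MyUser, capitalizeName)

-- Concrete alias: the whole shape of a user is visible in one place.
type alias MyUser =
    { name : String
    , age : Int
    , email : String
    }


-- The extensible record lives only in the function signature, narrowing
-- what the function may touch - it is not part of the data model.
capitalizeName : { r | name : String } -> { r | name : String }
capitalizeName record =
    { record
        | name =
            String.toUpper (String.left 1 record.name)
                ++ String.dropLeft 1 record.name
    }
```

If the shape of `name` ever changes in `MyUser`, any call site passing a `MyUser` to `capitalizeName` will fail to compile, exactly as described above.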

I might have misunderstood the question, but narrowing types like Scott talks about can in Elm look something like this:
Instead of…

type alias Name = String

Like, what if someone pastes in a novel of 100k lines? Should your model support those usernames?
No. A better way is to narrow down the type like this:

type Name = Name String

And you have one place in your code that “parses” a String into a Name, applying your domain rules - maybe a name is at most 20 characters in your domain, cannot be the empty string, and so on. This is typically done as part of form validation: the parseToName function returns a Result, which the form validation uses either to give nice feedback to the user or to produce a properly validated Name for your application.
This way the rest of your codebase only works with valid usernames afterwards.
The same goes for many other fields: an email is not a String, an age is not an Int (an age cannot be negative or above 200), and so on.
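A sketch of that parseToName idea as an opaque type (the 20-character limit is just the assumed example rule from above):

```elm
module Name exposing (Name, fromString, toString)

-- Opaque type: the constructor is not exposed, so a Name can only
-- be created through fromString.
type Name
    = Name String


-- Assumed domain rules: non-empty, at most 20 characters.
fromString : String -> Result String Name
fromString raw =
    let
        trimmed =
            String.trim raw
    in
    if String.isEmpty trimmed then
        Err "Name cannot be empty"

    else if String.length trimmed > 20 then
        Err "Name cannot be longer than 20 characters"

    else
        Ok (Name trimmed)


toString : Name -> String
toString (Name s) =
    s
```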


Thanks for the input everyone.

I agree that the Elm community is using all of what Scott is presenting in his talk Domain Modelling made Functional, which is Type Driven Design. However, it feels as though an emphasis on domain modelling is not well established as a first class citizen in Elm’s TypeDD.

The significance of semantics in TypeDD only came to me after reading about DDD’s emphasis on semantic domain modelling. Domain modelling without good semantics is called an ‘anaemic’ model. This made so much sense and later I read this article which includes ‘parse, don’t validate’ in good semantic domain modelling:

Parse, Don’t Validate (by Alexis King) emphasises the need for strong domain modelling without actually stating as much.

My desire to work out how to narrow Types in Elm, in combination with domain modelling using Type Driven Design, came primarily from being introduced to DDD’s conceptual view of data as discrete Entities and Value Objects (Types), and operations on that data as Events (functions).

Domain Entities (and Value Objects) are parsed into existence and have a lifecycle that is significant to the domain. Type narrowing becomes important as a part of the Entity’s lifecycle as it moves through the app’s function composition/pipelines.

After re-watching Richard’s talk Scaling Elm Apps, I was still trying to understand how to combine Custom Types - as Atlewee describes - with Type Aliases and Extensible Records in order to make narrowing types a really efficient and effective way to implement domain Entities and Value Objects.

Every time I watch the video it seems like Richard has given incomplete guidance on Type narrowing and I get to the end still expecting clarification on exactly how it all works with Custom Types, Aliases and Extensible Records.


A while back I adapted the example from Domain Modelling Made Functional into Elm (was also an experiment to see what using Elm as a “Domain Brain” on the backend might look like).

On a previous project I worked on that book was a goldmine. And because the examples and concepts translate so easily into Elm, as a team we felt we had all the resources we needed.


Part of the confusion here may be due to “narrowing types” being used in two different ways:

  1. A “narrower type” restricts potential inputs to a function
  2. A “narrower type” restricts the potential implementations of a function

Restricting inputs is typically done by having a more specific type via “parse; don’t validate”, smart constructors, etc. For example, instead of accepting a generic string you might want an Email. This limits what can be passed into your function, but you can safely make some assumptions you couldn’t make before.

On the other hand, restricting potential implementations is typically done by having a more generic type via some form of polymorphism. For example, a function that operates on a List a can’t do any string operations on the items because the implementation must work for any list.

Richard’s “Scaling Elm Apps” talk jumps between the two concepts. The first implementation passing a concrete Address type is restricting the potential inputs by passing in a type with fewer fields. The second implementation passing the Address r extensible records is restricting potential implementations by leaning on row polymorphism. Note that this actually broadens the potential inputs since all sorts of records can be passed in as long as they have the minimum fields.
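A small sketch of the two styles side by side (the field names here are assumed, not the exact ones from the talk):

```elm
-- Narrowing inputs: only a record with exactly these fields is accepted.
type alias Address =
    { street : String
    , city : String
    }


viewAddress : Address -> String
viewAddress addr =
    addr.street ++ ", " ++ addr.city


-- Narrowing implementations: any record with at least these fields is
-- accepted, and the function can only touch street and city.
type alias AddressLike r =
    { r | street : String, city : String }


viewAddressLike : AddressLike r -> String
viewAddressLike addr =
    addr.street ++ ", " ++ addr.city
```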

:nerd_face: Time for some opinions! IMO: :nerd_face:

  1. “narrowing types” should probably only apply to the first definition and the second definition should probably be referred to as “narrowing possible implementations” or “genericizing the type”.
  2. it’s more helpful to think of extensible records as an “interface” or a form of polymorphism rather than as a domain modeling tool.

It’s also worth noting we can combine the two ideas of more concrete domain types and polymorphism in functions to express the rules of our system.

Consider attempting to convert a currency to another. We might have something like:

convert : Float -> Float -> Float
convert rate currency =
    rate * currency

There are a lot of ways things could go wrong here (argument order errors, passing bad values like a number of seconds, having the wrong exchange rate or wrong currencies, etc). Contrast that with the following:

convert : Exchange from to -> Currency from -> Currency to
convert rate currency =

This introduces some domain types Exchange and Currency. The convert function uses these and leverages polymorphism to express that any currency can be converted as long as you have the proper exchange rate.

Example taken from Modeling Currency in Elm with Phantom Types
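For readers who haven’t seen that post, a minimal sketch of how such phantom types might be declared (type and tag names assumed):

```elm
-- Phantom type parameters: `currency`, `from`, and `to` never appear on
-- the right-hand side; they exist only to tag values at the type level.
type Currency currency
    = Currency Float


type Exchange from to
    = Exchange Float


-- Tag types carrying no runtime data of their own.
type USD
    = USD


type EUR
    = EUR


usdToEur : Exchange USD EUR
usdToEur =
    Exchange 0.9


-- The compiler now rejects converting with the wrong exchange rate.
convert : Exchange from to -> Currency from -> Currency to
convert (Exchange rate) (Currency amount) =
    Currency (rate * amount)
```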

@andrewMacmurray @joelq
Thank you!

Indeed, my interest in Extensible Records is due to Richard’s indication that they can be used to simplify Type narrowing implemented primarily by wrapping data in Custom Types.

I got the impression that Extensible Records are the mechanism that allows functions to directly access data in the Model that have been wrapped (narrowed) with Custom Types.

This Extensible Records mechanism, in combination with Custom Types, is still not clear to me when trying to understand it as a fundamental pattern.

My appreciation of DDD is for its emphasis on establishing domain design as first class in the process of software development and for it as an overall vehicle for reasoning. More so than the prescriptive nature of the discipline.

So, is there a clear pattern for using Custom Types as the basis for narrowing and Entity creation (in a flat Model), and Extensible Records as the mechanism for direct access to these without the need for cumbersome hierarchical dot notation (if I understand Richard’s video correctly)?

Can we develop this narrowing into more than a motivation to simplify scaling? In to a fully conscious and well established pattern supporting Domain Driven Type Design?

Richard uses extensible records to help keep the model flat, but this has nothing to do with custom types. It’s like @joelq described: the extensible records restrict the function implementations over such a flat model; no custom types are involved in this at all.

Custom types used as opaque types to wrap data that has passed some parsing/validation will actually make the model less flat, since you always have the constructor to unwrap and re-wrap. For example:

type Velocity = Velocity Float

{-| Speed up by a factor. -}
accelerate : Float -> Velocity -> Velocity
accelerate factor (Velocity v) = -- Had to unwrap it.
    Velocity (v * factor) -- Had to re-wrap it.

So the two concepts do not work together to make flat models; they are complementary ideas. I suggest using extensible records to help work with a larger, flatter model at the top level, while fields within that model can make use of custom types to enforce correctness and invariants of the domain model.
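Putting the two together might look something like this sketch (all names assumed):

```elm
-- A flat top-level model whose fields use opaque custom types.
type alias Model =
    { velocity : Velocity
    , name : Name
    , score : Int
    }


type Velocity
    = Velocity Float


type Name
    = Name String


-- The extensible record narrows access: this function can read and
-- write velocity, but cannot touch name or score.
applyBrakes : { r | velocity : Velocity } -> { r | velocity : Velocity }
applyBrakes model =
    let
        (Velocity v) =
            model.velocity
    in
    { model | velocity = Velocity (v * 0.5) }
```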


Maybe a little tangential, but if you are interested in domain modelling in Elm, there is this intriguing package: morphir-elm 12.0.0

@Janiczek and @rupert I think that any infelicities to do with using extensible records to construct top-level types are entirely syntactic. The nested parens and random type variables sitting around look bad, and users are constantly confronted with the issue of “should I make this type alias extensible or concrete?” where the answer isn’t really obvious.

Imagine if record types could be appended together with some sort of type-level syntactic operator and the | syntax was completely obsoleted. In particular I think the crucial point is if you could extend a concrete type alias after it was defined, instead of having to decide when a type alias was created whether it was extensible or not, things would be so much nicer and you could confidently say “type aliases should always be concrete and never extensible.”

type alias BasicUser = { name : Name, age : Age }

type alias DeliveryInfo = { address : Address, preferredTime : Time }

type alias PackageInfo = { weight : Weight, numberOfItems : Int }

type alias Recipient = BasicUser ++ DeliveryInfo ++ PackageInfo

processPackage : PackageInfo ++ a -> ProcessedResult
processPackage = ...

-- Maybe you want to write out the Recipient fields in a long-form manner.
type alias RecipientLongForm =
    { name : Name
    , age : Age
    , address : Address
    , preferredTime : Time
    , weight : Weight
    , numberOfItems : Int
    }

-- You can have the compiler check that RecipientLongForm stays up-to-date
-- with Recipient!
recipientAndRecipientLongFormAreSameProof : Recipient -> RecipientLongForm
recipientAndRecipientLongFormAreSameProof = identity

Let’s say I want to create functions that have extensible inputs. processPackage was one but we can do all sorts of others.

processUser : BasicUser ++ a -> ProcessedResult
processUser = ...

And the refactoring process if you ever decide you want a concrete record input to be extensible is trivial. You don’t have to go around changing top-level type alias definitions.

-- Whoops I forgot I want to make this extensible!
-- Currently I either have to inline BasicUser or change BasicUser into an extensible type alias
-- Here I can just add a `++ a`
oldProcessUser : BasicUser -> ProcessedResult
oldProcessUser = ...

And this is really purely a syntactic change. There is a mechanical (but tedious) translation from all of these examples to the current | syntax.

I think this would also go a long long way to helping users get in the habit of making flat models instead of deeply nested ones. The problem is that when users create concrete type aliases, the only way of combining them is to nest them inside larger records, and people generally don’t reach for an extensible type alias right away. If you instead make it easy and painless to combine concrete type aliases “horizontally” that problem goes away and making wide, flat models becomes just as easy and as organized as making narrow, nested models.
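The `++` operator above is hypothetical; here is a sketch of what the mechanical translation to today’s `|` syntax looks like (field types simplified to primitives so the snippet stands alone):

```elm
-- Each alias must be declared extensible up front, which is exactly
-- the decision burden the proposal would remove.
type alias BasicUser r =
    { r | name : String, age : Int }


type alias DeliveryInfo r =
    { r | address : String, preferredTime : Int }


type alias PackageInfo r =
    { r | weight : Float, numberOfItems : Int }


-- `BasicUser ++ DeliveryInfo ++ PackageInfo` becomes nested application,
-- closed off with the empty record.
type alias Recipient =
    BasicUser (DeliveryInfo (PackageInfo {}))


-- `PackageInfo ++ a` becomes `PackageInfo r`.
processPackage : PackageInfo r -> Bool
processPackage info =
    info.numberOfItems > 0
```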

Thanks @rupert

This quote describes exactly the techniques I’m hoping to develop into an implementation for domain-driven type modelling in Elm.

Type narrowing in function signatures can be seen as an expression of the Principle Of Least Authority (POLA, as discussed in Scott Wlaschin’s talk titled “Designing with Capabilities”) and also seems to be an aspect of less-is-more in good design distillation.

Type narrowing in function signatures can also be seen as an expression of good domain modelling.

Type narrowing in function signatures restricts function access to a wide-open, flat Model - which is another way of saying it is a great technique to help reason about a broad and complex domain.

What type narrowing also seems to do is provide a nice way to reason about data as domain entities flowing through function composition.

The complication in all this is that using Custom Types seems to introduce some form of nesting into the Model, which reduces ease of access, and I can’t find a good breakdown of how Extensible Records actually do what they do to get around this.

Richard touches on this in his talk but there is something missing. That’s partly why this thread exists.

The complication in all this is that using Custom Types seems to introduce some form of nesting into the Model, which reduces ease of access, and I can’t find a good breakdown of how Extensible Records actually do what they do to get around this.

Extensible records do precisely nothing with regards to custom types. They are two unrelated concepts.

Striving for a flat model seems incredibly antithetical to good data modelling. The world isn’t flat, after all.


Do you have a concrete example domain you’re working on / thinking of? Maybe it would be interesting to post it and see if we can collectively refactor it?

These two points illustrate the problem well, I think.

What I’m looking for in all this is to find an approach to domain modelling in Elm that, like DDD, places the domain very deliberately in the driver’s seat.

The principal aim should be to create an accurate data model, one that faithfully represents the domain, and one that makes illegal states impossible.

The flatness is always secondary to this aim, and really based on the observation that putting a record inside a record is essentially the same thing as just putting all the fields in a single record. For example:

type alias Model = 
    { pageModel : Page.Model
    , username : Username
    }

-- In the Page module
type alias Model = 
    { title : String 
    , data : Array Float
    }

The overall Model is a product of Model and Page.Model, which can be represented by the equivalent flattened version, with no loss of precision:

type alias Model = 
    { username : Username
    , title : String 
    , data : Array Float
    }

Of course if the Page were a custom type with many variants, or a custom type used to enforce some invariant in the domain, I would not pull all the fields together into the top level module, since that would represent a different data model, and one with many illegal states too.

So I think insisting on keeping things as flat as possible at all costs is antithetical to good data modelling. But this is not the aim - the aim is to prefer flatness where possible because it is more convenient to work with, and avoids the use of a nested TEA structure as much as possible, which is frequently adopted by those coming from an OO background and thinking in terms of components.


A situation where you could flatten things but doing so might make the data model worse:

type alias Pos = 
    { x : Float
    , y : Float
    }

type alias Model = 
    { mousePos : Pos
    , firstClickPos : Pos
    }

This is an equivalent flat version but maybe not better:

type alias Model = 
    { mousePosX : Float
    , mousePosY : Float
    , firstClickPosX : Float
    , firstClickPosY : Float
    }

Now I cannot write functions over Pos that can be made use of in multiple situations. It was beneficial to recognize Pos as a re-usable concept.
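For example, the kind of reusable function that Pos enables (a sketch, with an extensible record narrowing the model access):

```elm
type alias Pos =
    { x : Float
    , y : Float
    }


-- Recognizing Pos as a concept lets us write functions over it once.
distance : Pos -> Pos -> Float
distance a b =
    sqrt ((a.x - b.x) ^ 2 + (a.y - b.y) ^ 2)


-- Works for any model that has both positions:
dragDistance : { r | mousePos : Pos, firstClickPos : Pos } -> Float
dragDistance model =
    distance model.firstClickPos model.mousePos
```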