This is a conversation starter, prompted by one that arose at work.
I have a tendency to want to narrow parameters as much as possible, having picked the concept up from Richard Feldman's talks (I think his 2017 Elm Europe talk, or Learn Narrowing Types – Advanced Elm). Basically, if a function only takes in what it actually deals with, it is easier to reason about and debug.
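To illustrate what I mean by narrowing, here is a made-up example (not from our codebase):

```elm
module Greeting exposing (greetNarrow, greetWide)

-- Wide: the function receives the whole record even though it only reads name.
greetWide : { name : String, age : Int, email : String } -> String
greetWide person =
    "Hello, " ++ person.name

-- Narrow: the function receives only what it actually uses,
-- so the type signature tells you exactly what it can depend on.
greetNarrow : String -> String
greetNarrow name =
    "Hello, " ++ name
```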
However, what I see in a real-life large codebase is the desire to group functions by entity - a bit like OO modules or Domain-Driven Design - and to use the module's key model as a consistent parameter.
An example might be a person and a set of functions related to that person's data (edit - and sub-data), each taking the person as the only or main parameter.
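A rough sketch of that entity-grouped style (the names and fields are hypothetical):

```elm
module Person exposing (Address, Person, displayName, updateEmail)

type alias Address =
    { street : String
    , city : String
    }

type alias Person =
    { name : String
    , email : String
    , address : Address
    }

-- Every function in the module takes the whole Person,
-- even when it only touches one field or sub-record.
displayName : Person -> String
displayName person =
    person.name

updateEmail : String -> Person -> Person
updateEmail newEmail person =
    { person | email = newEmail }
```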
In an attempt to moderate my first instinct of just passing the minimum, I proposed a sub-entity of the person that I thought was big enough to “pass around” in its own right.
However, that faced the argument that since the sub-entity only exists on the person, abstraction information is lost by breaking it out. Is this “abstraction connection” something of real worth, though?
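Continuing the hypothetical Person module above, the two versions of the same function would look roughly like this:

```elm
-- Sub-entity version: takes Address directly, so it can be reasoned about
-- (and tested) without constructing a whole Person.
formatAddress : Address -> String
formatAddress address =
    address.street ++ ", " ++ address.city

-- Entity-grouped version: keeps the "this address belongs to a person"
-- connection visible, but widens the parameter to the whole Person.
formatPersonAddress : Person -> String
formatPersonAddress person =
    formatAddress person.address
```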
Is it a question of balance and nuance in identifying entities that make sense?
Is it best to group functions by entity, with the entity as the common parameter?
If so, what are some rules of thumb about when and where?
Or is it a valid path to always go to the extreme and narrow functions to only the parameters they need, regardless?