This approach is simple and easy to understand, but introduces a fair bit of coupling: each module is now tied to a particular data format (JSON), and would pick up third-party dependencies if the pattern were extended to things like fuzzers.
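For reference, Option 1 keeps each encoder next to its type, roughly like this sketch (module and function names are illustrative); the Json.Encode import is exactly the coupling in question:

module Point3d exposing (Point3d, encode)

import Json.Encode exposing (Value)

encode : Point3d -> Value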
Option 2: In their own modules
module Geometry.Encode exposing (point3d, triangle3d)

import Json.Encode exposing (Value)

point3d : Point3d -> Value
triangle3d : Triangle3d -> Value
This approach is a bit more complex and the functionality is arguably somewhat buried, but it decouples things nicely. These modules could live in their own separate packages with their own dependencies if necessary, which would make it easy to add something like a Geometry.Fuzz module later.
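A Geometry.Decode counterpart would mirror this. As a sketch (assuming points are encoded as three-element arrays and that the package exposes a fromCoordinates-style constructor - substitute whatever constructor actually exists):

module Geometry.Decode exposing (point3d)

import Json.Decode as Decode exposing (Decoder)
import OpenSolid.Point3d as Point3d exposing (Point3d)

-- Decode a point encoded as [ x, y, z ]
-- (Point3d.fromCoordinates is assumed here)
point3d : Decoder Point3d
point3d =
    Decode.map3 (\x y z -> Point3d.fromCoordinates ( x, y, z ))
        (Decode.index 0 Decode.float)
        (Decode.index 1 Decode.float)
        (Decode.index 2 Decode.float)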
In most cases I go with Option 1 and place them in the same module as the type. The only other thing I've done is create an Encoder module that has every encoder in it, a matching Decoder module, and so on.
Definitely option 1. Also, I include the type that's getting encoded/decoded in that module as well.
It's impossible to encode a type without also having access to that type, so in practice the encoder and the type itself will always live in the same space. That sets the foundation for type-centric modules (type X, the X decoder, the X encoder, and the X helpers, all together in one module), which naturally puts encoders and decoders side by side.
I haven't done option 2, so I can't speak from experience. But I would think stuff should live together if it gets used together: that way, for any particular part of your code, you can see it and also see its related parts nearby. That doesn't seem to be what option 2 encourages, since for any given X and Y, I couldn't assume that the X encoder and the Y encoder get used together.
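To make the type-centric idea concrete, here's a rough sketch (the Position type, its field names and the moveBy helper are all made up for illustration):

module Position exposing (Position, decoder, encode, moveBy)

import Json.Decode as Decode exposing (Decoder)
import Json.Encode as Encode exposing (Value)

-- The type, its codecs and its helpers all live in one module
type alias Position =
    { x : Float, y : Float }

encode : Position -> Value
encode position =
    Encode.object
        [ ( "x", Encode.float position.x )
        , ( "y", Encode.float position.y )
        ]

decoder : Decoder Position
decoder =
    Decode.map2 Position
        (Decode.field "x" Decode.float)
        (Decode.field "y" Decode.float)

-- An example "helper" that has nothing to do with JSON
moveBy : Float -> Float -> Position -> Position
moveBy dx dy position =
    { position | x = position.x + dx, y = position.y + dy }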
True, but that doesn't actually mean the encoders have to be in the same module as the type (unless encoding relies on private implementation details). In general it should be possible to implement encoders in a separate module in the same package, or even in a separate package that depends on the package defining the type.
I prefer #1. I often find that I need to write several helper functions for a particular type - e.g. encode, decode, a function to update an attribute, a function to render an attribute as a human-readable label, etc. Having all of these in a module for that type makes them easier to find and groups them nicely.
We started with option 2 in our app, and the code base felt much less organised that way - e.g. we didn't know where to put an associated function for a type when it didn't fit into the Encoders or Decoders modules we had.
In case you go for (1), you could still put your fuzzers inside a dedicated module in tests. By adding ../src to the source-directories of your tests' elm-package.json, you still have access even to private modules. The issue then is that you can't access them in the tests of other packages built on top of yours.
It would be nice to have a way to declare modules that are exposed as test dependencies.
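For example, a Geometry.Fuzz module living in tests/ might look roughly like this (a sketch assuming elm-test's Fuzz module and a fromCoordinates-style constructor on Point3d - adapt to whatever your package actually exposes):

module Geometry.Fuzz exposing (point3d)

import Fuzz exposing (Fuzzer)
import OpenSolid.Point3d as Point3d exposing (Point3d)

-- Build a Point3d fuzzer from three independent float fuzzers
-- (Point3d.fromCoordinates is assumed here)
point3d : Fuzzer Point3d
point3d =
    Fuzz.map3 (\x y z -> Point3d.fromCoordinates ( x, y, z ))
        Fuzz.float
        Fuzz.float
        Fuzz.float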
Yet we have Json.Encode.list instead of List.encode, so it can probably make sense at times to pack your encoders/decoders together.
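For example (using the Elm 0.18 signature, where Json.Encode.list takes a List Value; encodeScores is a made-up name):

import Json.Encode as Encode exposing (Value)

-- The list encoder lives in Json.Encode, not in List
encodeScores : List Float -> Value
encodeScores scores =
    Encode.list (List.map Encode.float scores)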
The encoders and decoders for built-in types live in the Json modules so that there will be less upgrade friction if the Json modules are ever moved into a separate package. I think this particular sort of API planning strategy factors in less when writing modules for an application.
@luke That makes sense. It could also be interpreted as building modules around protocols. And I guess (let me know, Ian, if I'm wrong) Ian is probably thinking of adding multiple protocols at some point, like Protobuf, since 3D data tends to grow fast and JSON might not be appropriate for some related projects.
That's a very interesting question, thanks Ian for asking!
I think option 2 makes more sense here. In my mind opensolid/geometry is a package about geometry - I expect the types and functions to be about doing geometrical operations. Encoding/decoding/generating/fuzzing/etc. are separate concerns that will often be depended on from elsewhere.
Thanks all for the thoughtful responses! I should have mentioned up front that I was personally interested in library design - I think that for an app, putting encoders/decoders/generators with the type probably is the clear choice. But for packages where you have to worry a bit more about how many dependencies you pull in, I don't think it's quite as obvious.
All that said, after some discussions on Slack (thanks @ilias!) it looks like the answer for now might be "just leave JSON support out and see if it's actually needed". (If anyone is currently relying on the encoding/decoding functionality built into opensolid/geometry right now and is keen to see that continue being supported in the upcoming ianmackenzie/elm-geometry, let me know!)
In the last step of a previous user study (shortened demo with 6 steps here: OTIS - User Study), I'm converting all annotations to JSON and sending them to my server. For simplicity, I was using the encoders already baked into opensolid/geometry. Building my own encoders is quite easy though, so I could have gone that way - and probably should have, e.g. encoding integers instead of floats.
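Something like this hypothetical helper is all it would have taken:

import Json.Encode as Encode exposing (Value)

-- Encode screen coordinates as integers instead of floats
-- (hypothetical helper: rounds before encoding)
encodePixel : ( Float, Float ) -> Value
encodePixel ( x, y ) =
    Encode.object
        [ ( "x", Encode.int (round x) )
        , ( "y", Encode.int (round y) )
        ]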
So I'm good with the removal of JSON support from elm-geometry, but it could save some time to have ready-to-use JSON interop elsewhere, like in an elm-geometry-interop or similar package.