I have published the initial (minimal) release of json2elm. You can use it to generate elm/json decoders & encoders (+ type definitions) from a JSON sample. It supports nested objects and arrays, but doesn’t de-duplicate array items for now.
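To give a flavour of what it does, here's a rough sketch of the kind of code you'd get from a sample like {"name": "Alice", "score": 4.5} (the names and the exact output shape are illustrative, and the imports are only included to keep the sketch self-contained – json2elm doesn't generate them yet, see below):

```elm
import Json.Decode
import Json.Encode


type alias Root =
    { name : String
    , score : Float
    }


rootDecoder : Json.Decode.Decoder Root
rootDecoder =
    Json.Decode.map2 Root
        (Json.Decode.field "name" Json.Decode.string)
        (Json.Decode.field "score" Json.Decode.float)


encodeRoot : Root -> Json.Encode.Value
encodeRoot root =
    Json.Encode.object
        [ ( "name", Json.Encode.string root.name )
        , ( "score", Json.Encode.float root.score )
        ]
```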
I’m keen to get feedback on it. In particular, some questions I’m considering:
Should it detect integers? Currently it treats all numbers as Float.
Should it generate imports as well? This will perhaps become more relevant as more generation options are added, such as pipeline decoders (see the sketch after these questions).
Should it support miniBill/elm-codec?
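To illustrate the last two questions: pipeline decoders rely on an extra package (NoRedInk/elm-json-decode-pipeline), which is one reason generating imports would become more relevant. A hypothetical example of that style:

```elm
import Json.Decode as Decode
import Json.Decode.Pipeline exposing (required)


type alias User =
    { name : String
    , score : Float
    }


userDecoder : Decode.Decoder User
userDecoder =
    Decode.succeed User
        |> required "name" Decode.string
        |> required "score" Decode.float
```

And miniBill/elm-codec would replace the separate decoder/encoder pair with a single Codec value, roughly:

```elm
import Codec exposing (Codec)


userCodec : Codec User
userCodec =
    Codec.object User
        |> Codec.field "name" .name Codec.string
        |> Codec.field "score" .score Codec.float
        |> Codec.buildObject
```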
There are things I’m already considering for future development:
Adding settings to customise imports (e.g. to generate Decode.string instead of Json.Decode.string)
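In other words, with an aliased import the generated calls would shorten; a sketch with a hypothetical field:

```elm
import Json.Decode as Decode


nameDecoder : Decode.Decoder String
nameDecoder =
    -- currently generated as: Json.Decode.field "name" Json.Decode.string
    Decode.field "name" Decode.string
```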
Very cool! JSON decoders are one of the harder things when learning Elm, so having some tools to help out with this is great!
I imagine that you should treat all numbers as Float by default, since you can’t infer from a JSON sample whether the data at that specific location will always be an integer. Users can always change it to Int using their domain knowledge.
There’s always going to be ambiguity when generating Elm code from a single JSON sample, so I decided to add integer support anyway. I think the easiest thing is for people to tweak their sample to reflect the expected data types (e.g. writing 5.0 instead of 5 for a field that can hold non-integer values).
I don’t know if you imagined a different use case, but the way I imagine this tool being used is that someone pastes JSON that they get from an HTTP response without inspecting it too much, generates a decoder, and then starts working with the data in Elm. For this scenario to work best, the generated decoder should be as permissive as possible, and then the user can change the types and the decoders to be more restrictive when necessary.
For instance, this tool will not generate custom types for strings, because it doesn’t know which specific strings will be allowed; it will generate a Decoder String instead. The user can then change it to a custom type and a custom type decoder when they get to that stage.
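For example, if a hypothetical "status" field is only ever one of a couple of known strings, the generated Decoder String could later be tightened by hand into something like:

```elm
import Json.Decode as Decode


type Status
    = Active
    | Inactive


statusDecoder : Decode.Decoder Status
statusDecoder =
    Decode.string
        |> Decode.andThen
            (\value ->
                case value of
                    "active" ->
                        Decode.succeed Active

                    "inactive" ->
                        Decode.succeed Inactive

                    other ->
                        Decode.fail ("Unknown status: " ++ other)
            )
```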
If integers are used by default, then there’s a much higher risk of the decoder failing, which would be hard to fix for someone who’s still learning about JSON decoders. That’s why I think you should use Floats; users can edit the types/decoders later, once they learn more about the domain and/or decoders.
I agree with regard to strings and other things like that, but I think integers are a lot more common than floats (e.g. IDs, image dimensions, colour components, counts, street numbers etc.), so I’d like them to work sensibly out of the box. Numbers that look like floats in the sample will still be turned into Float.
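So, to sketch the intended behaviour with a hypothetical sample {"id": 5, "ratio": 0.25}:

```elm
import Json.Decode


type alias Item =
    { id : Int -- 5 has no fractional part, so it's detected as Int
    , ratio : Float -- 0.25 looks like a float, so it stays Float
    }


itemDecoder : Json.Decode.Decoder Item
itemDecoder =
    Json.Decode.map2 Item
        (Json.Decode.field "id" Json.Decode.int)
        (Json.Decode.field "ratio" Json.Decode.float)
```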
As for the use case, I think of this tool as a convenience rather than a replacement for understanding decoders and encoders. Understanding how these things work is still required: e.g. you’d still need to modify the code to allow optional fields, to decode objects from different places in the sample into the same type, or to support recursive nesting – all fairly common situations, I imagine.

I had a chat with Dillon about it, and we agreed that this is meant to be a scaffolding tool that gives you some code which you can then tweak to your specific requirements in your IDE, with autocomplete, tests, hot reloading etc. So this is the stage where you’d adjust between Int and Float.
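For instance, relaxing a generated required field into an optional one, or adding recursion by hand, are the kinds of tweaks I have in mind (the field and type names here are hypothetical):

```elm
import Json.Decode as Decode


-- an optional field: succeeds with Nothing when "nickname" is absent or not a string
nicknameDecoder : Decode.Decoder (Maybe String)
nicknameDecoder =
    Decode.maybe (Decode.field "nickname" Decode.string)


-- recursive nesting: Decode.lazy breaks the circular reference
type Comment
    = Comment { text : String, replies : List Comment }


commentDecoder : Decode.Decoder Comment
commentDecoder =
    Decode.map2 (\text replies -> Comment { text = text, replies = replies })
        (Decode.field "text" Decode.string)
        (Decode.field "replies" (Decode.list (Decode.lazy (\_ -> commentDecoder))))
```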
Thanks for the feedback. I don’t think that I’ll add such directives, for a few reasons:
Instead of people having to learn these directives, they might as well just learn the appropriate decoder/encoder functions
The tool cannot be a replacement for understanding how decoders/encoders work, which is another reason people have to learn the functions
Annotating the JSON isn’t too far off, in terms of effort, from just changing the generated code to suit your needs
Finally, these annotations would make the JSON invalid (e.g. something like "count": 5 // int would be rejected by any standard JSON parser), which would significantly complicate the project: I’d have to write my own parser, which I’m not keen on.