Json2elm: generate elm/json decoders & encoders from a JSON sample

I have published the initial (minimal) release of json2elm. You can use it to generate elm/json decoders & encoders (+ type definitions) from a JSON sample. It supports nested objects and arrays, but doesn’t yet de-duplicate the types generated from array items.
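For example, given a sample like:

{
    "name": "Alice",
    "score": 3.5,
    "tags": ["a", "b"]
}

the output looks roughly like the sketch below (the names Root, rootDecoder and encodeRoot are just illustrative, and the imports are included here for completeness even though the tool doesn’t generate them yet):

import Json.Decode
import Json.Encode


type alias Root =
    { name : String
    , score : Float
    , tags : List String
    }


rootDecoder : Json.Decode.Decoder Root
rootDecoder =
    Json.Decode.map3 Root
        (Json.Decode.field "name" Json.Decode.string)
        (Json.Decode.field "score" Json.Decode.float)
        (Json.Decode.field "tags" (Json.Decode.list Json.Decode.string))


encodeRoot : Root -> Json.Encode.Value
encodeRoot root =
    Json.Encode.object
        [ ( "name", Json.Encode.string root.name )
        , ( "score", Json.Encode.float root.score )
        , ( "tags", Json.Encode.list Json.Encode.string root.tags )
        ]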

I’m keen to get feedback on it. In particular, some questions I’m considering:

  • Should it detect integers? Currently it treats all numbers as Float.
  • Should it generate imports as well (this will perhaps become more relevant with more generation options, such as pipeline decoders)?
  • Should it support miniBill/elm-codec?

There are things I’m already considering for future development:

  • Adding settings to customise imports (eg. to generate Decode.string instead of Json.Decode.string)
  • Adding an option for applicative style decoders (About the Ergonomics of Applicative JSON Decoding - #19 by dillonkearns)
  • Adding pipeline decoders (see the sketch after this list)
  • An elm-review rule to convert a string into decoders/encoders, similarly to dillonkearns/elm-review-html-to-elm
  • A CLI to complement the web UI
  • De-duplicating type definitions (particularly from array items of the same type, which is very likely in copy-pasted JSON samples)
  • Using JSON schemas as input in addition to regular JSON documents.
  • Getting a JSON sample from a URL.
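To make the pipeline idea concrete, here is roughly what a pipeline-style decoder for the Root example above could look like. This is a sketch using NoRedInk/elm-json-decode-pipeline, not something json2elm generates today; the aliased Decode import also hints at the customisable-imports idea:

import Json.Decode as Decode exposing (Decoder)
import Json.Decode.Pipeline exposing (required)


type alias Root =
    { name : String
    , score : Float
    , tags : List String
    }


rootDecoder : Decoder Root
rootDecoder =
    Decode.succeed Root
        |> required "name" Decode.string
        |> required "score" Decode.float
        |> required "tags" (Decode.list Decode.string)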

Please let me know if any of these are of particular interest.


I’ve added support for integers, so that question is resolved at least :grinning:


Very cool! JSON decoders are one of the harder things when learning Elm, so having tools to help out with this is great!

I imagine that you should treat all numbers as float by default, since you can’t infer from a JSON sample whether the data at that specific location will always be an integer. Users can always change it to an integer using their domain knowledge.

There’s always going to be ambiguity when generating Elm code from a single JSON sample, so I decided to add integer support. I think the easiest thing is for people to tweak their sample to reflect the expected data types.

I don’t know if you imagined a different use case, but the way I imagine this tool being used is that someone pastes JSON that they get from an HTTP response without inspecting it too much, generates a decoder, and then starts working with the data in Elm. For this scenario to work best, the generated types should be as permissive as possible, and the user can then change the types and decoders to be more restrictive when necessary.

For instance, this tool will not generate custom types for strings because it doesn’t know what specific strings will be allowed, and will generate a Decoder String. The user can then change it to use a custom type and custom type decoder when they get to that stage.
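For instance, if a field only ever contains a handful of known strings, the user could later refine the generated Decode.string by hand, roughly like this (the Status type and its values are made up for illustration):

import Json.Decode as Decode exposing (Decoder)


type Status
    = Active
    | Archived


statusDecoder : Decoder Status
statusDecoder =
    Decode.string
        |> Decode.andThen
            (\str ->
                case str of
                    "active" ->
                        Decode.succeed Active

                    "archived" ->
                        Decode.succeed Archived

                    other ->
                        Decode.fail ("Unknown status: " ++ other)
            )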

If integers are used by default, then there’s a much higher risk of the decoder failing, which would be hard to fix for someone who’s still learning about JSON decoders. That’s why I think you should use floats; users can later edit the types/decoders once they learn more about the domain and/or decoders.

I agree wrt strings and other things like that, but I think integers are a lot more common than floats (eg. IDs, image dimensions, colour components, counts, street numbers etc.) so I’d like to have them work sensibly out of the box. Floats that look like floats in the sample will still be turned into Float.

As for the use case, I think of this tool as a convenience rather than a replacement for understanding decoders and encoders. Understanding how these things work is still required, eg. you’d still need to modify the code to allow optional fields, or to decode objects from different places in the sample to the same type, or to support recursive nesting – all fairly common situations I imagine. I had a chat with Dillon about it, and we agreed that this is meant to be a scaffolding tool that gives you some code which you can then tweak to your specific requirements in your IDE with autocomplete, tests, hot reloading etc. So this is the stage where you’d adjust between Int and Float.
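Recursive nesting is a good example of something a sample-based tool can’t infer, so you end up hand-writing something along these lines (the Category type is made up for illustration):

import Json.Decode as Decode exposing (Decoder)


-- A recursive structure needs a custom type (a type alias can't refer to
-- itself) and Decode.lazy to break the cycle.
type Category
    = Category { name : String, children : List Category }


categoryDecoder : Decoder Category
categoryDecoder =
    Decode.map2 (\name children -> Category { name = name, children = children })
        (Decode.field "name" Decode.string)
        (Decode.field "children" (Decode.list (Decode.lazy (\_ -> categoryDecoder))))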


I think it is a super cool tool.
And your suggested improvements are useful, too.

What do you think about some kind of directives, similar to Go’s JSON marshaling struct tags?
Something like:

{
	"name": "John Doe", //  `elm:"optional"`
	"email": "j.doe@unknown.name", // `elm:"mandatory"`
	"authtoken": "sfdsdasda234esaqdasd", // `elm:"maybe,mandatory"`
}

I.e. a “maybe” directive could decode to a Maybe String (a sample string value would still be needed to guess the type, or more complex directives could be added).
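In Elm terms, I imagine the “optional”/“maybe” directives above would map to something like Decode.maybe, roughly as in this sketch (the User type and fields are just for illustration):

import Json.Decode as Decode exposing (Decoder)


type alias User =
    { name : Maybe String -- `elm:"optional"`
    , email : String -- `elm:"mandatory"`
    }


userDecoder : Decoder User
userDecoder =
    Decode.map2 User
        (Decode.maybe (Decode.field "name" Decode.string))
        (Decode.field "email" Decode.string)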

Thanks for the feedback. I don’t think that I’ll add such directives, for a few reasons:

  • Instead of people having to learn these directives, they might as well just learn the appropriate decoder/encoder functions
  • The tool cannot be a replacement for understanding how decoders/encoders work, which is another reason people have to learn the functions
  • Annotating JSON isn’t too far off in terms of effort from just changing the generated code to your needs
  • Finally, these annotations make the JSON invalid which would significantly complicate the project – I’d have to write my own parser, which I’m not keen on.

Add my vote for pipeline decoders.

Looking forward to seeing this evolve.

P.S. Do you have a link to the source code for json2elm?

I haven’t published the source code yet.
