[Help] - Generating Decoders Based Off TypeScript Types

Hey friends,

I’m looking for advice on how to go about programmatically generating decoders based on a back-end written in TypeScript.

I have a thoroughly well-typed HTTP server written in TypeScript that has two functions that would allow me to generate decoders:

Here I define a publicly accessible route with a type argument <T> that specifies what the server will respond with if all goes well.

Here I define a private/protected route that does the same thing as the aforementioned.

Here is an example of their usage.
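To make the discussion concrete, here is a rough sketch of what that setup might look like. Every name in it (publicRoute, protectedRoute, User) is hypothetical and only illustrates the shape being described:

```typescript
// A rough, hypothetical sketch of the setup described above. None of
// these names (publicRoute, protectedRoute, User) come from the actual
// server code; they only illustrate the shape of the two functions.
type User = { id: number; name: string };

// The type argument <T> pins down what the route responds with on success,
// which is exactly the information a decoder generator would key off of.
function publicRoute<T>(path: string, handler: () => T): { path: string; handler: () => T } {
  return { path, handler };
}

function protectedRoute<T>(path: string, handler: () => T): { path: string; handler: () => T } {
  return { path, handler };
}

// Example usage:
const getUser = publicRoute<User>("/api/user", () => ({ id: 1, name: "Alice" }));
const deleteUser = protectedRoute<{ ok: boolean }>("/api/user/delete", () => ({ ok: true }));
```

The point is that <T> carries everything a code generator would need to emit a matching decoder for each route.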

My Current Plan (Seeking Thoughts / Feedback Here):

I have never really worked with ASTs, so I may be way off the mark here …

I haven’t announced it officially yet, but I’m working on a project that may work nicely with this use case! I’ll give a preview of it here in case it might be a good fit.


I’m looking for advice on how to go about programmatically generating decoders based on a back-end written in TypeScript.

I’ve thought about the ideal approach to this sort of thing a lot. If you have a type in one space, what’s the best way to transfer that typed data between runtimes/languages? GraphQL, for example, has a way to describe a typed, fairly expressive serialization format. So you translate from TypeScript, Haskell, Rust, JavaScript, Postgres, or other languages into this GraphQL serialization format. Then on the client, you have that well-typed data.

If you try to consume arbitrary TypeScript types, then you also need to define some sort of intermediary serialization format, since there are some TypeScript types that cannot be directly serialized (functions, classes, etc.).
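For instance (a small illustration, with made-up names), JSON round-tripping silently loses anything that has no JSON representation:

```typescript
// Illustration of TypeScript types with no direct JSON representation.
// (The names here are invented for the example.)
type NotSerializable = {
  callback: () => void; // functions cannot be represented in JSON
  created: Date;        // class instances lose their prototype/methods
};

const value: NotSerializable = {
  callback: () => {},
  created: new Date(0),
};

// JSON.stringify silently drops the function-valued property and turns
// the Date into a plain string, so round-tripping loses information.
const roundTripped = JSON.parse(JSON.stringify(value));
console.log("callback" in roundTripped);  // false: the function was dropped
console.log(typeof roundTripped.created); // "string", no longer a Date
```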

The use case you’re describing isn’t actually the intended use case for the project I’ve been working on, but I think it fits well, in part because of the underlying principle. TypeScript allows you to put expressive types onto pure JSON data. JSON is already a serializable/de-serializable format, so you can directly send and decode it, and describe your domain by putting constraints on that JSON. For example, you can use literals, like type Severity = 'info' | 'warning' | 'error'. The data format is just plain JSON, but you can put constraints on it. You can also use the discriminated union technique to express the equivalent of a custom type.
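As a small sketch of that idea (the type names here are invented for the example), a discriminated union over plain JSON objects behaves much like a custom type:

```typescript
// Constraining plain JSON with literal types and a discriminated union.
// The data itself stays ordinary JSON; the types only narrow the values.
type Severity = "info" | "warning" | "error";

// The "kind" field plays the role of a custom type's constructor tag.
type LogEvent =
  | { kind: "message"; severity: Severity; text: string }
  | { kind: "heartbeat" };

// TypeScript narrows each branch, so this behaves like pattern matching
// on a custom type, yet the values are plain JSON objects.
function render(event: LogEvent): string {
  switch (event.kind) {
    case "message":
      return `[${event.severity}] ${event.text}`;
    case "heartbeat":
      return "heartbeat";
  }
}

console.log(render({ kind: "message", severity: "warning", text: "low disk" }));
// prints "[warning] low disk"
```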

So rather than going to the trouble of generating TypeScript code to serialize, as well as generating Elm decoders based on your TypeScript types (a very complex and messy problem), you can approach it from the other side. Write an Elm decoder, and then generate a TypeScript type to describe the valid JSON values that the Elm decoder will succeed in decoding. This turns out to be extremely expressive, both because of how powerful Decoders are in Elm and because of how well TypeScript can describe JSON with literal types and untagged unions.


    type TestResult
        = Pass
        | Fail String

    testCaseDecoder : InteropDecoder TestResult
    testCaseDecoder =
        oneOf
            [ field "tag" (literal Pass (Json.Encode.string "pass"))
            , map2 (\() message -> Fail message)
                (field "tag" (literal () (Json.Encode.string "fail")))
                (field "message" string)
            ]

    oneOrMore (::) testCaseDecoder
        |> runExample """[ { "tag": "pass" } ]"""
    --> { decoded = Ok [ Pass ]
    --> , tsType = """[ { tag : "pass" } | { tag : "fail"; message : string }, ...({ tag : "pass" } | { tag : "fail"; message : string })[] ]"""
    --> }

I’ll be announcing the library I’m working on soon. It’s a rewrite of, and a new approach to, elm-typescript-interop. I had a similar realization there. The previous version looked at the Elm AST and determined the names and types of all the ports that could be used. But that eventually runs up against limitations because the types can’t be as expressive: for example, you can’t send custom types, or get TypeScript annotations with more sophisticated types like unions, intersections, etc. Approaching the problem from the other end, starting by writing a decoder and then deriving the type from how the decoder runs, surprisingly yields a lot more expressive power. It works nicely for encoding as well, though it required a few clever designs to get everything lining up properly!

Anyway, I think this could be a really powerful technique in general for keeping data in sync between TypeScript and Elm codebases. I’d be very interested to hear your thoughts if that sounds interesting!


As an elm-typescript-interop user, I’m super excited by this news! Are you also thinking about the current performance problems for large projects? Would that approach also tackle this concern somehow?

Thanks for the hard work, Dillon!


For a little game I wrote, I created a library that uses a JSON config to generate TypeScript and Elm types, codecs, and ports. It works great for my use case:


We’ve been using the OpenAPI generator to go from a definition to Elm code including encoders & decoders, works well so far.


Hey @dillonkearns,

Thanks for sharing your project. I’ll have to give it some thought, although I feel a bit of reluctance to have my front end define the data coming from the back end, at least for situations where the endpoint hasn’t been made yet.

I’ll have to give it some thought, although I feel a bit of reluctance to have my front end define the data coming from the back end

Yeah, it is an interesting flow. Although you wouldn’t necessarily have to approach it in that order. You could write your endpoint with the TypeScript type for the JSON it returns. Then you could use that as the target for your Elm decoder, and build up the Decoder code until the types match.
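As a concrete sketch of that flow (all names hypothetical): the endpoint commits to a TypeScript response type up front, and that type becomes the target the Elm decoder is grown toward.

```typescript
// Hypothetical endpoint-first flow: declare the JSON response type on the
// TypeScript side first, then build up the Elm decoder until the tsType
// it reports matches this type.
type TestResult =
  | { tag: "pass" }
  | { tag: "fail"; message: string };

// The endpoint commits to responding with TestResult[] as JSON.
function testResultsHandler(): TestResult[] {
  return [
    { tag: "pass" },
    { tag: "fail", message: "expected 2, got 3" },
  ];
}

// This serialized body is what the Elm decoder must succeed on once its
// reported tsType matches ({ tag: "pass" } | { tag: "fail"; message: string })[].
const body = JSON.stringify(testResultsHandler());
```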

It does require writing Elm decoders by hand, but I think that’s a good thing because you can write expressive Decoders that build up exactly the types you need.

In general, I see a tradeoff between automatic decoding and expressivity (i.e. inferring and automatically generating types vs. building them up by hand). With elm-graphql, I chose a query builder API rather than generating decoders from GraphQL queries for that reason. You can expressively piece together data in a type-safe way, which prevents possible mistakes there. So the key is making sure the data transfer format is well-typed. Actually writing the decoder code isn’t the problem; it’s a good thing, because it gives you a lot of control over how to build up the types, and it also decouples the serialization format from the data types you use in your codebase.

That said, reasonable people can certainly disagree there, and there may be other nice approaches to this problem that work better for you! Either way I’ll be interested to hear what you end up with.


Are you also thinking about the current performance problems for large projects?

Yes, it solves this problem! I’ll write a more detailed post about this and share it soon in a separate post!


I have a couple of packages that could help you with that.

The first is the-sett/elm-syntax-dsl, which is a DSL for generating Elm code. You would write some functions to transform your AST -> elm-syntax-dsl describing the decoders you need. Output that to .elm files and compile them.

The second is more experimental at this stage, so I cannot fully recommend it. But take a look, and if it fits your use case, maybe it can do what you need. It is a data modelling language called Salix, and it lets you describe data models that you can think of as being like Elm’s data modelling capabilities, but without type variables or extensible records. Going this route, you would transform your AST -> Salix. There is already code to generate Elm decoders, encoders, and miniBill/elm-codec codecs from Salix data models.


Quick update: I’m working on a generator that goes from TypeScript type definitions to Elm types/decoders/encoders, as part of the paid package in the elm-ts-interop library I’m working on. It generates encoders/decoders with TS type information (not just the vanilla Elm JSON API), which means that if you tweak the generated decoder or encoder in a way that affects the types, that will be reflected. Anyway, I think it’s a pretty nice approach and the best of both worlds in many ways.

I also noticed that https://app.quicktype.io/ can take TypeScript types as input and generate Elm types/decoders/encoders. It works for simple types, but doesn’t generate discriminated unions correctly. The code generation I’m working on uses the TypeScript compiler API to parse the type definitions, and it works quite well, though it still leaves a good deal of work to be done.


This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.