I was looking at Gary Bernhardt’s explanation of his full-stack TypeScript application and how he guarantees type safety from the client all the way down to the database. He describes it in a blog post, but even better, in this video he gives an example of changing the database and letting the TS compiler guide the required changes all the way up to the frontend components. Very appealing!
I’m wondering if something like this would be possible with Elm on the frontend instead of React (still using TypeScript on the backend). My initial thought was generating Elm types and decoders/encoders from the types generated by schemats during the migrate step, but that approach seems to be discouraged in favor of defining the types in Elm and generating the TypeScript types from them. Being new to both Elm and TypeScript, I’m having a hard time figuring out how to get the level of guarantees and compiler guidance that Gary has with that middle-out approach…
Has anyone thought about or tried something like this?
It doesn’t handle routes yet and isn’t useful for a TypeScript backend, but I wrote a Rust library, jalava (not released yet), to generate Elm types and encoders/decoders from my backend types. I think this approach has worked out really well so far, but I can imagine that trying to apply it to TS may not be as easy. Another approach I considered was to have a separate specification, something like OpenAPI, that describes the types, and to generate both the front- and backend types from that, but I didn’t spend too much time looking into that idea.
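To give an idea of the approach, the generated Elm ends up being a record type plus a matching decoder/encoder pair for each backend type, roughly along these lines (module and field names here are made up for illustration, not jalava’s actual output):

```elm
module Api.Types exposing (User, userDecoder, userEncoder)

import Json.Decode as Decode exposing (Decoder)
import Json.Encode as Encode


-- Mirrors a hypothetical backend `User` type.
type alias User =
    { id : Int
    , email : String
    }


userDecoder : Decoder User
userDecoder =
    Decode.map2 User
        (Decode.field "id" Decode.int)
        (Decode.field "email" Decode.string)


userEncoder : User -> Encode.Value
userEncoder user =
    Encode.object
        [ ( "id", Encode.int user.id )
        , ( "email", Encode.string user.email )
        ]
```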
This looks great, and I’m not married to using TypeScript on the backend. I was more curious about the method than the specific backend language. I’ve played with Rust and thought about it for this sort of thing too, so I’m definitely going to give this a try!
I have done this using GraphQL. But I believe that OpenAPI will offer the same guarantees.
What is important is to generate Elm types from the API endpoints, not the DB schema.
There are many backend languages to choose from, as long as you can implement type safety from your DB to the edges of your API. I’m using Rust, but based on that blog post it seems this is possible in TypeScript too.
Our apps have the backend implemented in Go, and there are scripts in place that automatically generate a schema.graphql file based on the DB schema and the Go resolvers. The frontend always refreshes the elm-graphql generated files. This means that a change in the DB schema propagates automatically to the frontend.
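Concretely, the frontend selection sets compile against the generated modules, so dropping or renaming a column breaks the build at the exact call sites. A rough sketch of what that looks like (the Api.* module names depend on how elm-graphql is configured, and the schema here is invented for illustration):

```elm
import Api.Object
import Api.Object.User as User
import Api.Query as Query
import Graphql.Operation exposing (RootQuery)
import Graphql.SelectionSet as SelectionSet exposing (SelectionSet)


type alias UserSummary =
    { name : String
    , email : String
    }


-- If the email column disappears from the DB, the regenerated
-- Api.Object.User module loses User.email and this stops compiling.
userSelection : SelectionSet UserSummary Api.Object.User
userSelection =
    SelectionSet.map2 UserSummary
        User.name
        User.email


usersQuery : SelectionSet (List UserSummary) RootQuery
usersQuery =
    Query.users userSelection
```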
I gave a presentation 2 (3?) years ago at a small local conference on exactly this topic. It was essentially a demo/code tour of an application I worked on with one of my clients that was designed to be type-safe from the DB to the browser. I just tried to find the video, but it seems it’s been lost.
It basically worked on this stack:
Postgres for type-safe data. (More so than MySQL, at any rate.)
elm-css, a 100-line-or-so Node script we wrote to generate Elm types for all the classes in our CSS files. (We couldn’t use any of the css-in-elm techniques for various reasons; a rough sketch of the generated module is below.)
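The generated module was basically one string constant per class, so referencing a class that no longer exists in the CSS becomes a compile error (class and module names here are illustrative, not what our script actually emitted):

```elm
module Css.Classes exposing (btnPrimary, cardHeader)

-- One constant per class found in the CSS files.


btnPrimary : String
btnPrimary =
    "btn-primary"


cardHeader : String
cardHeader =
    "card-header"
```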
The upside of this system was that removing a column, or changing its type or name in the database, produced compile errors in the server, and changes to the server API produced compile errors in the client. It was also pretty nice to know exactly which CSS classes were used or not.
Overall it was probably the most stable application I’ve ever worked on.
That said, the Rust compile times were pretty bad: 20-30+ seconds on every change by the time I left the project, and getting worse. When the Rust server was written, the Juniper library used macros to define everything, which clobbered the compile times. Now I think it uses a codegen extension, so maybe performance has improved?
One thing we discovered was that encoding Result-esque errors into the GraphQL API wasn’t very much fun. It was doable, and we did it, but because GraphQL doesn’t support parametric polymorphism like Elm and Rust do, it produced a lot of boilerplate types in the GraphQL API (AddUserResult, AddUserErrors, AddUserSuccess, UpdateUserResult, UpdateUserErrors, UpdateUserSuccess, etc.). Interfaces made this a bit better on the client side, but they still require having concrete GraphQL types for every variant. We had a few macros that generated most of those types, but they didn’t exactly help our compile times either. In the end, a small .rs file of 500 or so lines would produce several thousand lines of Rust source during compilation, which were in turn macros used by Juniper to define the API. More than 10,000 or 15,000 lines total in one or two cases, I think, though that might have been after running the macro results through rustfmt. Multiply that by 40+ endpoints and you can see where the compile time went.
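To make the boilerplate concrete: what Elm (or Rust) expresses once with a type parameter had to be spelled out per mutation in the schema, and each concrete union then shows up on the client as its own type. Roughly like this, with the payload and error shapes invented for illustration:

```elm
-- What Elm can say once, generically:
type alias MutationResult error payload =
    Result error payload


-- What the GraphQL schema forced per mutation, mirrored on the client:
type AddUserResult
    = AddUserSuccess User
    | AddUserErrors (List String)


type UpdateUserResult
    = UpdateUserSuccess User
    | UpdateUserErrors (List String)


type alias User =
    { id : Int
    , name : String
    }
```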
My thoughts from this experience in no particular order:
I would love to work on a fully type-safe system again. I only recall two or three (logic) bugs across the entire software project by the time I left. Having 100% of the issue tracker be feature requests or UX improvements is great.
The only reason I would try this again with GraphQL is because of existing GraphQL tooling like GraphiQL and elm-graphql. It really isn’t a good fit in my opinion because its type system just isn’t powerful in the ways it needs to be to capture proper error handling in mutations.
Diesel’s error messages are atrocious. Rust’s in general are pretty good, but Diesel’s are awful. Joining to multiple tables produces types like this: LeftJoin<LeftJoin<LeftJoin<Accounts, AppearsOnce, Accounts::Id>, Invoices, AppearsOnce, ... except with every type fully qualified. They take a lot of practice to read, and you usually need to manually format them into something readable before you can find the actual error. (Maybe they’ve improved?)
Rocket is great. If it weren’t for Rust’s compile times I’d use it way more often.
Rust’s compiler is optimized for executable performance and language power. I think this is the wrong trade-off for most back-end API software. I’d sacrifice runtime performance in exchange for faster compile times if I had to. I’d sacrifice language expressiveness too if I had to. I can’t overemphasize how important rapid iteration in the midst of programming is: compile times longer than a few seconds really hurt. It’s the only reason I still use PHP regularly even though I’m more of an ML person at heart.
Put an Elm worker program into AWS Lambda, taking requests and environment variables as input flags and emitting JSON config for DynamoDB as output through a port. Build your frontend using the same data structures, and BaDaBoom: now your Elm data structures and their JSON parsers dictate business logic across the entire stack. Migrating a DynamoDB table to a new shape is a matter of keeping the parsers from previous versions around and composing them with Decode.oneOf.
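A rough sketch of that migration trick, with an invented record shape (old rows used a name field, new ones use label):

```elm
import Json.Decode as Decode exposing (Decoder)


type alias Item =
    { id : String
    , label : String
    }


-- Decoder for the current table shape.
decoderV2 : Decoder Item
decoderV2 =
    Decode.map2 Item
        (Decode.field "id" Decode.string)
        (Decode.field "label" Decode.string)


-- Decoder kept around for rows written before the migration.
decoderV1 : Decoder Item
decoderV1 =
    Decode.map2 Item
        (Decode.field "id" Decode.string)
        (Decode.field "name" Decode.string)


-- Newest shape first, older shapes as fallbacks.
itemDecoder : Decoder Item
itemDecoder =
    Decode.oneOf [ decoderV2, decoderV1 ]
```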
Lamdera is awesome and definitely a 100% type safe full stack system. There isn’t really a database though. If I understand correctly, the persistence mechanism of your backend model is considered a private implementation detail.
Love the suggestions here. Lamdera definitely seems like the obvious choice if you don’t need or want a real database or have extensive integration needs. Otherwise elm-graphql + hasura looks great.
Because I really wanted a way to do this with a more traditional api setup, I’ve been poking around and found some additional ideas that weren’t mentioned here: