Could ES6 modules improve caching?

I was perusing List of all Elm kernel functions and saw some discussion on ES6 modules. I understood and agreed with the general consensus that it would not improve Elm today, since Elm has its own module system, and that there are other performance reasons to stick with ES5.

But setting those reasons aside: I know Elm is already aggressively optimized in terms of size and speed, and frankly I have no complaints, but is there a theoretical advantage?

I’m a noob in this area but I thought of this after coming across the Rails World 2024 Opening Keynote. A portion of that talk was dedicated to the caching advantages of using ES6 modules directly. The idea presented is that if you bundle all your code in one JS file, every change creates a new bundle that needs to be redownloaded by your client. But if they’re split up, then only changed files need to be downloaded.

Intuitively this makes sense. I suppose though it’s possible that Elm’s compiler makes that sort of thing unfeasible. I’m not an expert, but one thing that comes to mind is that I’ve read Haskell tends toward static linking because its efficiency depends quite a bit on inlining. If Elm does similar magic, maybe module boundaries become harder to define.


Maybe google for this idea: at some point, perhaps around when 0.19 came out, code splitting was talked about as a feature planned for Elm. I think some more investigation was done, and it was decided that it probably would not be worth it, given that Elm already produces very small bundles — downloading them in pieces might actually take longer.


The List of all Elm kernel functions topic doesn’t discuss ES modules much: there’s one comment that says ES modules lead to better tree shaking and therefore less code shipped to clients, and another that says Elm already has very good tree shaking, so it doesn’t matter.

The topic does not mention caching, though. I agree, it intuitively feels like it would be a good thing if you didn’t need to redownload all the compiled Elm JS just because of a tiny change on one page.

In --optimize mode, Elm shortens all record fields to single letters. This means that just by adding one new field on one page, all other fields in the app might get new one-letter names. Then you’d need to download new versions of all modules anyway. So in a world where the compiled Elm JS can be split into multiple modules for caching, that optimization might not make sense anymore. The Lamdera compiler has a --optimize-legible flag that skips the field renaming.
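To make that concrete, here is a toy simulation of global field renaming (an assumed scheme for illustration — not the actual algorithm the Elm compiler uses, which I haven’t verified):

```javascript
// Toy version of whole-program field shortening: assign one-letter names
// to all record fields in sorted order across the entire app.
const rename = (fields) => {
  const letters = "abcdefghijklmnopqrstuvwxyz";
  const map = {};
  [...fields].sort().forEach((f, i) => (map[f] = letters[i]));
  return map;
};

const before = rename(["name", "url"]);        // { name: "a", url: "b" }
const after  = rename(["id", "name", "url"]);  // adding `id` on one page...
console.log(before.name, after.name);          // "a" "b" — `name` shifted everywhere
```

Because the short names are assigned globally, one new field invalidates the compiled output of every module that touches any record — exactly the situation per-module caching cannot help with.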

Then you’d need to work out where to split the code. Does the compiler do it automatically somehow? Or do you annotate the code to show where you’d like the splits?

Finally, if this happens people might want modules to be loaded as needed as well. In the React world, code splitting allows downloading more code when you navigate to a new page of an SPA, instead of downloading it all at once on the first page load. Then we have a new dimension of things to work out.
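That load-as-needed pattern boils down to dynamic `import()`. A minimal sketch (in a real SPA the specifier would be a path to a compiled page module — the names here are hypothetical; a `data:` URL keeps the example self-contained):

```javascript
// On-demand loading with dynamic import(): the module is fetched and
// evaluated only when this function first runs, not at startup.
async function loadPage(specifier) {
  const mod = await import(specifier);
  return mod.default;
}

// A data: URL stands in for e.g. "/js/pages/settings.js".
loadPage("data:text/javascript,export%20default%2042").then((page) => {
  console.log(page); // 42
});
```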

And as Rupert said, measurements would be needed to check that it actually improves things. It might depend on the website: if people visit daily, you might benefit from the more granular caching; but if they visit monthly, maybe the cache is stale or purged anyway.


I think this is the discussion that Rupert is referring to, which includes a link to Evan’s exploration.

I also found this interesting project for the specific case of multiple applications compiled from the same code base.


Thanks everyone for the thoughtful comments and links. Looks like my naive thought of doing it at the Elm module level wouldn’t work here, and I can’t see a clear path to making it work in a general setup.

With HTTP/3’s better multiplexing, DHH actually makes an argument for “no build” and using import maps to import the files you need when you need them.
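For reference, an import map is just a small piece of HTML config that maps bare specifiers to versioned files (illustrative paths, not an Elm setup):

```html
<script type="importmap">
  {
    "imports": {
      "app": "/js/app-3f2a9c.js",
      "helpers": "/js/helpers-81d04b.js"
    }
  }
</script>
<script type="module">
  /* "app" resolves via the map; when one file changes, only its
     hashed name in the map changes and only it is redownloaded. */
  import "app";
</script>
```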

Perhaps if Elm did tree shaking and then compiled down to individual files per module and then did lazy loading… that might be a win.

Not experienced enough in this to do more than speculate.
