I guess the solution for a strategy like this, for now, would be to run multiple elm compiles to get different bundles?
That would be the approach, especially if the sections are particularly distinct.
I will weigh in on the topic a little bit, because balancing JS asset sizes is one of the areas I have focused on most at work.
A good budget for JS loading, assuming developing-market 3G speeds and parsing costs, seems to be around 170kB minified and gzipped (Can You Afford It?: Real-world Web Performance Budgets - Infrequently Noted). In general, computing has gotten cheaper, not necessarily faster, at the budget end of the device market, so it is good to keep track of asset sizes. Bear in mind that 170kB min/gzipped can often still be large when decompressed, which is ultimately what the browser parses and executes.
Which brings us to two important things to track:

1. The size of the “core”: how much of that budget is essential and intrinsic to your application? The lower you can get this number, the more breathing room you have for features! People often think of this “core” as “the runtime”, but it is often not quite that. For example, at work we use Immutable.js pervasively in our app, so no matter how we slice the app, there is no chunk that can go without it. Elm does really well at keeping this low!

2. The size increase as features get added (depending on the thing you build, this might be “functionality” or “pages”, which might be shared or only conditionally triggered).
If (2) can be volatile (for example, some pages bringing in heavy dependencies such as Leaflet, or a slice of functionality involving PouchDB), then conditionally splitting things off can help you get back within budget. Similarly, it can act as insurance against accidental increases. Something I would love to get metrics on is the growth rate of Elm programs! I think that data would be essential before making any choices about the importance of code splitting. My gut feeling is that the rate will be good, mostly because Elm compresses record field names (which account for a surprising share of bundle size in JS land), and its dead code elimination works at the function level. Evan has a gist eliciting more data on this (I’ll dig up the link asap).
At what point you start to worry about these things is up to you and your application. In some cases (not Elm), we have had a “core” plus “pervasive” libraries reaching 140kB! That meant that splitting off the pages (90kB) was critical to getting within budget, but still a bit annoying because of the extra complexity. Speaking of complexity…
Other factors: Latency, Gzip efficiency, Error states
There is another factor to consider: gzip really likes larger chunks. In the example application above, splitting off the 90kB of pages to get within budget meant that the total asset size ballooned to 300kB! This was not just gzip efficiency loss, but also Webpack duplicating modules across chunks (something rather unavoidable, but tunable). Is that an issue? I’m not sure, but it’s another thing to consider.
There is also added latency. Without a prefetching strategy, or a strategy that mitigates this by parallelising data fetching and code fetching, you can end up with long loading chains on user interaction. Again, it depends on your application, so I won’t go into too much detail here.
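The parallelisation idea is simple to sketch: kick off the chunk fetch and the data fetch at the same time instead of awaiting one after the other. `loadPageCode` and `loadPageData` below are hypothetical stand-ins, simulated with timers instead of real network calls:

```javascript
// Sketch: avoid a load chain (code, then data) by starting both at once.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

const loadPageCode = () => delay(80, "page module"); // e.g. import("./page.js")
const loadPageData = () => delay(80, "page data");   // e.g. fetch("/api/page")

async function sequential() {
  const code = await loadPageCode(); // second request only starts after the
  const data = await loadPageData(); // first finishes: ~160ms total
  return [code, data];
}

async function parallel() {
  // both requests in flight at once: ~80ms total
  return Promise.all([loadPageCode(), loadPageData()]);
}
```

Same results either way; the difference is whether the user pays for the round trips serially or concurrently.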
And of course, there are error cases to handle and recover from, etc. When I have used libraries to handle code-splitting in React components, the error handling felt ad-hoc, and sometimes frustrating to recover from (reload the page? hoist the error to Redux so that we can retry? something else?).
That sounds like a lot. I think it’s more a case of being aware of the problem, rather than having to solve everything. Similarly, in the spirit of Elm’s design, I would absolutely love to see a solution that takes the different failure points into account, and makes you design intentionally for them. I am not sure what it would look like, but some first-class notion of split points could have a nice representation!
Some takeaways, imo:
- I would measure whether there is a problem first. Splitting too early can also be bad, but I don’t claim to know the thing you are building…
- If you have data on the growth of an Elm application, I think it would be super valuable for design decisions!
- If you have a heavy JavaScript dependency (e.g. a custom element wrapping Leaflet), that is a good candidate for code-splitting without involving Elm at all. Using a dynamic import() and webpack or Rollup’s handling of it would be a good bet there. I have done that before and it worked well.
- If your app has distinct pages and pieces of functionality, that also would be a great point to split, as separate Elm applications.
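For the heavy-dependency case, the shape is usually a lazy loader that caches the in-flight promise, so the chunk is only fetched once no matter how many components ask for it. A sketch, assuming a hypothetical Leaflet wrapper; the runnable demo swaps in a Node built-in so the snippet works anywhere, but with webpack or Rollup the import("...") call becomes the split point automatically:

```javascript
// Sketch: load a heavy dependency lazily, on first use, caching the
// promise so parallel callers share one fetch.
let heavyDepPromise = null;

function loadHeavyDep(importer) {
  // cache the in-flight promise (not the module) to dedupe parallel calls
  if (!heavyDepPromise) heavyDepPromise = importer();
  return heavyDepPromise;
}

// In a real app this would be: loadHeavyDep(() => import("leaflet"))
loadHeavyDep(() => import("node:crypto")).then((crypto) => {
  const hash = crypto.createHash("sha256").update("map tile").digest("hex");
  console.log(hash.length); // 64
});
```

Bundlers treat each dynamic import() expression as a chunk boundary, so this needs no extra configuration beyond writing the call.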
I might have forgotten something; it’s getting late here and it is a Friday. I hope this helps out though, let me know if you have anything else in mind!