Code splitting and dynamic importing

I’m not sure if this has been discussed but I’d like to explore the topic of code splitting as it relates to v0.19 and/or future release plans.

I’m most familiar with code splitting through the lens of webpack best practices, and likewise with the advantages of coupling it with dynamic imports.

I’m being fairly vague because there are a bunch of ways to approach the implementation, depending on a number of variables, but I keep coming back to one consistent question: is it possible to isolate the Elm runtime from the JavaScript output?

For example, instead of Elm compiling one .js bundle, could the output be a runtime.js plus a pageA.js?

I feel like this would be useful but would love to hear others’ thoughts.


There’s no way of doing that today.

However, Elm produces veeeeery small assets. I believe the TodoMVC app, after minification and gzip, is 29kB. That is less than a hello-world React app.

So is it really necessary to do code splitting? :man_shrugging:

9kB — the TodoMVC app is 9kB minified and gzipped.
The elm-spa-example app is 29kB gzipped. :slight_smile:


The actual runtime is tiny in comparison with the code of the app, and I doubt you will see benefits once you take into account that you might have to make an extra call to the server.

The Counter example (which is one of the simplest interactive programs) compiles to something like 6kB (minified & gzipped).

If an app gets too big, you can always split it into multiple sections and produce one elm app per section.

At a high level, how would one create bundles for each page of an app (let’s say there are 10 pages in our app) and dynamically/lazily load them as the user navigates the app?

Or is this the wrong way to think about the current Elm program as of v0.19?

I would just load it all. If you cannot do that, just create 10 separate pages/Elm apps?
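For illustration, here is a minimal sketch of the loading side, assuming each page was compiled to its own bundle with `elm make` (the names `pageA.js` and `PageA` are placeholders, not anything from this thread):

```ts
// A sketch of lazily loading a separately compiled Elm 0.19 app.
// Assumes pageA.js was produced by `elm make src/PageA.elm --output=pageA.js`,
// so loading it exposes `Elm.PageA` on the global scope.

function loadScript(src: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const script = document.createElement("script");
    script.src = src;
    script.onload = () => resolve();
    script.onerror = () => reject(new Error(`Failed to load ${src}`));
    document.head.appendChild(script);
  });
}

async function showPageA(): Promise<void> {
  await loadScript("/pageA.js");
  (window as any).Elm.PageA.init({
    node: document.getElementById("app"), // the element this page takes over
  });
}
```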


I guess I’m coming from the mentality that most web performance practitioners promote loading as little JS as possible. I’ve heard budgets of around 200kB of JS on initial page load. While Elm does an incredible job getting an SPA bundle down to a very small size, I feel like the code-splitting story is a compelling area of exploration. I haven’t heard many people promote loading the entire SPA bundle in a single .js file on initial render.

I guess the solution for a strategy like this, for now, would be to run multiple elm compiles to get different bundles?


I guess the solution for a strategy like this, for now, would be to run multiple elm compiles to get different bundles?

That would be the approach, especially if the sections are particularly distinct.
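As a rough sketch of that (the page names and paths here are placeholders), the build step could be as small as a loop over `elm make`:

```ts
// build.ts — a sketch of producing one bundle per section with separate
// `elm make` invocations. Page names and paths are placeholders.
import { execSync } from "node:child_process";

const pages = ["PageA", "PageB", "PageC"];

for (const page of pages) {
  execSync(
    `elm make src/${page}/Main.elm --optimize --output=dist/${page.toLowerCase()}.js`,
    { stdio: "inherit" }
  );
}
```

Each output file can then be loaded on demand, as in the snippet earlier in the thread.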

I will weigh in on the topic a little bit, because balancing JS asset sizes is one of the areas I have focused on most at work.

A good budget for JS loading, assuming 3G speeds in developing markets and device parsing costs, seems to be around 170kB minified and gzipped (Can You Afford It?: Real-world Web Performance Budgets - Infrequently Noted). In general, budget devices have gotten cheaper, not necessarily faster, so it is good to keep track of asset sizes. Bear in mind that 170kB min/gzipped can still be large when decompressed, which is ultimately what the browser parses and executes.

Which brings us to two important things to track:

  1. The size of the “core”: what part of that budget is essential and intrinsic to your application? The lower you can get this number, the more breathing room you have for features! People often think of this “core” as “the runtime”, but it is usually more than that. For example, at work we use Immutable.js pervasively in our app. No matter which way we slice the app, it has to end up in some chunk! Elm does really well at keeping this low!

  2. The size increase as features get added (depending on the thing you build, this might be “functionality” or “pages”, which might be shared or only conditionally triggered).

If (2) can be volatile (for example, some pages bringing in heavy dependencies such as Leaflet, or a slice of functionality involving PouchDB), then conditionally splitting things off can help get back within the budget. Similarly, it can act as insurance against accidental increases. Something I would love to get metrics on is the growth rate of Elm programs! I think that would be essential before making any choices about the importance of code splitting. My gut feeling is that the rate will be good, mostly because Elm compresses record field names (which add up to a surprising amount in JS land) and its dead code elimination works at the function level. Evan has a gist eliciting more data for this (I’ll dig up the link asap).

At what point you start to worry about these things is up to you and your application. In some cases (not Elm), we have had the “core” plus pervasive libraries reach 140kB! That meant that splitting off the pages (90kB) was critical to getting within the budget, but still a bit annoying because of the extra complexity. Speaking of complexity…

Other factors: Latency, Gzip efficiency, Error states

There is another factor to consider: gzip really likes larger chunks. In the example application above, splitting off the 90kB of pages to get within the budget meant that the total asset size ballooned to 300kB! This was not just gzip efficiency loss, but also webpack duplication of chunks (something rather unavoidable, but tunable). Is that an issue? I’m not sure, but it’s another thing to consider.
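For reference, the tuning I mean lives in webpack’s `optimization.splitChunks` options. A rough sketch (not the exact config we used, and the thresholds are illustrative):

```ts
// webpack.config.ts — a sketch of tuning chunk splitting/duplication (webpack 4+).
// The numbers here are illustrative, not recommendations.
import type { Configuration } from "webpack";

const config: Configuration = {
  // ...entry, output, loaders omitted...
  optimization: {
    splitChunks: {
      chunks: "all",     // consider both initial and lazily loaded chunks
      minSize: 30_000,   // avoid lots of tiny chunks; gzip likes larger ones
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: "vendors", // shared dependencies go into one chunk instead of being duplicated
          priority: -10,
        },
      },
    },
  },
};

export default config;
```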

There is also added latency. Without a prefetching strategy, or a strategy to mitigate this by parallelising data fetching and code fetching, you can end up with long loading chains on user interaction. Again, it depends on your application, so I won’t go into too much detail here :slight_smile:
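To illustrate that mitigation, the idea is simply to start the code fetch and the data fetch together instead of chaining them (the module path, API URL, and `render` function below are hypothetical):

```ts
// A sketch of parallelising code fetching and data fetching on navigation,
// so the dynamic import does not add a full extra round trip on top of the data call.
async function openReports(): Promise<void> {
  const [pageModule, response] = await Promise.all([
    import("./pages/reports"), // the code chunk for the page (hypothetical path)
    fetch("/api/reports"),     // the data the page needs (hypothetical endpoint)
  ]);

  const data = await response.json();
  pageModule.render(data);     // hypothetical entry point exposed by the chunk
}
```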

And of course, there are error cases to handle and recover from, etc. When I have used libraries to handle code-splitting in React components, it felt like the error handling was ad-hoc, and sometimes frustrating to recover from (reload the page? hoist the error to Redux so that we can retry? something else?).
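For the flavour of it, the ad-hoc handling usually ends up looking something like this sketch: retry the chunk load a couple of times, then push the decision upstream:

```ts
// A sketch of the ad-hoc error handling dynamic imports tend to need:
// retry a failed chunk load, then let the caller decide how to recover.
async function importWithRetry<T>(load: () => Promise<T>, retries = 2): Promise<T> {
  try {
    return await load();
  } catch (err) {
    if (retries <= 0) throw err; // out of retries: the caller has to recover somehow
    return importWithRetry(load, retries - 1);
  }
}

// Usage (hypothetical path): importWithRetry(() => import("./pages/reports"))
```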

That sounds like a lot. I think it’s more a case of being aware of the problems than having to solve everything up front. And, in the spirit of Elm’s design, I would absolutely love to see a solution that takes the different failure points into account and makes you design intentionally for them. I am not sure what it would look like, but some notion of split points could have a nice representation!

Some takeaways imo

  • I would measure if there is a problem. Splitting early can also be bad, but I don’t claim to know the thing you are building…
  • If you have data on the growth of an Elm application, I think it would be super valuable for design decisions!
  • If there is a heavy JavaScript dependency you have (e.g. a custom element wrapping Leaflet), that is a good candidate for code-splitting without depending on Elm. Using a dynamic import and letting webpack or Rollup handle it would be a good bet there; I have done that before and it worked well (see the sketch after this list).
  • If your app has distinct pages and pieces of functionality, that also would be a great point to split, as separate Elm applications.
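
To make the Leaflet case concrete, here is a minimal sketch (the custom element name, the container styling, and the import interop are assumptions for the example, not anything from this thread):

```ts
// A sketch of code-splitting a heavy JS dependency behind a custom element,
// independently of Elm: Leaflet is only fetched when the element first appears in the DOM.
class LeafletMap extends HTMLElement {
  async connectedCallback(): Promise<void> {
    // webpack/Rollup turn this dynamic import into a separate chunk.
    // Depending on bundler interop you may need `(await import("leaflet")).default`.
    const leaflet = await import("leaflet");

    const container = document.createElement("div");
    container.style.height = "400px";
    this.appendChild(container);

    leaflet.map(container).setView([0, 0], 2);
  }
}

customElements.define("leaflet-map", LeafletMap);
```

From the Elm side you would just render a `<leaflet-map>` node in the view; the chunk is only requested once the element is actually used.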

I might have forgotten something; it’s getting late here and it is a Friday. I hope this helps, though. Let me know if you have anything else in mind :slight_smile:


Here it is!


There was a community member with ~50,000 SLOC who got a bundle size of 113kB, so 200kB of Elm 0.19 code should give you a fairly large application.

https://elm-lang.org/blog/small-assets-without-the-headache

Where I work we just have a separate elm app per page, and that works very well.


Oh wow, this is fantastic! I missed this post by Evan.

This sort of thinking is exactly what I was looking for! Excited to see the results. Thanks!

