Can the compiler skip virtual DOM?

I typically look at this benchmark to get a feeling for how different rendering systems compare. Try going to the interactive results and putting in:

  • vanillajs-keyed
  • elm-v0.19.1-3-keyed
  • svelte-v3.23.0-keyed
  • react-v16.8.6-keyed
  • react-redux-v16.8.6 + 7.1.0-keyed

The aggregated results I am seeing are as follows:

Performance

| vanillajs | elm  | svelte | react | react-redux |
|-----------|------|--------|-------|-------------|
| 1.00      | 1.25 | 1.28   | 2.09  | 2.18        |

It looks like Elm is a bit faster than Svelte at the benchmarked operations.

Start Up Time

| vanillajs | elm  | svelte | react | react-redux |
|-----------|------|--------|-------|-------------|
| 1.01      | 1.19 | 1.00   | 1.65  | 2.18        |

Elm's start up time is a bit slower than Svelte's, but it looks like that is mostly to do with code size. So I would not expect these numbers to look as favorable for Svelte in a project with a normal number of dependencies.

It’s good that a rendering library is small, but that gets washed out if your other dependencies end up being big. So rather than thinking of Elm vs Svelte, I think Elm vs JS is the more sensible comparison.

It is pretty easy to cut out tons of functions from dependencies in Elm, while it is generally not practical to get close to that with JS modules. I talk a bit more about why the language is important for this comparison in this post.

So I personally think choosing a virtual DOM implementation based on size alone only makes sense when comparing JS projects to JS projects. Maybe it’s possible to make the Elm implementation even smaller, but if the goal is to reduce code size in practice, I think focusing on code generation more generally would probably be more rewarding.

Allocation

| vanillajs | elm  | svelte | react | react-redux |
|-----------|------|--------|-------|-------------|
| 1.00      | 1.55 | 1.33   | 2.11  | 2.69        |

Elm also appears to allocate a bit more than Svelte. Perhaps that could be trimmed down.

One idea is to detect static Html msg values and move them out of functions to the top-level. That would mean they are allocated just once, whereas they may otherwise be allocated many times as different view functions are called. (The virtual DOM implementation detects when nodes are equal by reference, so this would skip diffing as well.)
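To make that concrete, here is a minimal sketch of what the hoisting would look like if done by hand today. The module name, `Model`, and node contents are made up for illustration; the point is that a top-level Elm value is allocated once, and the reference-equality check then lets the diff skip that subtree.

```elm
module Example exposing (Model, view, viewHoisted)

import Html exposing (Html, div, h1, p, text)
import Html.Attributes exposing (class)


type alias Model =
    { body : String }


-- Before: the static "logo" subtree is re-allocated on every call
-- to `view`, even though it never changes.
view : Model -> Html msg
view model =
    div []
        [ div [ class "logo" ] [ h1 [] [ text "My App" ] ]
        , p [] [ text model.body ]
        ]


-- After: the static subtree lives at the top level, so it is
-- allocated once. Because the virtual DOM checks nodes for
-- reference equality first, re-rendering never diffs inside `logo`.
logo : Html msg
logo =
    div [ class "logo" ] [ h1 [] [ text "My App" ] ]


viewHoisted : Model -> Html msg
viewHoisted model =
    div []
        [ logo
        , p [] [ text model.body ]
        ]
```

Elm evaluates top-level values once at startup, which is exactly where the trade-offs below come from.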

The trade-off with that idea is that (1) you do more work when the program starts and (2) the memory sits around for the whole duration of the program. These factors could be an issue in large enough programs, so it’d definitely take some special care to make sure this isn’t negative overall. (E.g. should there be a cache containing N kb of the most commonly used static nodes? Does that add too much overhead to be worth it? How do you set N? Etc.)

Thoughts

My sense is that projects significantly slower than Elm and Svelte are working fine for a lot of people. Even if we somehow doubled the current performance, I do not know if that would be such a big deal to most people right now.

But yeah, if someone thought it would be a big deal, I would look into moving static Html msg values out of functions. That could definitely get the allocation numbers down, and maybe improve perf a bit as well. The hard part is finding a design that has predictable performance, without filling up the heap too much for people with very large programs. Maybe the naive design of just making them all top-level is fine! Someone would have to do a proof of concept to start collecting data on that!
