Since Elm compiles to JS code, would it be possible to take Svelte's approach and compile directly to raw DOM manipulation, with no virtual DOM?
- react-redux-v16.8.6 + 7.1.0-keyed
The aggregated results I am seeing are as follows:
| vanillajs | elm | svelte | react | react-redux |
|-----------|------|--------|-------|-------------|
| 1.00 | 1.25 | 1.28 | 2.09 | 2.18 |
It looks like Elm is a bit faster than Svelte at these operations.
Start Up Time
| vanillajs | elm | svelte | react | react-redux |
|-----------|------|--------|-------|-------------|
| 1.01 | 1.19 | 1.00 | 1.65 | 2.18 |
The start-up time is a bit slower than Svelte, but it looks like that is mostly to do with code size. So I would not expect these numbers to look as favorable for Svelte in a project with a normal number of dependencies.
It’s good that a rendering library is small, but that gets washed out if your other dependencies end up being big. So rather than thinking of Elm vs Svelte, I think Elm vs JS is the more sensible comparison.
It is pretty easy to cut out tons of functions from dependencies in Elm, while it is generally not practical to get close to that with JS modules. I talk a bit more about why the language is important for this comparison in this post.
So I personally think choosing a virtual DOM implementation based on size alone only makes sense when comparing JS projects to JS projects. Maybe it’s possible to make the Elm implementation even smaller, but if the goal is to reduce code size in practice, I think focusing on code generation more generally would probably be more rewarding.
| vanillajs | elm | svelte | react | react-redux |
|-----------|------|--------|-------|-------------|
| 1.00 | 1.55 | 1.33 | 2.11 | 2.69 |
Elm also appears to allocate a bit more than Svelte. Perhaps that could be trimmed down.
One idea is to detect static `Html msg` values and move them out of functions to the top level. That would mean they are allocated just once, whereas they may otherwise be allocated many times as different `view` functions are called. (The virtual DOM implementation detects when nodes are equal by reference, so this would skip diffing as well.)
The trade off with that idea is that (1) you do more work when the program starts and (2) the memory sits around for the whole duration of the program. These factors could be an issue in large enough programs, so it’d definitely take some special care to make sure this isn’t negative overall. (E.g. should there be a cache containing N kb of the most commonly used static nodes? Does that add too much overhead to be worth it? How do you set N? Etc.)
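To make the idea concrete, here is a minimal hand-written sketch of the transformation the compiler would perform automatically (the module and function names are hypothetical, invented for illustration):

```elm
module Main exposing (view)

import Html exposing (Html, div, footer, text)


-- A static fragment hoisted to the top level by hand. Because it is a
-- constant, it is allocated once; every render then passes the same
-- reference, so the virtual DOM can skip diffing this subtree entirely.
staticFooter : Html msg
staticFooter =
    footer [] [ text "About | Contact | Privacy" ]


view : { count : Int } -> Html msg
view model =
    div []
        [ text (String.fromInt model.count)
        , staticFooter -- same reference on every call to view
        ]
```

The compiler change being discussed would do this hoisting automatically for any `Html msg` expression with no free variables, with the start-up and memory trade-offs described above.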
My sense is that projects significantly slower than Elm and Svelte are working fine for a lot of people. Even if we doubled the current performance somehow, I do not know if that is such a big deal to most people right now.
But yeah, if someone thought it would be a big deal, I would look into moving static `Html msg` values out of functions. That could definitely get the allocation numbers down, and maybe improve perf a bit as well. The hard part is finding a design that has predictable performance, without filling up the heap too much for people with very large programs. Maybe the naive design of just making them all top-level is fine! Someone would have to do a proof of concept to start collecting data on that!
Hi Evan, thanks for this detailed explanation; it surprised me! I naively thought precise DOM manipulation must be better than virtual DOM diffing. It seems like Svelte has some advantages over other frameworks, all due to its compile process, so this question just came to my mind. I think you are right that performance is not a big deal to most people, because hardware nowadays is too good for users to perceive the difference between frameworks. As a developer, it is easy for me to assume something might bring a performance benefit, and you have probably thought about this too. So hey, keep up the good work. I like Elm, and I hope it becomes more widely used.
Although true, in order to sell Elm it really helps to claim Elm is faster!
No problem! I think Svelte was saying “faster than virtual DOM” in a lot of their public communication, so I think many people have this impression.
My understanding is that they knew Elm had similar perf numbers to them, but they assumed Elm must be using the same techniques. I told them that was not the case in June 2019, so hopefully they saw it and have changed how they talk about things since then.
I should add, I do not actually know what Svelte is doing specifically. I would personally be interested in seeing a little breakdown of their key techniques. Maybe there are lessons that could be applied in Elm. Would be very curious to understand more!
And maybe it’s something that could be explored in the style of `elm-optimize-level-2`, so that it is not blocked on compiler releases or anything.
Rich here - Svelte creator (and Elm admirer); someone pointed me at this thread. I’ve written a bit more about why Svelte eschews the virtual DOM at https://svelte.dev/blog/virtual-dom-is-pure-overhead - the tl;dr is that the real problem with VDOM isn’t the diffing, it’s the fact that you have to rerun lots of user code on every state change (which is very garbagey, as well as involving lots of unnecessary computation). I’m not very familiar with how Elm works, but I’d imagine the guarantees provided by the language enable it to do this step a lot more efficiently than, say, React.
I won’t go into what techniques Svelte is using right now, partly because I’m typing on my phone but also because it’s possible that we’ll be making some substantial changes in the near future. Suffice it to say that the key is to try and do as little work as possible, and AFAICT Elm is already doing a fantastic job here, so there’s probably limited upside to Elm adopting Svelte-like techniques.
The mechanism Elm uses to avoid re-computing on every state change is `Html.Lazy.lazy` (or `lazy2`, and so on, if there are more args). If the inputs have not changed, there is no need to recompute, and a whole section of the DOM can be carried over from the previous state. It’s something you have to explicitly add to your code, rather than an automatic compiler optimisation, but it’s usually pretty easy to do so if you need it.
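A minimal sketch of what that looks like in practice (the `viewItems` function and model shape are invented for illustration):

```elm
import Html exposing (Html, li, text, ul)
import Html.Lazy exposing (lazy)


-- `lazy` remembers the argument from the previous render. If
-- `model.items` is the same reference as last time, `viewItems` is not
-- called again and the previous DOM subtree is carried over as-is.
viewItems : List String -> Html msg
viewItems items =
    ul [] (List.map (\item -> li [] [ text item ]) items)


view : { items : List String, title : String } -> Html msg
view model =
    lazy viewItems model.items
```

With this in place, updates that only touch `model.title` skip rebuilding and diffing the whole list.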
I’m not seeing any `keyed` version of the Elm benchmark. Did you mean
Aside from that, I would question how useful these benchmarks are. My understanding of Svelte is that these results require no extra work for the developer, which is not the case in Elm. Where I work, we avoid using `Html.Lazy` on purpose, because it adds a non-trivial maintenance burden.
The reason is that `Lazy` uses reference equality on its arguments to determine if anything has changed. This seems reasonable, but unfortunately it’s also different from everywhere else in Elm, which uses structural equality. So if you end up using `Lazy` abundantly, you get two flavors of Elm: the normal one, and the one where you have to eschew idiomatic practices to preserve object references.
As an example, if you pass a `Maybe` value to `Lazy`, it’s very important not to use e.g. `Maybe.map` on it first, as that will create new references on every run and undo all your performance gains.
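A sketch of the pitfall described above (the view functions and model shape are invented for illustration):

```elm
import Html exposing (Html, text)
import Html.Lazy exposing (lazy)


-- Breaks laziness: `Maybe.map` allocates a fresh `Just ...` on every
-- render, so the reference check inside `lazy` never matches and the
-- view is recomputed every time.
viewBad : { name : Maybe String } -> Html msg
viewBad model =
    lazy viewPlain (Maybe.map String.toUpper model.name)


viewPlain : Maybe String -> Html msg
viewPlain maybeName =
    text (Maybe.withDefault "guest" maybeName)


-- Preserves laziness: pass the stored value through unchanged, and do
-- the transformation inside the lazily-called function instead.
viewGood : { name : Maybe String } -> Html msg
viewGood model =
    lazy viewUpper model.name


viewUpper : Maybe String -> Html msg
viewUpper maybeName =
    text (Maybe.withDefault "guest" (Maybe.map String.toUpper maybeName))
```

Both versions render the same output; only the second one actually skips work when `model.name` is unchanged.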
`Html.Lazy` is one of those mechanisms that work great in benchmarks. Benchmarks are often small, written by a single person, and rarely change (like the one you linked to). By contrast, code bases like the one I work on are very complex, have lots of churn, and are changed by multiple developers with different levels of expertise. And though we only use `Lazy` in one place in the application (IIRC), we’ve still seen it break because a developer accidentally did something otherwise idiomatic with some of the input.
So we prefer to pay the performance penalty, over trying to optimize our use of the VDOM. Without being an expert on Svelte, it still seems to me that Svelte has a solid performance advantage, in that their approach is applied throughout the code base, as opposed to how it is in Elm.
`Html.Lazy` is definitely tricky. It’s really easy to put in place, but also really hard to keep working without it failing on you silently because of “spooky action at a distance”.
IMO, this could be solved through static analysis (probably through some other means too), and I proposed an `elm-review` rule idea targeting this exact problem (link below).
Detecting the problem will not solve it entirely, as you will still have to rethink and restructure how you architect the parts of the code that lead to that lazy view, which might be more work than what Svelte gives you out of the box. But it would remove the biggest pain point around it.
Proper use of `Html.lazy` is indeed tricky. It is, however, also key to bringing down the rendering and diffing costs. One thing that Elm could do with no changes to the language or the library APIs is make the equality test in `Html.lazy` go a bit deeper. The most frequent errors I’ve seen with `Html.lazy` are people constructing records to pass view function parameters — often because the number of parameters supported by `Html.lazy` is limited — which thereby entirely defeats the optimizations provided by `Html.lazy` and instead increases the costs. Going one extra level deep on the comparison would fix this.
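The record pitfall mentioned above can be sketched like this (the model shape and view functions are invented for illustration):

```elm
import Html exposing (Html, text)
import Html.Lazy exposing (lazy, lazy2)


type alias Model =
    { title : String, count : Int }


-- Defeats laziness: the record literal is a brand-new reference on
-- every render, so the reference check in `lazy` always fails.
viewBad : Model -> Html msg
viewBad model =
    lazy viewHeader { title = model.title, count = model.count }


viewHeader : { title : String, count : Int } -> Html msg
viewHeader r =
    text (r.title ++ ": " ++ String.fromInt r.count)


-- Works today: pass the fields separately with `lazy2`, so each one is
-- compared on its own (strings and ints compare by value in the check).
viewGood : Model -> Html msg
viewGood model =
    lazy2 viewHeader2 model.title model.count


viewHeader2 : String -> Int -> Html msg
viewHeader2 title count =
    text (title ++ ": " ++ String.fromInt count)
```

The proposal in the paragraph above would make the `viewBad` shape work too, by having the equality test look one level inside the record.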
The more complicated place to put work would be in not rendering data that isn’t on the screen. To the extent that the DOM (and virtual DOM) size is proportional to the visible data as opposed to the entire model data, the costs of rendering and diffing should be insignificant for most uses.