Curious if anyone has thoughts about this. I’m working on an app where users can enter HTML in a text editor and it gets analyzed and then displayed on the screen. Basically, it goes from text to an intermediate data type like this
type Node
    = Element Name (List Attribute) (List Node)
    | TextNode String
    | Comment String
and then back to text again. For big HTML files, this final rendering step back to HTML gets suuuper slow even if the actual change is small, and I'm having trouble figuring out why. I've tried adding some Html.Lazy stuff with no clear improvement. I'm not sure how to use Html.Keyed in this case, since the lists of HTML elements are genuinely arbitrary and it's possible for more than one of them to be identical in every way.
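For reference, here's roughly the shape of my Html.Lazy attempt (a simplified sketch; toHtml stands in for the actual Node-to-Html function in my rendering file):

    import Html exposing (Html)
    import Html.Lazy

    -- Simplified sketch: wrap each child render in lazy so unchanged Nodes can be skipped.
    -- (As I understand it, lazy only skips work when the exact same Node value is passed
    -- by reference on the next render, which might be why I see no improvement if the
    -- whole tree is re-parsed from the editor text on every change.)
    viewChildren : List Node -> List (Html msg)
    viewChildren children =
        List.map (Html.Lazy.lazy toHtml) children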
I took a look in the Firefox Performance tab, and while I'm not that versed in performance tools, it looks like the slow part is spending a lot of time in garbage collection, with a lot of "Nursery is full" messages.
The speed issues go away completely if I switch from Elm to the morphdom library for the rendering, but I'd rather keep the rendering in Elm if that's possible without this slowdown.
It was a little big to paste in Discourse, but if anyone is curious, the file that does the rendering is here. Basically it's just translating Nodes into Html.node calls, plus some code to work around Elm's restrictions on script tags and such (since I do want people to be able to write script tags/bookmarklets/etc. in this app).
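The core of that file looks roughly like this (simplified sketch, assuming Name is just a String and using a hypothetical toAttribute to convert my Attribute type; the real version also has the script-tag workaround):

    import Html exposing (Html)

    -- Simplified sketch of the Node-to-Html translation.
    toHtml : Node -> Html msg
    toHtml node =
        case node of
            Element name attributes children ->
                Html.node name
                    (List.map toAttribute attributes)
                    (List.map toHtml children)

            TextNode text ->
                Html.text text

            Comment _ ->
                -- Elm's virtual DOM has no comment nodes, so comments are dropped here.
                Html.text ""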
Looking for: (1) thoughts on what might cause a slowdown like this on large HTML files for Elm's virtual DOM but not for morphdom, and (2) tools or techniques for investigating performance issues like this in Elm more generally.