I am working on an app which loads many resources when it first starts up. Elm handles this wonderfully, except that the frame rate goes down. This seems to be because the Elm runtime processes every message between each animation frame, since no individual message takes long enough to process that a frame would be skipped.
I’ve made an example of what I’m talking about on Ellie. The app displays the frame delta in milliseconds and spawns a bunch of messages each frame. When `incrementsPerFrame` is small, the page hums along at 60fps. When it is 10000, it runs at 7fps on my computer.
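For anyone who can’t open the Ellie, here is a minimal sketch of the kind of setup described. The names (`Increment`, the model fields) are my own, not from the original example:

```elm
module Main exposing (..)

-- Sketch, not the original Ellie: each animation frame records the
-- delta and batches `incrementsPerFrame` messages as individual Cmds.

import Browser.Events
import Task


type alias Model =
    { delta : Float, count : Int }


incrementsPerFrame : Int
incrementsPerFrame =
    10000


type Msg
    = Frame Float
    | Increment


subscriptions : Model -> Sub Msg
subscriptions _ =
    Browser.Events.onAnimationFrameDelta Frame


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Frame delta ->
            -- Every Cmd below becomes its own message, and the runtime
            -- processes all of them before rendering the next frame.
            ( { model | delta = delta }
            , Cmd.batch
                (List.repeat incrementsPerFrame
                    (Task.perform (always Increment) (Task.succeed ()))
                )
            )

        Increment ->
            ( { model | count = model.count + 1 }, Cmd.none )
```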
I would expect the runtime to only process as many messages as it has time for each frame, keeping the unprocessed messages at the front of the queue for the next frame.
I would first like to point this out as a problem with an actual app that I have written, and second to ask if anyone has any workarounds that have worked for them.
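One workaround that can be applied in userland (a sketch under assumed names, not something proposed in this thread): keep the pending work in the model and drain only a fixed budget of items per animation frame, so rendering gets a slice of every frame.

```elm
module Throttle exposing (..)

-- Sketch of a self-throttling queue. `Item` and `budget` are
-- illustrative placeholders, not from the original app.

import Browser.Events


type alias Item =
    ()


type alias Model =
    { pending : List Item
    , done : List Item
    }


budget : Int
budget =
    100


type Msg
    = Frame Float


subscriptions : Model -> Sub Msg
subscriptions model =
    -- Only tick while there is work left to do.
    if List.isEmpty model.pending then
        Sub.none

    else
        Browser.Events.onAnimationFrameDelta Frame


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Frame _ ->
            -- Process at most `budget` items this frame; the rest
            -- stay in `pending` for the next frame.
            ( { model
                | done = List.take budget model.pending ++ model.done
                , pending = List.drop budget model.pending
              }
            , Cmd.none
            )
```

The trade-off is that total throughput drops slightly (work is spread across frames), but the page stays responsive while the queue drains.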
I think this is what I was looking for: Asynchronous parsing
The number of messages in the queue to be processed in an Elm app will usually be small. Messages represent user input events or callbacks from I/O (HTTP requests, etc.).
1000 messages per frame is a very strange use case; it doesn’t seem like there would be any value in optimizing the Elm runtime for it, because users can’t input at that rate, and if you’re doing 10,000 HTTP requests per second you’d struggle to make that fast anyway.
What are you doing to produce messages at this rate? It’s likely you don’t need to produce them that fast.
I created a package called elm-queue. I made it because I needed to perform hundreds or thousands of requests and updates on initialization, and the page was freezing.
It allows for two types of queues. Queues that need a pool, like HTTP requests, and queues that just need rate limiting, like lots of update operations that need to be performed sequentially.
I think it might help you the same way it helped me.
NOTE: I did some refactoring but I couldn’t fully test it yet. I published it in order to help you, so if you have any issues, please don’t hesitate to open a GitHub issue.
My use case involves making many I/O requests when it first opens. I put in some logging, and found that when opened, the app:
- loads 1577 resources
- receives 1624 messages
- in 44 frames
- over 2.452 seconds
I am able to process that many I/O operations in such a short time because they are all cached.
The view function is not a bottleneck here.
Thanks, @francescortiz, that looks like it could be useful. How do you envision that library being used?
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.