How to continuously update model?


I want to take analytics on continuous data and do live visual analysis on it. I simulate playing a batch of games that have a chance component by emitting a Cmd Msg that requests a random value. After computing the analysis of the batch, I use that analysis to simulate more games, yielding another random Cmd Msg, so the games keep feeding into each other.

So in my update function, I have something like this:

update msg model =
    case ( msg, model ) of
        . . .

        ( GameGenerated analytics, RunningSimulations gameParameters ) ->
            let
                newGameParams =
                    updateGameParametersBasedOnAnalysis analytics gameParameters

                nextMsg =
                    generateRandomCmdMsgFromGame GameGenerated newGameParams
            in
            ( RunningSimulations newGameParams, nextMsg )

        . . .

so update emits a GameGenerated Msg, which triggers update to emit another GameGenerated Msg, which triggers update . . . ad infinitum. I’ve identified that this section causes an update loop that crashes the browser, perhaps because it runs out of memory.

How do I show the live progress of this computation? I don’t want to slow the computation to the pace of an onAnimationFrame subscription.

Thomas Kagan


That is unavoidable: if you do expensive computation on the main thread, you will block it. A web worker seems like a great fit for your use case, given that you don’t want to split up or slow down the computation.
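For the plumbing, a minimal sketch might look like the following. All module, file, and port names here are made up; the only assumption from the real platform is that Elm’s Platform.worker produces a headless program you can run inside a Web Worker. This is a browser-only fragment shown as an illustration, not a tested implementation:

```javascript
// worker.js -- runs inside the Web Worker (browser only).
// Assumes the simulation is an Elm Platform.worker program compiled to
// sim.js, exposing hypothetical ports `runBatch` (in) and `batchDone` (out).
importScripts("sim.js");

var app = Elm.Simulation.init();

// Results flow from Elm out to the main thread.
app.ports.batchDone.subscribe(function (analytics) {
  postMessage(analytics);
});

// Requests flow from the main thread into Elm.
onmessage = function (event) {
  app.ports.runBatch.send(event.data);
};
```

On the main thread, the view application would then do something like `var worker = new Worker("worker.js"); worker.onmessage = function (e) { /* feed e.data to the view */ };` and post new game parameters with `worker.postMessage(params)`, so the heavy loop never touches the UI thread.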


Good suggestion. How would I approach implementing that? Do you have resources or tools you recommend?


And some time reading and experimenting is all you need.

There are other resources if you Google, but they are for older Elm versions and use JS tooling that complicates things.

Good luck! An OSS example of this would be a valuable contribution for future question askers.


Thank you for the resources, they are valuable and saved me a good deal of time exploring!

I may also look for an open source example, and post it here if I find one.


I started implementing the infrastructure for web worker communication. The actual game never came to be, but the plumbing is still there if you want to have a look.


Awesome, thanks for mentioning!


I have tried this approach myself and found that I was able to run a continuous update loop while the application remained responsive. If your application is crashing, perhaps something in your computation is causing it. For example, a function that is not tail recursive can overflow the stack.

I thought I had saved my experiment with this to GitHub, but I cannot find it, so I shall just describe it here.

My ai-search package has a function that lets me run a search for a certain number of loops and then yield with a continuation, so the rest of the search can be carried on later.

I set up a dummy search that just loops forever without ever finding a goal. Then I tried running this with between 1 iteration and 100,000 iterations per update. At the end of each update iteration a Cmd was returned that would continue the search in the next one.
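The yield-with-a-continuation pattern can be sketched in plain JavaScript (this is just an illustration of the idea, not the actual ai-search API; the search here is a dummy counter):

```javascript
// A "search" is represented as a thunk: calling it performs one step and
// returns either { done: result } or { next: <thunk for the rest> }.
function countingSearch(n, goal) {
  return function () {
    if (n >= goal) {
      return { done: n };
    }
    return { next: countingSearch(n + 1, goal) };
  };
}

// Run at most `maxSteps` steps, then yield: either a finished result, or
// a continuation that can be resumed in a later update.
function runSteps(search, maxSteps) {
  var current = search;
  for (var i = 0; i < maxSteps; i++) {
    var outcome = current();
    if (outcome.done !== undefined) {
      return { done: outcome.done };
    }
    current = outcome.next;
  }
  return { next: current };
}
```

For example, `runSteps(countingSearch(0, 250), 100)` yields a continuation after 100 steps, and calling `runSteps` again on that continuation finishes the search. In the Elm version, the continuation is what the Cmd carries forward into the next update.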

At 1 iteration per update, the UI remained reasonably responsive, but the search progressed quite slowly. At 100,000 iterations, the UI was very unresponsive, but the CPU got much nearer to 100% and the search progress was fairly good. I came to the same conclusion as the other responses in this thread: it would be better to put the whole search in a web worker background thread.

Try simplifying your game logic, or temporarily removing it from the update loop altogether. Maybe just replace it with something that increments a simple counter on each update. Hopefully you will see that the UI remains responsive and the application does not crash.
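A sketch of that sanity check, with the Elm update cycle stood in for by a plain JavaScript reducer (all names here are made up for illustration):

```javascript
// A stand-in for the Elm update function: each Tick increments a counter
// and asks the runtime to send another Tick, mimicking the Cmd loop.
function update(msg, model) {
  if (msg.type === "Tick") {
    return { model: model + 1, cmd: { type: "Tick" } };
  }
  return { model: model, cmd: null };
}

// Drive the reducer the way the runtime would, for a bounded number of
// rounds, to show that the message loop itself is cheap.
function drive(rounds) {
  var model = 0;
  var msg = { type: "Tick" };
  for (var i = 0; i < rounds && msg !== null; i++) {
    var result = update(msg, model);
    model = result.model;
    msg = result.cmd;
  }
  return model;
}
```

If `drive(100000)` completes near-instantly while your real game logic in the same loop crashes the tab, that points at the computation (or the rendering it triggers), not at the self-feeding Cmd pattern itself.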


That is a good point; it would definitely make sense that it is slow because the view is perhaps re-rendered too quickly, and it is something I considered. But since the entire interface is SVG, I’m not sure how to optimize that. So it seems that a web worker is indeed the best route, feeding interspersed updates to the view application at onAnimationFrame intervals.

But I don’t really know the implementation details of Elm, so this may not be an accurate theory, in which case a web worker may not help.


Take a look at elm-queue. It is a simple pool queue I wrote. I wrote about it, with an example, in this other post: Many messages causes slow rendering


I am working on a library that would enable you to run large computations without blocking the main thread.

The idea is that you define a step function, and the library offloads the computation in batches so that there are no dropped frames (or at least a minimum of dropped frames).

I am currently tied up finishing some academic work, but I expect to finish the library by mid-summer.
If anyone is interested in the details and strategies used, feel free to ping me; I will send you an unpolished version along with ideas on how to finish it.


This is exactly what I did. The problem is that you can get high CPU usage and a very slow UI, or low CPU usage and a less slow UI, and there isn’t really a sweet spot to be found between the extremes. A framework for offloading to a web worker would be more useful.


@popara That may work, but the games are already batched, so I could use it to batch a batch of games? I think the rendering, not the computation, is probably the issue. But thanks for the suggestion!


@francescortiz That library seems like it could help with my issue, especially in combination with web workers. Thanks for mentioning it!


Yes, and the hard part that I am working to nail is finding that sweet spot on the fly. Still, I never imagined this being used for gaming; my original intention was to provide a tool for doing interesting data-processing work without blocking the UI.

Now, I would agree that offloading to workers may be the better way, but there is a big price to pay: you can only pass string values to the worker, which means that you have to serialize your 30k-item array and deserialize it on the other end before you can actually start working on that data.

There is also the problem that you would either be writing JS on the worker side, or you would have two Elm instances running and would need to (somehow efficiently) communicate between them.

I can almost sense that this is a thing that should be solved at the platform level, e.g. you mark a function that you would like to promote to a worker, and the Elm compiler takes care of the rest, similar to ports.


You can pass a JSON object, no?

But yes, the encoders/decoders will add some overhead. You’re going to have to think about minimizing the amount of data passed on each use. For example, is there some delta you can pass each time rather than a full data set?
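As an illustration of the delta idea, assuming the state is a flat array of items (a hypothetical shape; a real game would diff whatever its own state looks like):

```javascript
// Compute which indices changed between two same-length arrays, so only
// those entries need to cross the worker boundary.
function diff(oldItems, newItems) {
  var changes = [];
  for (var i = 0; i < newItems.length; i++) {
    if (oldItems[i] !== newItems[i]) {
      changes.push({ index: i, value: newItems[i] });
    }
  }
  return changes;
}

// Apply a delta on the other side of the boundary, without mutating
// the original array.
function patch(items, changes) {
  var copy = items.slice();
  changes.forEach(function (change) {
    copy[change.index] = change.value;
  });
  return copy;
}
```

If only a few of the 30k items change per batch, the delta is a few entries instead of the whole array, which shrinks both the serialization cost and the message size.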

That would be ideal but unfortunately is not possible. The worker does not run in the same context as the main thread, and a continuation typically bundles up some of the stack context. You can’t pass around functions, sadly. It’s a bit less like threads and a bit more like OS processes.


I totally don’t have time to actually do the experiment, but if I did, what would be the best way to test the feasibility of the idea?

Maybe some benchmarking of the data bandwidth and the frequency with which you can send data back and forth?
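One simple experiment along those lines: time a full JSON round trip of a payload the size you expect to ship, and compare it against your frame budget (the item count and shape below are arbitrary stand-ins for the 30k-item array mentioned above):

```javascript
// Build an array of `itemCount` small records and time a full JSON
// round trip, which approximates the per-message overhead of a
// string-based worker boundary.
function roundTripMs(itemCount) {
  var items = [];
  for (var i = 0; i < itemCount; i++) {
    items.push({ id: i, score: i * 0.5 });
  }
  var start = Date.now();
  var decoded = JSON.parse(JSON.stringify(items));
  var elapsed = Date.now() - start;
  return { elapsed: elapsed, count: decoded.length };
}
```

Comparing `roundTripMs(30000).elapsed` against a 16 ms frame budget gives a rough upper bound on how often you can afford to ship the full data set rather than a delta.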


This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.