Long-running computations in Elm that won't freeze the page

I just released mgree/trampoline 1.0.0 (github), a library for supporting long-running computations in Elm. Intensive, long-running computation can block message processing, which makes the page unresponsive. My library offers support for writing such long-running computations and running them without blocking the UI or other parts of your program.

The key idea is cooperative multitasking: suspendable computations do only a few steps of work at a time and then “bounce off the trampoline,” restarting themselves by sending a message. Other messages then have a chance to be delivered and processed before the update function sees that it should continue running the computation.
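To make the pattern concrete, here is a minimal sketch of the general idea (the names here are mine, not the library’s actual API): the update function does a bounded chunk of work and then schedules another message to itself, so anything else in the queue gets processed in between.

```elm
module Bounce exposing (Model, Msg(..), update)

import Task

-- Hypothetical message type for this sketch.
type Msg
    = Continue
    | Stop

type alias Model =
    { count : Int, running : Bool }

-- Do a bounded chunk of work, then "bounce" by scheduling another
-- Continue message; queued UI messages get processed in between.
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Continue ->
            if model.running then
                ( { model | count = model.count + 1000 }
                , Task.perform (\_ -> Continue) (Task.succeed ())
                )

            else
                ( model, Cmd.none )

        Stop ->
            ( { model | running = False }, Cmd.none )
```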

There are two ways to write suspendable computations. One is to simply define your computation as a stepper, which takes an input of type a and either yields another a (for more stepping) or a result of some (possibly different) type o. The other is to write your program in a monadic style, in what I’ve been calling the Fueled monad.
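As a rough illustration of the stepper shape (these type names are mine and may not match the package’s actual API): a step function consumes a state of type a and either produces another a to keep stepping or a final result of type o.

```elm
-- Illustrative stepper types; the package's actual names may differ.
type StepResult a o
    = Stepping a
    | Done o

type alias Stepper a o =
    a -> StepResult a o

-- Example stepper: count up to a bound, one increment per step.
countTo : Int -> Stepper Int Int
countTo bound n =
    if n >= bound then
        Done n

    else
        Stepping (n + 1)
```

A driver can then apply such a stepper some fixed number of times per bounce before yielding back to the runtime.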

I’ve hacked up several examples, with two deployed on mgree.github.io/trampoline (github): a diverging, upwards-counting computation, and an STLC (simply typed lambda calculus) interpreter. They are, of course, hideously ugly. But in both examples, the UI remains responsive even when running long computations.

This is my first Elm library, so feedback would be appreciated. I’m hopeful that this approach could, for example, let the lambda calculus REPL posted recently handle long-running and diverging programs gracefully.

I experimented with this idea previously:

https://package.elm-lang.org/packages/the-sett/ai-search/latest/Search#nextN

This is a graph search package, and you can compute the next N steps of a search, in order to then take a break and continue after processing some Cmds.

The technique does work. What I found, though, is that if you make the steps too big, the UI is still very clunky. If you make the steps too small, the UI is more responsive but not perfectly so, and the long-running computation does not proceed very fast, since it is constantly being stopped and turned into a continuation. Building the continuation adds a lot of overhead. So the technique is viable for some use cases, but it has limitations.

If you need a background computation to run faster without blocking the UI, I think the best technique is to use a Web Worker thread. This is more complex, and it requires the UI side to serialize the job with JSON Encoders/Decoders in order to pass it to the worker thread and get the results back.
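For anyone curious what that looks like on the Elm side, here is a rough sketch using ports (the port names and the job shape are made up for illustration); the JavaScript glue that actually owns the Worker and calls postMessage is omitted.

```elm
port module WorkerBridge exposing (encodeJob, jobResults, sendJob)

import Json.Decode as Decode
import Json.Encode as Encode

-- Outgoing: the main app encodes the job and hands it to JavaScript,
-- which forwards it to the Web Worker (e.g. via postMessage).
port sendJob : Encode.Value -> Cmd msg

-- Incoming: results come back from the worker as JSON to be decoded.
port jobResults : (Decode.Value -> msg) -> Sub msg

-- A made-up job shape, just to show the encoding step.
encodeJob : { input : String, maxSteps : Int } -> Encode.Value
encodeJob job =
    Encode.object
        [ ( "input", Encode.string job.input )
        , ( "maxSteps", Encode.int job.maxSteps )
        ]
```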

Still, good to see a package for this, thanks for making it. :+1:

Neat library!

I’ve used Web Workers before (to run Z3 to assign numbers to acrostics). I haven’t had a need for elm-web-workers, but it does raise a question: how would you write a worker that (a) did lots of work, but (b) was interruptible? Simply having a worker avoids stalling the UI, but you might still need a way to say “stop”… at which point you have to structure your computation with manual techniques like those in your ai-search, my framework, or something similar.

As for the “tricky interplay”: I agree! My plan is to make it adaptive: given a target step time (say, 15ms), I should be able to tune the number of steps taken in each “bounce” of the trampoline. (‘Plan’ should be understood loosely here: this was a side project of a side project, and I don’t know when I’ll have time or attention for it again.)
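Something along these lines, as a rough sketch (the function name and the 15ms target are illustrative, not part of the package): scale the step count for the next bounce by how far the last bounce’s elapsed time was from the target.

```elm
-- Illustrative adaptive tuning; not part of the package's current API.
targetMillis : Float
targetMillis =
    15

-- Given how long the last bounce took and how many steps it ran,
-- pick a step count for the next bounce that aims at the target time.
nextStepCount : Float -> Int -> Int
nextStepCount elapsedMillis steps =
    if elapsedMillis <= 0 then
        steps * 2

    else
        max 1 (round (toFloat steps * targetMillis / elapsedMillis))
```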

I meant to link to Stopify, a JS-to-JS compiler that makes programs suspendable. It’s not quite comparable to what I’ve done, but it works on a roughly similar principle: instrument the program so that it has explicit continuations, then offer a small library for managing them. I’d be curious to see how well Stopify plays with Elm. In principle, it should be possible to integrate Stopify into the Elm runtime in general, which would obviate the need for my library.

The trickiest bit is combining both features: independent threads (à la web workers) and concurrency control (stop/go/reset messages). My original plan was to write an effect manager, but… here we are.

Yes, I think that is true. At least you could use a large computation step size, so it runs faster without stalling the UI. It would have to pause periodically to check whether a message has been sent asking it to stop. So the stop might not be immediate; it may continue to run for a bit before it realizes it needs to stop.

Do Web Workers provide some way to set an ‘interrupt flag’ on the thread from the main application thread? Even if they did, I have no idea how to check the interrupted status from within an Elm program.

That was my plan too. I sort of gave up, though, when I realized that no step/pause size was really satisfactory.

I did notice there is a terminate method on the worker threads:

“The worker thread is killed immediately.” A little draconian!

That was my plan too. I sort of gave up, though, when I realized that no step/pause size was really satisfactory.

Oof. That may well be the case for interesting, long-running computations. :cry: Stopify definitely gets things to work—they use it to instrument students’ JS code to make it pausable/breakable for an intro course—but their methods are much more invasive.

This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.