Exploration for Server-side Rendering

Sure!
The things I couldn’t find in Spades when I made that demo were:

  1. model (de)serialization,
  2. waiting for the view to settle.

My demo can wait for XHR and I think it should work with synchronous Cmds (like getting a random number or current time).
Spades could render a simple shell for your app, but not much more. I don’t know the current state of Spades.

Edit: looks like Spades only gives the app one “tick”, and then returns whatever got rendered: https://github.com/rogeriochaves/spades/blob/1c87b2ab7bfb2c865a43022268bb890df34b03de/boilerplate/server.js#L69

2 Likes

What if an app depends on some information that is only available on the client in order to do the rendering? Like checking the window width and height instead of using media queries (this is what you do with elm-ui, for example), or devicePixelRatio to decide which size to use for images, etc. As far as I know, there is no way to get this info on the server, so SSR might not be possible depending on how you write your code – which means (correct me if I’m wrong) that it’s not possible to implement a generic solution that would work for every case (and this problem is not specific to Elm).

I believe you’re correct: there are many subtle trade-offs and limitations like this that must be dealt with when implementing SSR.

From this article on React SSR:

  1. Make sure you don’t reference window or document

Your React components will now be rendered by your node backend. And your node backend doesn’t have the window or document global variable. That could lead to errors like this on server side:

window is not defined

If you get a similar error, you can put the code referencing the missing variable in an if statement like this:

if (typeof(window) !== "undefined") {
    window.localStorage = ...
}

So an SSR approach for Elm would need to deal with the fact that some effect handlers would not be supported in the pre-rendering environment.

As to the specific question of elm-ui, if I were implementing an app that I wanted to have SSR, I would be happy to accept the constraint that the content’s layout must work without JavaScript (perhaps doing progressive enhancement with JS for non-essential features).

In an ideal world, where Elm had first-class support for SSR, I would imagine that effect handlers that depended on the window object might support returning a result like NotSupportedOnServer, which you could handle as appropriate in your app.
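
For illustration, here is a rough sketch of how such a result might be modelled and handled – none of these names exist in the current core libraries; they are purely hypothetical:

-- Hypothetical: a window-size request whose result admits a server-side failure.
type ViewportResult
    = GotViewport { width : Float, height : Float }
    | NotSupportedOnServer

type Msg
    = ViewportReceived ViewportResult

type alias Model =
    { viewport : Maybe { width : Float, height : Float } }

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        ViewportReceived (GotViewport size) ->
            -- On the client: lay out against the real window size.
            ( { model | viewport = Just size }, Cmd.none )

        ViewportReceived NotSupportedOnServer ->
            -- On the server: fall back to a layout that works without it.
            ( { model | viewport = Nothing }, Cmd.none )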

Over the last year, I’ve been exploring the different approaches that people have taken to achieve SSR with Elm.

Rather than using JSDOM or Puppeteer, @eeue56’s elm-static-html-lib took a JavaScript object, decoded it in Elm (via a decode function referenced in Main), and passed the result into Main’s view function to render HTML on the server.

However, that library is deprecated as of 0.19, although it was forked by Daniel Wehner: https://github.com/dawehner/elm-static-html-lib.

The tradeoff with this approach was that the Model needed to mirror the JavaScript object exactly.
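
As a rough sketch of that constraint (not the library’s exact API, just the shape of the idea), the JavaScript object handed in from Node has to decode straight into the Model that view renders:

module Main exposing (Model, decodeModel, view)

import Html exposing (Html, h1, text)
import Json.Decode as Decode

-- The Model must mirror the JavaScript object handed in from the server.
type alias Model =
    { title : String }

decodeModel : Decode.Decoder Model
decodeModel =
    Decode.map Model (Decode.field "title" Decode.string)

view : Model -> Html msg
view model =
    h1 [] [ text model.title ]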

Across all of the different approaches, I haven’t seen a rehydration technique similar to the one used by @ktosiek. @ktosiek, if you can develop your elm-ssr-demo exploration further so that it encompasses tasks and commands, that would be awesome.

Also, as a side note, the elm-ssr-demo isn’t working for me now that we have the 0.19.1 update – I’m getting a “Mangling failed on regexp” error.

1 Like

In the local JavaScript community here in Melbourne, Australia, the #1 thing that people cite as a reason for holding off exploring Elm in my conversations with them is lack of support for server-side rendering.

Do they ever say why SSR is so important?

If the concern is SEO, I know there have been discussions around rendering your app in Puppeteer and serving up that response.

If the concern is time to first render, wouldn’t Elm’s tiny build size be of benefit over other frameworks? Also wouldn’t research around code splitting be of more use as you could get your build size to be even smaller?

From my point of view, it’s more that we don’t have an approach documented in the Elm documentation.

See “reasons for wanting this” in my original post above. In short: performance, SEO and accessibility.

Spinning up a full (headless) browser on the server is needlessly resource-intensive, and not especially fast unless you’re maintaining a fleet of “hot” browsers waiting to render things.

Yes, but it’s a case of “why not both?” Rendering with Elm is faster than React (say), and doing the initial render on the server is faster than doing it on the client. Combine the two for maximum speed.

Again, these are all beneficial performance measures. One does not preclude the usefulness of another.

That said, there would be value in measuring their relative benefits in real-world scenarios to see which should be the priority if people need to choose one to focus efforts on.

1 Like

I can vouch that this approach is a very hard problem to tackle. Securing and sandboxing the browsers (because they are running what could potentially be malicious user input, and the last thing you want is a compromised server) and doing the proper queueing and resource balancing are hard problems. Chromium and Puppeteer under load can have very unpredictable behavior.

At work we have something like this for rendering PDFs from web content, and it was a long and challenging project to get it sufficiently right for production usage. Now, in the maintenance stage, keeping up with Chromium upgrades and the OS upgrades of the servers is also a non-trivial amount of ongoing work.

YMMV, but I advise caution with this approach for any non-trivial use cases.

1 Like

I believe services like prerender.io don’t have a hot browser doing the prerendering while the request happens. Rather, it is more like a cron job that crawls and caches the pages periodically. You then have a server in front of the prerender.io cache that serves the cached pages only if it detects you are a search engine bot. Regular users are served the regular index.html, which then starts loading the JS bundle.

I think it makes sense to categorize the prerender approaches into 1) solutions that crawl and cache periodically and 2) solutions that do actual SSR upon request.

For the latter, I agree it would likely be too slow to do it with a headless browser.

Server-side rendering of dynamic (user- or query-specific) responses is the gold standard that the React community here looks for in alternatives.

1 Like

One issue with taking static impressions of an Elm app is knowing when to do it – when to consider that the page has settled. The one point where it can be done deterministically is right after the init function: take the view that results from that first Model. Any other time could be non-deterministic, since init could return a Cmd.batch and the order in which those commands’ results hit update cannot be guaranteed. Consider starting a timer and issuing an HTTP request: which will complete first?
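
As a minimal sketch of that situation (the names here are made up for illustration), both commands are started from the same Cmd.batch, and nothing guarantees which Msg reaches update first:

import Http
import Process
import Task

type alias Model =
    { data : Maybe String }

type Msg
    = TimerFired ()
    | GotData (Result Http.Error String)

init : () -> ( Model, Cmd Msg )
init _ =
    ( { data = Nothing }
    , Cmd.batch
        [ -- a timer...
          Process.sleep 500 |> Task.perform TimerFired
        , -- ...and an HTTP request; either could complete first
          Http.get { url = "/api/content", expect = Http.expectString GotData }
        ]
    )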

There are SSR techniques that work with Elm already; I am trying to think of things they cannot do, and whether the Elm compiler or runtime could support them better.

Would having some event you can listen to, in order to know when init has completed (or even when update or view are being run), be something that needs to be added to the Elm runtime?

A technique I’m using is to notify a port when all the necessary effects have been processed by update. Once the JS (running JSDOM in my case) receives that, it needs to know when the next view call has completed. Unfortunately there is no callback for that, so I set up a mutation observer and assume that the next mutation means the virtual DOM is done rendering.

This works OK without too much manual work (basically a single out port), but feels a bit brittle. A callback after view rendering would be much nicer.
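
Concretely, the Elm side of this only needs a single outgoing port, something like the sketch below (the port name is just illustrative):

port module Main exposing (..)

import Http

-- Hypothetical port name: the JS host subscribes to this, then waits for
-- the next DOM mutation before snapshotting the rendered HTML.
port renderSettled : () -> Cmd msg

type alias Model =
    { content : Maybe String }

type Msg
    = GotContent (Result Http.Error String)

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        GotContent (Ok content) ->
            -- The last outstanding effect for the initial render has arrived:
            -- store it and tell the host that the next view will be the settled one.
            ( { model | content = Just content }, renderSettled () )

        GotContent (Err _) ->
            ( model, renderSettled () )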

1 Like

Yes, I think you are right, that is where it needs to go. It’s not after init, it’s after the first view.

Using a port to signal when all necessary effects are completed is one way to do it. That means each SSR framework needs to define which port to use, so I guess this could be another built-in that the Elm runtime could provide.

module SSR exposing (..)

{-| Once this command has been run the `onViewComplete` event 
will have the `settled` flag set to true.
-}
settled : Cmd msg

I think always taking settled as being on the first view has an advantage, though – it does not allow any effects to be run. This means that SSR doesn’t have to worry about, say, Browser effects that won’t work right in that context.

For example, often one of the first things my SVG apps do is to get the window size or an element size, to help set up an SVG canvas that is 1:1 with the pixels. In an SSR context there is no window size to know, since that doesn’t come in with the GET request for the static page. I might still render some framework for the page, and then do the SVG drawing after rehydration.

This would save some effects from having to change their APIs to report that they are not available in an SSR context.

I’d say that’s an untenable restriction. It is very common for Elm apps to init into a “Loading” state, which then fetches content from the server. Only once that content is loaded and the view updated is the view content-complete, and it’s that view that we want to send from the server, so that search engines and clients without JavaScript can access the content.

If the Elm runtime were going to attempt to pick a sensible moment to declare the view “stable”, I’d say it would be the first view after all of the commands triggered by init have run and the resulting updates have been processed. But even that would be full of edge cases.

Server-side rendering an empty shell with a loading spinner is not very useful.

@rupert the “SVG canvas” type of application you’re thinking of doesn’t strike me as a particularly good candidate for server-side rendering. One local business that depends heavily on SSR, for example, is a job search website, which has category pages that initially list the newest jobs in that category, but which provide a rich set of filtering controls that you can apply and have the search results update (with each filter state having its own bookmarkable/shareable URL). That company considers it vital to have those job listing pages (as well as the individual job pages) load very fast (because search engines rank fast-loading pages higher), with complete accessibility to search engines and people with disabilities.

As far as I see, there are 2 things that need to happen:

  1. A way to do the server-side render
  2. A new way to initialize the Elm app

The simplest model that comes to my mind for the server-side render is a Task that produces a String containing the rendered HTML. Alternatively it could produce the model and have the conversion to String happen automatically. All the commands that end up in init would have to be converted to a chain of tasks in order for this to work. I’m expecting that most init code could be converted to a Task.
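
To make that concrete, a hypothetical API along those lines might have a shape like this – none of it exists today, and the module and function names are made up:

module Server exposing (renderToString)

import Html exposing (Html)
import Task exposing (Task)

{-| Hypothetical: resolve the initial data as a Task, build the first Model
from it, and render the corresponding view to an HTML string.
-}
renderToString :
    { init : Task Never model
    , view : model -> Html msg
    }
    -> Task Never String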

The initialization part is a little bit trickier. I cannot figure out how to do it without adding yet another parameter to the Program type. In essence, init needs to receive a new data type representing the payload of a message that would put the initial model into the “after the initial update” state.

2 Likes

In my demo I’m mocking requestAnimationFrame to know when the rendering is over. Maybe that would work for you too? You could wait for the “settled” event from your update, and then run all pending RAF callbacks.

1 Like

Yes, that would be the disadvantage of doing things that way.

You would have to load the data outside of the Elm app and pass it in as a flag.
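
For illustration, a minimal sketch of that flag-based approach (the field and decoder names are made up): the host fetches the content first and hands it to init, so the first view is already content-complete.

import Json.Decode as Decode

type alias Model =
    { content : Maybe String }

contentDecoder : Decode.Decoder String
contentDecoder =
    Decode.field "content" Decode.string

-- The server (or prerender step) fetches the data up front and passes it
-- in as a flag, so init can build a content-complete Model with no commands.
init : Decode.Value -> ( Model, Cmd msg )
init flags =
    case Decode.decodeValue contentDecoder flags of
        Ok content ->
            ( { content = Just content }, Cmd.none )

        Err _ ->
            ( { content = Nothing }, Cmd.none )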

That is how elm-pages works, but it has all the initial data for a view as metadata associated with the content it wants to render – so it already imposes restrictions on how you structure your app for SSR. I still feel this is the best way, though.

Hi all, I’m the author of elm-pages and since it’s come up on this thread (and this thread has been discussed a bit in the #elm-pages channel in the Elm slack), I thought I’d chime in here.

If you want to see a live site with elm-pages (including Pre-Rendering and all), this is a good example page:

https://elm-pages.com/blog/static-http

The Pre-Rendering handoff to the client-side Elm seems to be working quite smoothly.

Summary

In a nutshell, right now elm-pages uses Server-Side Rendering and gives you all the SEO and performance benefits there.

Server-Side Rendering (SSR) vs. Pre-Rendering

elm-pages doesn’t technically do SSR, but the effect is very similar. It would be more accurate to say that it does Pre-Rendering. It simply uses Puppeteer via webpack under the hood to go through and render all the static routes in your elm-pages app.

So it technically doesn’t hydrate the pre-rendered Elm app, but rather it serves up and renders the pre-rendered HTML, and once that’s done it fetches the Elm bundle and initializes a fresh Elm app which then takes over the DOM and replaces it with the same content. That’s more of an implementation detail, though. From the user’s perspective, I’ve found that there isn’t an observable difference.

Blemishes with current approach

There is one issue which has come up, but I think it’s more of a virtual-dom bug than an inherent shortcoming with the Pre-Rendering approach.

The issue is that <img> tags reload when the Elm code takes over the DOM from the Pre-Rendered HTML. I believe this is caused by https://github.com/elm/virtual-dom/issues/144, which @ktosiek mentioned in the https://github.com/ktosiek/elm-ssr-demo readme. It seems that you can work around this, though, by using Html.Attributes.attribute "src" rather than Html.Attributes.src, as @ktosiek does here: https://github.com/ktosiek/elm-ssr-demo/blob/c4cb2e270edf8e284da6316b7bffbe48ebb9dcae/src/Main.elm#L135. Also, it doesn’t even seem to be a problem with the way that modern browsers do in-memory caching (they reload the image, but it seems to be instant so there’s no flash unless you have cache turned off in dev tools).
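
Condensed into code, that workaround looks like this (the view function names are just for illustration):

import Html exposing (Html, img)
import Html.Attributes

-- Using the `src` property: the image reloads when the client-side app
-- takes over the pre-rendered DOM (elm/virtual-dom issue #144).
viewWithProperty : String -> Html msg
viewWithProperty url =
    img [ Html.Attributes.src url ] []

-- Workaround: set `src` as a plain attribute instead.
viewWithAttribute : String -> Html msg
viewWithAttribute url =
    img [ Html.Attributes.attribute "src" url ] []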

Other than that, it appears that taking over the DOM has no disadvantages… you could have CSS animation keyframes or anything else and it seems that it’s all handled well and transitions over without being noticeable. At least I haven’t been able to find anything else that causes jankiness. If you know of anything else, I’d be curious to hear about it!

Pre-fetching data on the server

I think that elm-pages solves this problem pretty nicely with the StaticHttp API. This gives you a way to fetch data when the site is built, so you can display that data in your initial pre-rendered view (rather than loading spinners).

I know this isn’t necessarily something that would be universally helpful, but I think it’s been working well for elm-pages.

SEO for user-specific data

elm-pages also allows you to use any data that you fetch from your StaticHttp requests (which could include hitting a CMS with public user data) to build up both your views and your <head> tags for SEO. I know that a JAMstack approach isn’t the right solution for a lot of products, so of course if that’s not the right architecture for your app then you wouldn’t be able to leverage something like the elm-pages StaticHttp API.

Keeping Server and Client Renders Consistent

The approach that elm-pages takes is that it provides the StaticHttp API to let you feed initial data into your Pre-Rendered page. So you have StaticHttp data immediately available, without going through any update cycles at all. Then, you can get any other data from your update (like say a more real-time data feed, like a sports score… StaticHttp data is updated every time you build your site, which you can trigger as needed, but not every minute). But that update function doesn’t get called at all in the Pre-Rendering phase for elm-pages. I think this is sufficient to allow you to load in what you need so you have it on init.

There are certain types of data that you have access to in your elm-pages init function which will not be present when it is Pre-Rendered:

  • Initial URL fragment and query parameters (Pre-Rendering happens at build-time, not when a page is requested, so it doesn’t know which fragments or query parameters will be used in advance)
  • Flag values can be different between the server render and the client render (for example, the dimensions of the browser window)

So as long as you’re mindful of not depending on data which will differ between the Pre-Render and the client-side render, you can create a really seamless experience I think.

I don’t think that these considerations are very different from an SSR approach, except for the case of the URL fragment and query parameters. But this is just a design decision for the types of problems that elm-pages is trying to solve (sites that you can serve up ridiculously cheaply, securely, and performantly using a CDN rather than a traditional server).

If there was a different philosophy, you could imagine a framework that uses a similar approach but performs the Pre-Rendering step on-demand for each request from a server. That would allow you to pass the exact URL to the Pre-Rendering step.

Performance Tradeoffs with SSR/Pre-Rendering

In terms of the performance benefits, it’s worth considering the tradeoffs with Pre-Rendering and SSR. If the user is logged in, then they’re probably a repeat visitor, in which case using a service worker to cache the application shell seems like the appropriate optimization approach to me.

Pre-Rendering and SSR lead to a faster initial render (First Contentful Paint), but they lead to a larger amount of data being fetched (because you have to download the HTML and the JS bundle). And because more work has to be done overall (parsing and rendering the full HTML first, and then loading up the JS bundle), this tends to slow down the Time to Interactive (TTI).

elm-pages does some optimizations with service worker caching, too, but it’s a really tricky area that I’m still working on.

Takeaways

I hope that gives some food for thought! I think that SSR is something that has a lot of different potential design decisions that could be made, many of which are imperfect. So I think it’s really good to keep specific use cases in mind, and it would be productive to hear some very specific use cases that people need SSR for – for example, building something like Discourse, where you need to fetch data from the server and then add some SEO tags to make sure the site is accessible to and performant for all web crawlers.

Would love to hear what types of problems people would like to solve using this type of SSR functionality, and how SSR would help them solve it, so we can drive the discussion based on real-world use cases.

10 Likes

This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.