SEO, Google Lighthouse


I have successfully used Elm for back office web applications in production, where SEO is not a concern. For my next project I’d like to start using Elm for a consumer-facing website too, so SEO is of great importance.

I have a prototype that includes all the essential properties of the website: JS libs, fonts, CSS, images, dynamic loading of data from the server via GraphQL, and so on. The view is implemented completely in Elm.

When running Google Lighthouse I get 99/100/100/100 for performance/accessibility/best practices/SEO. So my question is: am I missing something? I always thought SEO with Elm was a problem.

The hosting service I’m using offers free pre-rendering, but I haven’t enabled it yet, because the SEO score is already excellent.


I am no expert on this topic, but AFAIK Googlebot has no problem rendering JS-based applications, hence the good score.

Unfortunately, the same can’t be said of other search engines. So I think this may be a move by Google to widen their moat against their competitors: making you feel like your website is going to work fine, when actually it is only findable on Google.

That said, given the relative market shares, you may not care.


We are running Elm SPA frontends on several sites where SEO is important (e-commerce) and we have had no problems with Google indexing. We are not using any server-side pre-rendering.

Bing didn’t work some time ago, but it looks like it renders now when I test it in their test tool. It used to be blank.

Some of our sites running Elm:


As far as I remember, HTML indexing is done very quickly by Google, while a full render with JS is put on a queue and run as soon as their servers have time, which can take days. So if quick re-indexing of your site is important, that can be worth keeping in mind (and worth checking whether it’s still the case!).


Interesting, Bing finally seems to work. I remembered either elm-conf or elm-japan not showing up in DuckDuckGo, but now it seems fine :tada:

@dillonkearns I’d love to hear your opinion on this. Could the elm-pages SEO module be extracted so it can be used outside of elm-pages? As I understand it, elm-pages ensures best practices are used for SEO. But is there a reason why it couldn’t be dynamic instead of static?

I would think of Lighthouse similarly to how I think of TypeScript errors. If it finds something, it’s likely a problem I want to fix. If it succeeds, that doesn’t necessarily mean that I won’t have any type-related issues. I would treat your Lighthouse score as a starting point for finding easy-to-spot issues. But having a 100% SEO score on Lighthouse doesn’t mean there couldn’t be a lot of opportunities for improvement. In fact, I believe they list some recommended manual steps, which of course don’t affect your score.

SEO with/without Pre-Rendering

Jeroen and I discussed this question of SEO with pre-rendered vs. non-pre-rendered SPA apps in this episode of Elm Radio:

What I consistently hear from SEO experts is that Google supports it (though some other search engines don’t yet), but you can still pay a penalty where there is a lag before they index new pages. So it’s generally considered to be helpful to pre-render pages. Also, I haven’t tested it, but I would imagine certain services might not go to the trouble to execute JS in order to extract meta tags (OpenGraph, Twitter card tags, etc.) to present a nice link preview. Maybe my intuition is wrong there and every service consistently runs JS to extract meta tags, but I would guess that it’s more flaky without pre-rendering.

Also note that you’re talking about a delay for adding head tags beyond parsing and running JS. It sounds like you’ll have an additional delay of waiting for server responses before you can add your head tags. I’m not sure if this would cause any problems, but it’s worth doing some testing to make sure you don’t get flaky results because of this. elm-pages solves this with StaticHttp.

Extracting the SEO API from elm-pages

You’re more than welcome to use the SEO API and JS code, for personal use or in other frameworks. Here’s how it adds head tags:

And here’s where it serializes the Elm data types to JSON:

Note that it does have some things that are specific to elm-pages, like the canonicalSiteUrl, but you could find another way to set that.
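To make the idea concrete, here is a rough sketch of what serializing head tags to JSON and preparing them for the DOM could look like. The JSON shape and field names here are made up for illustration; they are not the actual elm-pages wire format:

```javascript
// Hypothetical head-tag entries as they might arrive from Elm via a port,
// serialized as plain JSON (shape is illustrative, not the elm-pages format).
const headTags = [
  { name: "meta", attributes: [["property", "og:title"], ["content", "My page"]] },
  { name: "link", attributes: [["rel", "canonical"], ["href", "https://example.com/"]] },
];

// Turn one entry into the tag name plus an attribute map, which the JS side
// could then apply with document.createElement / setAttribute in the browser.
function toElementSpec(tag) {
  return {
    tagName: tag.name,
    attributes: Object.fromEntries(tag.attributes),
  };
}

const specs = headTags.map(toElementSpec);
console.log(specs[0].attributes.property); // → "og:title"
```

The point is simply that the Elm side only needs to emit data; all the DOM mutation stays in a small, framework-agnostic piece of JS.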


For Twitter cards, it’s possible to test them here:

Yeah, those testing tools are helpful.

There are some docs about the crawler that Facebook uses for Open Graph previews here. They don’t run JS in their preview crawlers at the moment.

You can simulate a crawler request with the following code if you need to troubleshoot your website:
curl -v --compressed -H "Range: bytes=0-524288" -H "Connection: close" -A "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)" "$URL"

How did you make each page have a different title and meta description?
Is there some trick involved, like generating static HTML?
One page has the title “Ekeby Möbler: Möbelvaruhus i Helsingborg Skåne - Ekeby Möbler”
and another has “Stolar på nätet - klassiska designstolar online - Ekeby Möbler”.


I set them dynamically and optimize for crawlers with a service that, as I mentioned above, is included in my PaaS.

We render all meta data in the head on the server. We also have a port to set the title from the app when changing pages, since it is visible in the browser tab.
We actually don’t update the other meta tags (robots, description, og:, etc.) dynamically, since they are only used by crawlers, and crawlers always fetch the page from the server. We haven’t yet encountered any crawler that simulates clicks on navigation elements and thereby triggers a urlChange event in the app. As far as I know, they fetch the URL from the server, some of them render the page, and then they scrape the DOM for information. Of course, at some point the crawlers will become “smarter” and we would need to update all meta tags.
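For reference, the JS side of such a title-setting port might look roughly like this. The port name `setTitle` and the wiring are illustrative, not taken from our actual code; the fake objects just let the sketch run outside a browser:

```javascript
// Hypothetical sketch: subscribe to an Elm port and update the tab title
// whenever the app changes page. In a real app, `app` comes from
// Elm.Main.init(...) and `doc` is the browser's `document`.
function wireTitlePort(app, doc) {
  app.ports.setTitle.subscribe((title) => {
    doc.title = title;
  });
}

// Minimal stand-ins so the sketch runs outside a browser:
const fakeDoc = { title: "" };
const fakeApp = {
  ports: {
    setTitle: {
      handlers: [],
      subscribe(fn) { this.handlers.push(fn); },
      send(value) { this.handlers.forEach((fn) => fn(value)); },
    },
  },
};

wireTitlePort(fakeApp, fakeDoc);
fakeApp.ports.setTitle.send("Stolar på nätet - Ekeby Möbler");
console.log(fakeDoc.title); // → "Stolar på nätet - Ekeby Möbler"
```

On the Elm side this is just a `port setTitle : String -> Cmd msg` fired from the `onUrlChange` handler.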

Hi @dillonkearns! I re-listened to the episode, and what you have built sounds really great! It is still hard to wrap my head around the JAMstack concepts, but I think I’m going to try elm-pages with the same project. I’m very curious how it compares with my current stack. Some features aren’t comparable, because there is no such feature in my current stack, like secrets or the check for broken links.

I have a few questions:

  • On route changes in the browser, does elm-pages fetch all the JS code generated for Elm from the server and do the hydration again, or does it reuse the Elm code already fetched on the first page load (after hydration has finished)?
  • As I understand elm-pages, every route has to be a file, right? But is it possible to generate such files? For example: I have a webshop with lots of products (leaving pagination and filters out for simplicity). Could I make a StaticHttp request to get all the product ids and then generate corresponding files, like
    content/webshop/product-id-0.elm for the route webshop/product-id-0, and
    content/webshop/product-id-1.elm for the route webshop/product-id-1, and so on?

elm-pages generates a single Elm JS bundle. It doesn’t do Elm code-splitting per route (Elm doesn’t have any first-class support for code splitting at the moment). So it hydrates the Elm app once, and doesn’t do any subsequent loads of Elm code.

elm-pages does do data splitting per page, though, so it will only fetch the StaticHttp data that a particular page needs. And it pre-fetches that data when you hover over links so it can change pages seamlessly.

That’s right, in its current state, elm-pages doesn’t support creating static routes based on StaticHttp requests. I’m making some really good progress on an architecture that supports that! It will take some time to get all the pieces in place, but I’m actively working on it and have a design that supports it nicely. You can follow the progress in this PR, which I’m using for the prototyping and implementation.

The only way to generate these routes based on API data at the moment would be to create your own custom script outside of elm-pages (maybe in NodeJS, for example) to create files in the content/ folder to create the routes you need.


Thank you for the explanations!


This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.