Elm Radio episode 28: GitHub Actions

🎙️ Episode 028: GitHub Actions is out!

We discuss best practices for setting up GitHub Actions so that everyone has the same source of truth for checking your Elm code and deploying to production.


In case other people are wondering how the GitHub cache works, I gathered some of the following info from docs and trial and error:

Setup / tear down

The cache action indeed has two steps, a setup step and a tear-down step, as explained by @dillonkearns. The setup step reads and extracts the cache before the subsequent steps in your YAML file run. The tear-down step writes back to the cache everything that matches your config. This setup/tear-down mechanism is a general GitHub Actions feature, and the steps nest in a Russian-doll fashion. So if you have two actions with post scripts,

  • action 1
  • action 2

you will get

  • action 1 main
  • action 2 main
  • action 2 post
  • action 1 post
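As a concrete sketch, here is a minimal workflow using actions/cache; the `~/.elm` path and the `elm make` command are assumptions for a typical Elm project, not something the cache action prescribes:

```yaml
name: CI
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # "main" (setup) part: restores the cache before later steps run
      - uses: actions/cache@v4
        with:
          path: ~/.elm
          key: elm-${{ hashFiles('elm.json') }}

      # your actual build/test steps go here
      - run: npx elm make src/Main.elm --output=/dev/null

      # the cache action's "post" (tear-down) part runs implicitly after
      # all steps finish, saving ~/.elm under the key on a cache miss
```

Note that the post step never appears in the YAML file; it is registered by the action itself and runs in the reverse order described above.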

key and restore-keys

Each cache is stored under an associated key, which is looked up whenever a workflow that uses the cache action is triggered. That key should uniquely identify the parts of your config that may require updating the cache. That is why we often use a hash of the elm.json file as the key.

If there is a “cache hit”, meaning the key was found in the cache, the cache is pulled and reused, and the “post” part of the cache action does nothing. If the key is new though, the “main” part of the cache action does basically nothing, but the “post” part writes to the cache under the new key. This means your CI starts from scratch that time and re-downloads everything it needs, but next time, if there is a cache hit, it will be faster.
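You can observe the hit/miss distinction in a workflow via the action's `cache-hit` output, provided you give the step an `id` (the `elm-cache` id and the `~/.elm` path here are just illustrative choices):

```yaml
- uses: actions/cache@v4
  id: elm-cache
  with:
    path: ~/.elm
    key: elm-${{ hashFiles('elm.json') }}

# runs only when there was no exact cache hit
- if: steps.elm-cache.outputs.cache-hit != 'true'
  run: echo "cache miss - dependencies will be re-downloaded"
```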

Sometimes it is convenient not to start completely from scratch though; this is the role of “restore-keys”. If there is no exact cache hit, the cache action tries to match the restore keys instead. If a match is found, it downloads that cache as a starting point, and the “post” script will still save the cache under the new key. A typical setup would be something like key: elm-${{ hashFiles('elm.json') }} and restore-keys: elm. Now let’s imagine that we add a new dependency to our project. The old key would be something like elm-a2c3d and the new key would be something like elm-55btc. The cache action would behave as follows:
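In workflow syntax, that key / restore-keys setup would look something like this (again, the `~/.elm` path is the conventional choice for an Elm project, not required):

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.elm
    key: elm-${{ hashFiles('elm.json') }}
    restore-keys: elm
```

Because `elm` is a prefix of every key this workflow produces, any previously saved cache qualifies as a fallback starting point.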

  1. Does the key elm-55btc exist in the cache? → no, it’s a cache miss.
  2. Does the restore key elm have a match? → yes, elm matches elm-a2c3d, so use that cache as a starting point.
  3. Run the normal user steps …
  4. There was a cache miss, so save the new state of the cache under the new key elm-55btc.

So in the common case of new dependencies or new exposed modules, the restored cache will get you 90% of the way, and the Elm compiler will only have to download the missing packages.

Cache restrictions?

For security reasons, caches can only be accessed from the same branch or a parent branch. So if you only trigger actions on PRs and not on pushes to master/main, the first push in a PR will not have any cache to access and will always start from scratch. But if you also trigger the workflow on pushes to main, that cache will be accessible from the PR branches. This also keeps your CI from getting blocked on new PRs when the package website is down, since the dependencies can come from the cache instead.
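A common pattern, then, is to trigger the workflow on both pull requests and pushes to the default branch, so that caches saved on main are available to PR branches (the branch name `main` is an assumption here):

```yaml
on:
  push:
    branches: [main]
  pull_request:
```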

