I’ve encountered a similar situation on GitHub-hosted runners, which only have 7 GB of RAM unless you opt in to larger runners.
https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources
In our case the app was fairly large (around 400 modules), including a lot of auto-generated code. We profiled the compilation using `+RTS -s`.
After investigation:

- it was basically due to excessive GCs
- we found it could be mitigated by reducing interweaved extensible records (the situation described in [Compilation time is O(2^n) when composing ext. records · Issue #1897 · elm/compiler](https://github.com/elm/compiler/issues/1897))
- as a precaution, we are now enforcing `+RTS -s -H1G -M6G` in GHA: `-H` “suggests” a suitable heap size, while `-M` sets the maximum
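For reference, our GHA setup enforces this roughly like the step below (a sketch: the step name, source path, and output file are placeholders, and it assumes your `elm` binary accepts GHC RTS options, as ours evidently does):

```yaml
- name: Build Elm app
  # -s prints GC statistics after the build,
  # -H1G suggests a 1 GB initial heap to reduce GC churn,
  # -M6G caps the heap at 6 GB so the build fails fast
  # instead of the runner being killed by the OS.
  run: elm make src/Main.elm --output=elm.js +RTS -s -H1G -M6G -RTS
```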
The second point was hard to catch. By “interweaved extensible records” I mean records like this:
```elm
type alias PageModule urlParams model msg =
    { init : Shared -> urlParams -> ( HasShared model, Cmd msg )
    , update : msg -> HasShared model -> ( HasShared model, Cmd msg )
    ...
    }
```
As you can see, it packages “page” module APIs into a single record for better code organization. (A similar pattern is found in elm-spa and the like, but we rolled our own.) However, as described in the linked issue, this pattern can drive heap usage up sharply during compilation.
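For context, `HasShared` here is an extensible record alias, roughly like this (the exact definition in our codebase differs, and the fields of `Shared` are placeholders):

```elm
type alias Shared =
    -- placeholder field; the real Shared carries app-wide state
    { language : String }


-- An extensible record: any record that has at least a `shared` field.
type alias HasShared model =
    { model | shared : Shared }
```

The blow-up described in issue #1897 comes from composing extensible aliases like this on top of one another, so each layer of wrapping multiplies the work the compiler does when expanding the types.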
We decided to remove these packaging records and modified the code generation and auto-wiring implementation, so that each page module directly exports `init`, `update`, and the other functions, which the root app module then imports and wires up.
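The refactored shape looks roughly like this (the module name, `Msg` variants, and fields are hypothetical, and `Shared` / `UrlParams` are assumed to be imported from elsewhere):

```elm
module Pages.Profile exposing (Model, Msg(..), init, update)

-- Plain top-level exports instead of a PageModule record.
-- `Shared` and `UrlParams` are assumed defined in other modules.


type alias Model =
    { shared : Shared
    , count : Int
    }


type Msg
    = Increment


init : Shared -> UrlParams -> ( Model, Cmd Msg )
init shared _ =
    ( { shared = shared, count = 0 }, Cmd.none )


update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Increment ->
            ( { model | count = model.count + 1 }, Cmd.none )
```

The root app module then does `import Pages.Profile` and calls `Pages.Profile.init` / `Pages.Profile.update` directly, with no `HasShared`-style extensible wrapper in between.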
At least as of now, heap usage is stable and compilation works well enough on average CI/dev environments.

I cannot say the situation is the same for you, @kanishka, but our process for tackling the issue may help.