Gathering feedback on elm-test-rs before 1.0 release

Hi there! Last December I announced the beta of elm-test-rs, an alternative to elm-test (node-test-runner) to run tests. There has been some very useful feedback and improvements such as working on debian, fixing a multiple import bug, fixing a utf8 parsing bug, adding nix packaging, and some additional features for my own needs.

I now want to improve on potential rough edges still left, so I’d like your feedback on this. If you tried it, what did you like, what didn’t you like? What made you want to switch from elm-test, what prevents you from doing it?


The error message for when elm is not found on the PATH could be improved (this was the result I got on my first attempt to run it, so it’s my first impression of the tool):

thread 'main' panicked at 'Command elm make failed to start: Os { code: 2, kind: NotFound, message: "No such file or directory" }', src/
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

In my case, elm was not on my path because it’s installed with npm in the parent folder: ../node_modules/.bin/elm


In my test suite of 157 tests, there were 7 files I had to manually change because of the behavior differences from the current elm-test. In every case, the duplicate describe names were mistakes, but it was still annoying to have to fix them (though it only took about 5 minutes).

Once that was fixed, it seemed to work fine. I was hoping for a bit of a performance improvement over the existing elm-test, but it appears to be pretty much equivalent in terms of elapsed time.

I guess since all my tests are passing, I didn’t actually get to see any of the new features :smiley:


I’ll give some :heart: to all error messages! They are definitely a bit rough at the moment. I’m planning to track down all usage of .unwrap() throughout the code and replace it with something nicer.

Yes, I’ll keep the behavior difference, but I should try to help with the update. Maybe there is a way to write an elm-review rule to help with that? @jfmengels Do you think writing a rule to find duplicate descriptions in test modules is feasible?

Especially true since the release of 0.19.1-revision5/6 a few days ago, because the mechanism to find tests is now the same (usage of elmi-to-json was removed). In most situations I’ve tried, 99% of the time is spent in the elm compiler, spawning Node, and running the tests. Work distribution could be slightly improved, but there isn’t much I can do to go faster as long as the compiled JS is run with Node. Someone suggested using Deno. If someone is interested in trying that, I’ll gladly review a PR.

Thanks for all the feedback!

Just tried elm-test-rs. I had to rename some top level describes as elm-test was accepting duplicate names there. Apart from that it ran smoothly.

The difference in speed is substantial for us.

With elm-test the suite runs in 15 seconds.
With elm-test-rs it runs in 2 seconds.

Is this difference expected? Or maybe we are doing something wrong with plain elm-test?


Yeah, it would have been nicer to get a complete list of all the describes I would need to change, instead of having to alternate between fixing a test and re-running elm-test-rs.


Worked well for me. Similar experience with the describe blocks as everyone else; I would also love to get those as a full list in one run if possible. Also would love to have a flag to specify the folder to find elm.json in. I’m running elm tests on a project where it’s in an assets directory, and would like to be able to run the elm-test-rs command from the project root instead of having to cd back and forth.

Love the smaller output on a 100% success.


Oh! And just so I can be lazy, a way to install with a package manager (npm, homebrew, or whatever is most convenient) would be great.


That would be feasible to an extent. Detecting that there are two describe "a" ... in different files is pretty easy, but when we get to more dynamic ones (describe ("should " ++ description) ...), it gets a lot trickier. I think we’d also have to properly handle and detect nesting of describes to avoid false positives.
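To illustrate the difference (the module and test names below are made up for the example), the first pair of duplicates is statically detectable, while the dynamic one is not:

```elm
module Example exposing (suite1, suite2, suiteFor)

import Expect
import Test exposing (Test, describe, test)


-- Two literal duplicates: easy for a static elm-review rule to flag
suite1 : Test
suite1 =
    describe "addition"
        [ test "1 + 1 is 2" (\_ -> Expect.equal 2 (1 + 1)) ]


suite2 : Test
suite2 =
    describe "addition"
        [ test "2 + 2 is 4" (\_ -> Expect.equal 4 (2 + 2)) ]


-- Dynamic description: a static rule cannot know the final string,
-- so only the test runner can catch duplicates here at runtime
suiteFor : String -> Test
suiteFor description =
    describe ("should " ++ description)
        [ test "placeholder" (\_ -> Expect.pass) ]
```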

One of the first OSS projects I worked on was AVA, a test runner for JavaScript, and I was mostly working on the dedicated static analysis rules with ESLint. We had (still do) a rule for detecting unique titles, but it was so hard to get that right because that’s kind of a harder thing in static analysis (even more so in JavaScript and the way tests could be generated with that test runner).

We at some point decided to enforce this in the tool itself because it was the only tool really able to tell whether there were duplicate titles. I proposed the same change to elm-test (on the same day as the other proposal, I didn’t remember that!), and elm-test has had that feature ever since. Sorry/not sorry for the pain that has caused people :smiley:

Anyway, that is to say that elm-review could help out a bit, but not perfectly. And I guess in this case it would only be useful for the migration, so the value is short-lived. Sorry for the maybe unneeded trip down memory lane :sweat_smile:

If that is the problem, I guess a nice solution would be for elm-test-rs to list all the problems instead of reporting them one by one, as @neurodynamic suggested too. Also, using the watch mode should help the flow here.

Also, I think it would be very nice if both elm-test-rs and rtfeldman/node-test-runner could agree on this behavior (one way or another), so that choosing one tool over the other is up to the user. I feel like they are so close to each other that it’d be a shame to differ on this tiny piece (which could make the tests fail when run with one tool and not the other).

I think the elm-tooling CLI intends to support it when it gets released.


Wow that is very unexpected. A few possible explanations are:

  • the first run is longer because the elm compiler does extra work in both elm-test and elm-test-rs, so maybe your elm-test run was a first run and the elm-test-rs one was not.
  • maybe elm-test-rs is not picking up all your tests and there is a bug I have to fix in the parser. Is the test count the same? If it is, it’s fine.
  • which version of elm-test is this? If it’s < revision-5, the issue might come from elmi-to-json. If it’s >= revision-5, it could be a parser performance thing if you have huge files and expose things with exposing (..). If it’s >= revision-5, I think @lydell would also be interested in your use case.

At GWI we have (so far) very good experience with elm-test-rs - it helped get our CI pipeline down to around 2 minutes, which is bonkers. (I haven’t tried the newest elm-test runner though.)

@mattpiz was very responsive to an elm-test-rs bug our codebase triggered (direct vs indirect deps), so huge thanks for that!

Now for two small warts:

:one: There is a little bit of lowered comfort after the switch to elm-test-rs since it doesn’t have its own npm package, and elm-tooling also doesn’t support it yet. In CI we curl the elm-test-rs binary into a directory on the PATH - that’s fine. But developers, in their local dev environments, were used to “after running yarn, everything just works and is ready”. Now they have to download elm-test-rs manually to be able to run the tests. Nothing insurmountable, but we noticed the extra step.

:two: And one thing that’s sometimes happening in CI is a network timeout (which didn’t happen as often with elm-test):

elm-test-rs 0.6.1

thread 'main' panicked at 'called Result::unwrap() on an Err value: ErrorRetrievingDependencies { package: "ktonon/elm-test-extra", version: SemanticVersion { major: 2, minor: 0, patch: 1 }, source: Transport(Transport { kind: Io, message: None, url: Some(Url { scheme: "https", host: Some(Domain("")), port: None, path: "/packages/ktonon/elm-test-extra/2.0.1/elm.json", query: None, fragment: None }), source: Some(Custom { kind: TimedOut, error: "timed out reading response" }), response: None }) }', src/

note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

Now this might be totally out of your hands, but perhaps some HTTP client settings are different in your Rust tool than in the Node tool, and that is making us see the timeouts more often?


I’ll have to double check, but I think this is a behavior of elm-explorations/test; I’m just reporting the error it gives me. Maybe I can make a PR on that repo, but last time I checked there were huge PRs waiting there, along with a major version update.

That should be doable, I’ll play with it and let you know when it’s ready.

That should come with elm-tooling when I release version 1.0.

Only duplicated top-level descriptions are newly reported as an error in elm-test-rs; the other cases were already errors in elm-test. And I’d bet most of those are of the plain describe "a" ... type, so fixing that would maybe solve 90% of transitions. Otherwise, I’ll see if I can make a PR in elm-explorations/test to show all duplicate names instead of only one.

But as you said, it would be better to have the same behavior. I believe it’s better to be explicit about those descriptions, since they then appear in failures, but I’ll try to ask why it is not like that in elm-test, and see if there is a reason other than copy-paste convenience.

In that link, I saw this implementation proposal:

Yeah, I’m thinking this could be as simple as maintaining a Set (List String) of each “fully qualified” description (in the sense of this being a list of ancestor descriptions for a particular test) and then erroring out if we try to add an entry to the set that’s already present.

If that’s indeed the implementation, it could explain why it’s erroring at the first duplicate instead of giving them all. I’ll investigate that.
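A rough sketch of how collecting all duplicates (rather than erroring at the first one) could look. The helper below is hypothetical, not actual elm-explorations/test code; it assumes each test’s “fully qualified” path of ancestor descriptions has already been gathered:

```elm
module FindDuplicates exposing (findDuplicates)

import Set exposing (Set)


{-| Given every test's full path of ancestor descriptions, return the
paths that occur more than once. Folding over all paths and collecting
repeats, instead of stopping at the first one, is what would allow
reporting every duplicate in a single run.
-}
findDuplicates : List (List String) -> List (List String)
findDuplicates paths =
    let
        step path ( seen, duplicates ) =
            if Set.member path seen then
                ( seen, path :: duplicates )

            else
                ( Set.insert path seen, duplicates )
    in
    paths
        |> List.foldl step ( Set.empty, [] )
        |> Tuple.second
        |> List.reverse
```

For example, findDuplicates [ [ "suite", "a" ], [ "suite", "a" ], [ "suite", "b" ] ] would return [ [ "suite", "a" ] ].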


Yes, that’s unfortunate, but I really don’t want to maintain npm stuff ^^. The current plan with elm-tooling for elm-test-rs is to wait for the 1.0 release. This is because elm-tooling needs an update when a tool updates, and elm-test-rs versions change rapidly for the time being. But this is coming :slight_smile:

Oh that’s interesting! Indeed that is a download done by the dependency resolution code, which is different. I can try to see if I can change the timeouts. A few notes on this issue:

  • elm-test prior to revision-5 has a dedicated (sometimes problematic) dependency solver. I don’t remember for sure, but I don’t think it downloads anything, which explains the fewer network errors.
  • elm-test >= revision-5 uses elm-json for its dependency solver, which downloads things similarly to elm-test-rs. It may have a different timeout/retry policy though.
  • if you cache your ~/.elm/ in CI, the dependency solver of elm-test-rs will not need any network call so that should solve your problem too.

@Sebastian @mattpiz Yes, I’d be happy to chat about the 15s vs 2s run time! Not in this thread, though.

And just to confirm – yes, elm-tooling will eventually support elm-test-rs. Currently, elm-tooling is for stable tools, which is why I’m waiting. I like the slow and stable release pace of the Elm ecosystem, so I think it makes sense. But if this becomes a common pain-point I have plans on how to support more rapidly changing stuff in a flexible way. Time will tell if those plans need to be implemented or not.


I’ve made a PR that adds a --root option to the CLI. Here are the executables for Linux, Windows, and Mac (you need to be logged in to GitHub to download). Let me know if that would work for you.

We were using 0.19.1. I just changed to 0.19.1-revision6 and indeed the tests run faster.
Now the difference is 5 seconds (elm-test) vs 2 seconds (elm-test-rs).

Thank you for all the feedback! I’ve been updating elm-test-rs in the last few days to try addressing all of it. I’m now ready to share links to a 1.0.0-alpha version of elm-test-rs, available from the pre-release page for Linux, Mac, Windows. The main changes since version 0.6.1 are the following:

  • All potential panics (I think) have been replaced by better error handling and error messages.
  • The --report json option is now equivalent to the one in elm-test, making the transition easier for tools that want to try elm-test-rs.
  • The --report junit option is now more complete, trying to follow the spec for the format used by Ant, with very few alterations: there is no “timestamp” attribute in “testsuite”, no “type” attribute in “failure”, and Debug.log calls are captured and reported in a “system-out” attribute in “testcase” instead of being a tag in “testsuite”.
  • The “billstclair/elm-xml-eeue56” dependency for the test runner has been vendored to prevent potential clashes.
  • A --project path/to/project/root option has been added to be able to run elm-test-rs from outside a project.
  • HTTP timeouts have been increased from 1s to 10s.

I also made a PR to improve the duplicate description failure output. If accepted, the next version of elm-explorations/test will list all duplicates of a group at once instead of one by one. That will be useful for both elm-test and elm-test-rs.

Please note that I already spotted a mistake in the --help message, which mentions a --root option instead of --project. :man_facepalming: That will be fixed in the final release.

I hope this solves all your current issues with elm-test-rs and I will be gathering a final round of feedback for roughly a week before publishing 1.0.0. I guess we can wait until this thread is automatically closed.


Again, thanks a lot for the feedback (which happened outside this thread)! It seems a lot needed to change, so I’ve published a new 1.0.0-beta version. If no more substantial changes are required, I’ll publish 1.0.0 in a week or so.

The main changes since 1.0.0-alpha are the following:

  • The CLI is now based on a library called clap instead of pico_args. This should fix the inconsistencies in the previous handling of CLI arguments.
  • The --connectivity flag has been replaced by a combination of two mutually exclusive flags, --offline and --dependencies.
  • A new --elm-home argument is available if preferred over using the ELM_HOME environment variable.
  • There is a new elm-test-rs make subcommand for when you just want to check that the tests compile without running them.

There are also quite a few bug fixes, which are:

  • Fix some JUnit and JSON report issues (typos and such).
  • Fix the --compiler error when using relative paths.
  • Fix indentation and order of the generated elm.json.