I’ve been working on running Elm benchmarks from the command line, based on earlier work by @ilias.
Obviously the visuals could be improved, but I’m struggling most with how to distribute this.
Advantages of a CLI
Backing up a little: why run benchmarks on the command line? The primary advantage is integration with other tools. We can dump the performance data in various formats and store it for comparison over time, or present it with an external tool (e.g. embedding benchmark results for the code snippets in a blog article).
Another possibility, which I haven’t explored in much detail yet but which seems interesting, is hooking into the Node/V8 profiling tools.
It also solves an annoying problem with running benchmarks in the browser: Elm benchmarks stop running when the tab loses focus.
Distribution
The way this works is that a small bit of JS listens to a port and prints whatever comes out of it, while an Elm application runs the benchmarks and does the formatting.
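To make that concrete, here is a minimal sketch of the JavaScript side, assuming an Elm 0.19-style worker compiled to `benchmarks.js` with an outgoing port named `emit` (the file, module, and port names are placeholders, not necessarily what the project actually uses):

```js
// run.js - minimal sketch of the Node-side runner.
// Assumes the benchmark app was compiled with
// `elm make ... --output=benchmarks.js` and exposes an outgoing
// port named `emit`; adjust the names to your setup.
const { Elm } = require("./benchmarks.js");

// Start the headless Elm program (pass flags here if yours needs them).
const app = Elm.Main.init();

// Print every message the Elm application sends out of the port.
// The Elm side decides the format (plain text, JSON, ...).
app.ports.emit.subscribe((line) => {
  console.log(line);
});
```

After compiling the Elm application, you’d run this with `node run.js` and get the formatted results on stdout.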
To use this in a new project, you have to set up that application so it finds and runs your benchmarks, and it also needs all of your project’s dependencies installed. All tedious and error-prone stuff that I’d like to automate.
What would be the best approach here? A benchmark fork of node-test-runner could work, but that seems complex. Are there other possibilities?
Also, feel free to hack around with this; I’m very open to suggestions in general.