Elm-minithesis: shrinking without compromises

Ah, that’s good to know, and probably the reason indeed. I’ll wait for your “call for testing and feedback” before continuing, but I was curious enough that I wanted to try now :wink:

@mattpiz Just FYI: I’ve started benchmarking elm-minithesis, and one of the first benchmarks was a comparison of various implementations of your “less empty string” fuzzer. See the code here: https://github.com/Janiczek/elm-minithesis/blob/master/benchmarks/fuzzer-implementations/less-empty-string/src/Main.elm


I had not noticed the stringWith function; that’s really cool! One remark regarding its interface: an average length does not say much without knowing the associated distribution (or simply the standard deviation, if a Gaussian distribution is assumed).

Thanks @mattpiz. Yes, internally it converts the average length to a “continue generating” probability according to a geometric distribution. Perhaps I should do

  { minLength : Maybe Int
  , maxLength : Maybe Int
  , continueProbability : Maybe Float
  }

Or do you think a doc comment explaining the distribution would be enough?
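As a sketch of what that conversion could look like (hypothetical naming; this is not the actual elm-minithesis code), assuming string lengths follow a geometric distribution with stop probability p after each character, so that mean = (1 - p) / p:

```elm
-- Hypothetical sketch, not the actual elm-minithesis internals.
-- From mean = (1 - p) / p we get p = 1 / (mean + 1),
-- so the probability of generating one more character is 1 - p:

continueProbabilityFromAverage : Float -> Float
continueProbabilityFromAverage averageLength =
    averageLength / (averageLength + 1)

-- e.g. an average length of 10 gives a continue probability
-- of 10 / 11, roughly 0.909
```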

Another benchmark done, this time comparing elm-minithesis vs elm-test. Fuzzer:


-- elm-minithesis:
Minithesis.Fuzz.int 0 10000
-- elm-test:
Fuzz.intRange 0 10000

Test function:

-- elm-minithesis:
(\i -> i < 5000)
-- elm-test:
(\i -> Expect.lessThan 5000 i)

And here are the results:

Looks like:

  • minithesis is taking ~3x longer when generating the value (we have additional overhead because we have to remember the PRNG choices; elm-test can just use the Random.Generator as is)
  • elm-test is taking ~10x longer when shrinking the values

All in all, that’s encouraging :tada:

Benchmark code: https://github.com/Janiczek/elm-minithesis/blob/master/benchmarks/vs-elm-test/int/src/Main.elm


This is the distribution where the mean value is related to the probability of success by mean = (1 - p) / p, right? I don’t know what would be better, but simply looking at it now gives a better idea of what kind of values may appear. Contrary to what most people might expect, the average value is not where the probability density function is at its maximum (the maximum is always at 0). And increasing the average value tends toward a uniform distribution. That may be counter-intuitive, so it’s probably best to illustrate it in the docs.
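For anyone following along, the standard form of that geometric distribution (counting the number of generated characters L before stopping, with stop probability p after each one) is:

```latex
P(L = k) = (1 - p)^{k} \, p, \qquad k = 0, 1, 2, \dots

\mathbb{E}[L] = \sum_{k=0}^{\infty} k \, (1 - p)^{k} \, p = \frac{1 - p}{p}
```

So for an average length m, the stop probability is p = 1 / (m + 1); as m grows, p approaches 0 and the distribution flattens out, which is the tendency toward uniformity mentioned above.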


For anybody who’s interested: I posted a call for testing and feedback: Elm-minithesis: gathering feedback and benchmarks

This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.