[RFC] elm inline tests

I had a little idea and would love to hear any feedback or opinions :smile:

An idea: elm inline tests

Everybody’s second favourite programming language to be developed in the last decade has the following to say about unit tests:

The purpose of unit tests is to test each unit of code in isolation from the rest of the code to quickly pinpoint where code is and isn’t working as expected. You’ll put unit tests in the src directory in each file with the code that they’re testing.

To allow elm developers to put their unit tests in the same file as the source code, this document proposes a special type of comment with the syntax {-test- ... -} in elm source files.
These comments are test blocks, wrapping code that should only be run when testing.
The elm compiler ignores code in test blocks (as it ignores everything in comments) but an elm test runner could extract tests and supporting code from such test blocks and run them.
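Mechanically, a runner could find and extract these blocks with a plain text scan. A minimal sketch in Python (the regex and function name are illustrative, not part of the proposal, and it assumes a test block's body never contains a line that is just -}):

```python
import re

# Matches {-test- ... -} and {-test-util ... -} blocks, capturing the
# optional "util" marker and the commented-out body. Illustrative only:
# it assumes a test block's body never contains a bare "-}" line.
TEST_BLOCK = re.compile(r"\{-test-(util)?\n(.*?)\n-\}", re.DOTALL)

def extract_test_blocks(source):
    """Return (kind, body) pairs for every test block in an elm file."""
    return [
        ("util" if m.group(1) else "test", m.group(2).strip())
        for m in TEST_BLOCK.finditer(source)
    ]
```

Because the blocks are ordinary comments, this scan never changes what the compiler sees; only the runner's copy of the file is rewritten.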

The format of elm inline tests might look something like this.
Note that this is a legal elm file which the compiler will happily accept.

module Complex exposing (Complex, complex)

import Internal.Complex
import Parser exposing ((|.), (|=), Parser)

{-test-util

import Expect exposing (Expectation)
import Fuzz exposing (Fuzzer)
import Test exposing (describe, fuzz, fuzz2, fuzz3, test)

-}

type Complex =
    Complex

{-| Construct a complex number from real and imaginary parts.
-}
complex : Float -> Float -> Complex
complex =
    Debug.todo "snip"

{-test-

testComplex : Test.Test
testComplex =
    describe "Build"
        [ describe "complex"
            [ -- snip
            ]
        ]

-}

Implementation

The elm test runner would extract and run tests in the following stages.

  1. Duplicate the source directory of the elm project into elm-stuff/generated-code/elm-explorations/test/inline/.

  2. Parse each elm file:

    1. Uncomment all {-test-util ... -} comments.
    2. Uncomment all {-test- ... -} comments and validate/take note of the single Test.Test contained within each comment.
    3. Add import Test as ElmTestRunnerImplTest.
    4. Add a snippet along the lines of
    
    amalgamatedTestsForMODULENAME : ElmTestRunnerImplTest.Test
    amalgamatedTestsForMODULENAME =
        ElmTestRunnerImplTest.describe "MODULENAME"
            [ firstTestFoundInStep2ii
            , secondTestFoundInStep2ii
            -- etc ...
            ]
    

    to the end of the elm file.
    5. Edit the module’s exposing to expose amalgamatedTestsForMODULENAME.

  3. Create a “main” elm file which imports all the tests exposed in step 2.v and defines a Test.Runner.Node program. The existing node test runner infrastructure can then run all unit tests.
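Steps 2.iii–2.v amount to a textual rewrite of each file. A hypothetical Python sketch (it assumes a single-line module header and reuses the ElmTestRunnerImplTest alias and amalgamatedTestsFor naming from the steps above; a real runner would parse elm properly rather than do string surgery):

```python
import re

def add_amalgamated_test(source, module_name, test_names):
    """Apply steps 2.iii-2.v to one elm file: import elm-test under a
    private alias, append a test grouping every inline test found in
    step 2.ii, and expose it. Assumes the module header is a single
    line of the form `module X exposing (...)`."""
    amalgamated = "amalgamatedTestsFor" + module_name.replace(".", "")

    # Step 2.v: expose the amalgamated test from the module header.
    source = re.sub(
        r"^module (\S+) exposing \((.+)\)",
        r"module \1 exposing (\2, " + amalgamated + ")",
        source,
        count=1,
    )

    lines = source.split("\n")

    # Step 2.iii: a qualified import under an alias unlikely to collide.
    lines.insert(1, "\nimport Test as ElmTestRunnerImplTest")

    # Step 2.iv: append the amalgamated test definition.
    lines.append(
        (
            "\n{name} : ElmTestRunnerImplTest.Test\n"
            "{name} =\n"
            '    ElmTestRunnerImplTest.describe "{mod}"\n'
            "        [ {tests}\n"
            "        ]"
        ).format(
            name=amalgamated,
            mod=module_name,
            tests="\n        , ".join(test_names),
        )
    )
    return "\n".join(lines)
```

Since all of this happens in the elm-stuff copy of the sources from step 1, the developer's files are never touched.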

Transpiled file

Following the steps above, the generated file should look something like the following.

module Complex exposing (Complex, complex, amalgamatedTestsForComplex)

import Test as ElmTestRunnerImplTest

import Internal.Complex
import Parser exposing ((|.), (|=), Parser)

{-test-util -}

import Expect exposing (Expectation)
import Fuzz exposing (Fuzzer)
import Test exposing (describe, fuzz, fuzz2, fuzz3, test)

{- -}

type Complex =
    Complex


{-| Construct a complex number from real and imaginary parts.
-}
complex : Float -> Float -> Complex
complex =
    Debug.todo "snip"

{-test- -}

testComplex : Test.Test
testComplex =
    describe "Build"
        [ describe "complex"
            [ -- snip
            ]
        ]

{- -}

{-| Inserted by the test runner.
-}
amalgamatedTestsForComplex : ElmTestRunnerImplTest.Test
amalgamatedTestsForComplex =
    ElmTestRunnerImplTest.describe "Complex"
        [ testComplex
        ]

Downsides

Special comments are not as good as first-class syntax; a better version of this proposal would add new syntax (as rust does with #[cfg(test)] and #[test]). However, such syntax would require compiler support, whereas this proposal can be implemented purely in the test runner.

Editors will give no syntax highlighting for test code (as it is commented out).


You could move tests into their respective modules and look at the numbers after building your app. I assume dead-code elimination will take care of the test code when building for production?

I assume this would accomplish your goal without more syntax? :sunny:

This already exists:

It’s not exactly what you’re proposing, but it might suit you well.

What does “numbers” mean here?

dead-code-elimination

Relying on dead-code elimination requires declaring test dependencies as normal dependencies.

Yup!

By “the numbers”, I mean a comparison of your production build with and without the test deps as regular deps + tests sharing modules with implementations. :sunny:

I’m increasingly confident that applications in particular (as opposed to packages) need something like this. I think the current elm-test works really well for testing packages (as you have more tools for designing boundaries and fewer effects), but is fairly bad for testing applications.

In particular, I think the style of writing applications with modules exposing as little as possible and enforcing very strict boundaries between each other works really well for a neat, easy-to-work-with architecture. But it makes testing nearly impossible. For example, I have the following module:

module Sensor exposing (Sensor, fetchAll, image)

import Http
import Json.Decode as Decode exposing (Decoder)
import Html exposing (Html, img)
import Html.Attributes exposing (src)

type Sensor
    = Sensor Int

fetchAll : (Result Http.Error (List Sensor) -> msg) -> Cmd msg
fetchAll tagger =
    Http.get
        { url = "/api/sensors/"
        , expect = Http.expectJson tagger (Decode.list decoder)
        }

decoder : Decoder Sensor
decoder =
    Decode.map Sensor (Decode.field "id" Decode.int)

image : Sensor -> Html msg
image (Sensor id) =
    img [ src ("/api/sensor/" ++ String.fromInt id ++ ".png") ] []

This module is really nice to change in the codebase, since none of its implementation leaks. There are no opportunities for de-sync errors, since a Sensor cannot be created other than by getting it from the server. These are all great boons for the safety of the codebase.

But it’s impossible to test. Since we can’t execute side-effects in tests, we can’t grab a sensor from the server (and we wouldn’t want to even if we could, since it wouldn’t be much of a unit test). Nor can we test the decoding logic or the view logic in the file.

In short, I agree that some way of breaching module boundaries is very useful for unit testing in Elm. That said, I am less enamored with the magic comment style here. Ideally this would get some first-class support, or we could all just start using DCE to get rid of test dependencies (perhaps, as a first step, we could make a little tool that verifies test dependencies have been successfully purged from the production build, for peace of mind).


By “the numbers”, I mean a comparison of your production build with and without the test deps as regular deps + tests sharing modules with implementations. :sunny:

Yes, this would be a really good next step. :+1:

In particular I think the style of writing applications with modules exposing as little as possible and enforcing very strict boundaries between each other works really well for a neat and easy to work with architecture.

This is exactly the thought that drove me to write this.

Now I am less enamored with the magic comment style here.

I am not a fan either, this was the best I could think of that did not require changes to the compiler.


This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.