Question regarding Parser module's number function

Hi all, I have a question about the number function from the Parser module, specifically regarding the elmNumber example:

type Expr
  = Variable String
  | Int Int
  | Float Float
  | Apply Expr Expr

elmNumber : Parser Expr
elmNumber =
  number
    { int = Just Int
    , hex = Just Int    -- 0x001A is allowed
    , octal = Nothing   -- 0o0731 is not
    , binary = Nothing  -- 0b1101 is not
    , float = Just Float
    }

I know the properties in number expect a Maybe (Int -> a) (with Float instead of Int for the float property). I cannot work out how, for instance for the int property, Just Int translates to a Maybe (Int -> Expr): I get the Just part, but not how the Int part resolves to (Int -> Expr).
Moreover, regarding the type Expr, I really cannot see how it is used by the Parser. In particular, I find it difficult to relate Variable String and Apply Expr Expr to elmNumber's actual behavior.

I would love any help to better understand what I'm missing!

I think you’re confusing Int the type with Int the constructor/variant of Expr. We could rewrite the example with slightly different names:

type Expr
  = Variable String
  | IntExpr Int
  | FloatExpr Float
  | Apply Expr Expr

elmNumber : Parser Expr
elmNumber =
  number
    { int = Just IntExpr
    , hex = Just IntExpr    -- 0x001A is allowed
    , octal = Nothing       -- 0o0731 is not
    , binary = Nothing      -- 0b1101 is not
    , float = Just FloatExpr
    }

Here, IntExpr is a constructor with the type Int -> Expr.

Values like (IntExpr 123), (FloatExpr 123.4), and (Variable "foo") all have type Expr.
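
To make that concrete, here is what running the parser with Parser.run from elm/parser would give (these results are my own illustration, not from the thread):

Parser.run elmNumber "1234"   --> Ok (IntExpr 1234)
Parser.run elmNumber "0x001A" --> Ok (IntExpr 26)
Parser.run elmNumber "123.4"  --> Ok (FloatExpr 123.4)

number parses the characters, and the constructor you supplied (IntExpr or FloatExpr) is then applied to the parsed Int or Float to wrap it up as an Expr.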

Moreover, regarding the type Expr I really cannot see how this is used by the Parser

Expr isn’t special here. Parsers exist to turn strings into some type that you define. The example is a parser that turns a string like "1234" into an Expr (e.g. Int 1234), but you can turn it into whatever you want. For example, if you just wanted to parse an integer, you could do:

elmNumber : Parser Int
elmNumber =
  number
    { int = Just identity   -- identity is `Int -> Int`
    , hex = Just identity   -- hex numbers are just integers
    , octal = Nothing
    , binary = Nothing
    , float = Nothing
    }
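
With this definition the parser produces plain Int values (again, my own illustration):

Parser.run elmNumber "1234"   --> Ok 1234
Parser.run elmNumber "0x001A" --> Ok 26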

If you wanted to allow both ints and floats, you would need a type that reflects both possibilities:

type IntOrFloat
  = IntVal Int
  | FloatVal Float

elmNumber : Parser IntOrFloat
elmNumber =
  number
    { int = Just IntVal      -- IntVal is `Int -> IntOrFloat`
    , hex = Just IntVal
    , octal = Nothing
    , binary = Nothing
    , float = Just FloatVal  -- FloatVal is `Float -> IntOrFloat`
    }
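
Running this version (my own illustration):

Parser.run elmNumber "42"    --> Ok (IntVal 42)
Parser.run elmNumber "123.4" --> Ok (FloatVal 123.4)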

Aha thanks, that makes sense.

So what is the point of the Variable String and Apply Expr Expr variants of the Expr type? They do not seem to do anything in this example.
And perhaps more generally, why would it be useful to have a recursive type variant like Apply Expr Expr?

In this particular example, they aren’t useful. If you’re parsing expressions more complex than single numbers (say, arithmetic), then it’s useful to parse the input into an expression tree like:

type Expr
  = Value Int
  | Add Expr Expr
  | Multiply Expr Expr

So you could parse a string like "3 * (1 + 1)" into the Elm value:

Multiply (Value 3) (Add (Value 1) (Value 1))

The same thing happens when parsing a file of source code: it gets turned into a tree structure like that, but much fancier, called an “Abstract Syntax Tree”. Such a tree has to capture concepts such as variable names and applying functions to sub-expressions.
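
For instance, with the original Expr type from the question, a function call written in source code as toFloat x could be represented as something like (my own illustration):

Apply (Variable "toFloat") (Variable "x")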


Then one would need an additional parser, because number would only extract (Value 3), right?
Perhaps a pipeline containing number, spaces, symbol "(", etc., where the result of the pipeline would be Multiply (Value 3) (Add (Value 1) (Value 1))?

Yes, that is correct.
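
To make that concrete, here is a rough sketch of what such a parser could look like, reusing the Expr type above. This is my own illustration, not from the thread: it uses Parser.int instead of a full number config, ignores operator precedence, and treats every operator as right-associative.

import Parser exposing ((|.), (|=), Parser, int, lazy, oneOf, spaces, succeed, symbol)

-- A term is an integer literal or a parenthesised expression.
term : Parser Expr
term =
  oneOf
    [ Parser.map Value int
    , succeed identity
        |. symbol "("
        |. spaces
        |= lazy (\_ -> expr)
        |. spaces
        |. symbol ")"
    ]

-- Simplified on purpose: no precedence, every operator is right-associative.
expr : Parser Expr
expr =
  succeed (\left combine -> combine left)
    |= term
    |. spaces
    |= oneOf
        [ succeed (\right left -> Add left right)
            |. symbol "+"
            |. spaces
            |= lazy (\_ -> expr)
        , succeed (\right left -> Multiply left right)
            |. symbol "*"
            |. spaces
            |= lazy (\_ -> expr)
        , succeed identity
        ]

-- Parser.run expr "3 * (1 + 1)"
--   --> Ok (Multiply (Value 3) (Add (Value 1) (Value 1)))

Note the lazy (\_ -> expr) calls: they are how elm/parser handles recursive grammars, mirroring the recursive Apply Expr Expr idea from earlier in the thread.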
