The Decoder functionality is implemented in what is called Kernel code: the JavaScript that underlies Elm.
The compiler cannot typecheck JavaScript code, but we still need types on the Elm side.
For instance, a few lines below that definition, we see
string : Decoder String
string =
    Elm.Kernel.Json.decodeString
The compiler cannot verify that we give the correct type to string: it will accept any type annotation (we could even lie about the type). So we create an opaque type (there is one constructor, but it is never used and not exposed) to give types to the decoder primitives (like string and int, but also map and andThen). These types then make sure that our Elm code only builds valid decoders.
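As a rough sketch of how that looks on the Elm side (the constructor name and the int primitive here are illustrative, not a verbatim copy of the elm/json source):

-- The constructor is never used or exposed; the parameter `a` exists only
-- on the Elm side, to give the Kernel-backed primitives distinct types.
type Decoder a
    = Decoder

int : Decoder Int
int =
    Elm.Kernel.Json.decodeInt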
But conceptually, the decoder could be defined as
type Decoder a = Decoder (String -> Result String a)
Yes, the left one is a type, the right one is a value. So this would be valid code:
type A = A
a : A -- value `a` has type `A`
a = A -- value `a` is equal to `A`
That is really because the error is always a string. I hope this makes more sense given the Elm definition of Decoder, with which you can define
succeed : a -> Decoder a
succeed value = Decoder (\_ -> Ok value)

fail : String -> Decoder a
fail error = Decoder (\_ -> Err error)
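With that conceptual definition, the other primitives fall out the same way. Here is a sketch (the names run, map, and andThen are written against this simplified type; in elm/json the real ones are Kernel code):

-- Running a decoder just unwraps the function and applies it to the input.
run : Decoder a -> String -> Result String a
run (Decoder decode) input =
    decode input

-- map transforms the successful result, leaving errors untouched.
map : (a -> b) -> Decoder a -> Decoder b
map f (Decoder decode) =
    Decoder (\input -> Result.map f (decode input))

-- andThen feeds the decoded value into a function that picks the next
-- decoder, then runs that decoder on the same input.
andThen : (a -> Decoder b) -> Decoder a -> Decoder b
andThen toDecoder (Decoder decode) =
    Decoder
        (\input ->
            case decode input of
                Ok value ->
                    let
                        (Decoder next) =
                            toDecoder value
                    in
                    next input

                Err error ->
                    Err error
        )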