I’m currently working on my Master’s thesis, which will introduce a more general concept called refinement types. You can check out my first introduction here:
The thesis will be finished next year. I expect to give an update on the current state either in June or July.
Here’s a head start:
What you are proposing would come with more downsides than benefits, so I highly recommend that the Elm team NOT implement anything going in this direction.
Example
Let’s say we introduce your proposed definitions:
type MinusTwoToTwo =
    Int
    & (n | n <= 2)
    & (n | n >= -2)

type ZeroToFour =
    Int
    & (n | n <= 4)
    & (n | n >= 0)
Then the following example cannot be compiled:
fun : MinusTwoToTwo -> ZeroToFour
fun x =
    if (x * x) <= (x + x) then
        x * x
    else
        x
(Taken from my thesis proposal)
Just by looking at it, it seems easy to check that the types are valid. But for a computer (a compiler), comparing multiplication with addition is a hard task.
In fact Z3, the theorem prover that will be used for my implementation and that is also used by Liquid Haskell, will fail when trying to prove this.
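To make this concrete, here is roughly how the example is spelled in Liquid Haskell (a sketch in my own words, not the exact file I tested):

{-@ type MinusTwoToTwo = {n:Int | -2 <= n && n <= 2} @-}
{-@ type ZeroToFour    = {n:Int |  0 <= n && n <= 4} @-}

-- Liquid Haskell rejects this definition: as described above, the
-- solver fails on the comparison of x * x with x + x in the
-- branch condition.
{-@ fun :: MinusTwoToTwo -> ZeroToFour @-}
fun :: Int -> Int
fun x =
    if (x * x) <= (x + x) then
        x * x
    else
        x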
Mathematically speaking, we CANNOT have a program that decides arbitrary formulas involving both multiplication and addition over the integers. This has been proven: arithmetic with addition alone (Presburger arithmetic) is decidable, but adding multiplication makes the problem undecidable.
So why do a Master’s thesis about it?
Because while this is not solvable in general, for a very large range of problems we can actually find a solution. It might be that in some far future the compiler will find a proof for 90% of all typical problems found in the “real world”. The current provers (like Z3) are quite good at proving “normal” statements about indices; for example, off-by-one errors can easily be detected. But it will always be possible that your code, even if it is correct, cannot be compiled.
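To give a flavour of what such an index check looks like, here is a minimal Liquid Haskell sketch (an illustrative example of mine, not taken from the thesis):

{-@ type Pos = {v:Int | 0 < v} @-}

-- For a collection of positive length n, the last valid index is n - 1.
{-@ lastIndex :: n:Pos -> {i:Int | 0 <= i && i < n} @-}
lastIndex :: Int -> Int
lastIndex n = n - 1

-- The classic off-by-one version is rejected, because the solver
-- cannot prove n < n:
-- lastIndex n = n

Proving 0 <= n - 1 < n from 0 < n only needs linear arithmetic, which is exactly the territory where these solvers are reliable.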
Going back to our example: if we just rewrite the code a bit, it will compile again.
fun : MinusTwoToTwo -> ZeroToFour
fun x =
    if (x >= -2) && (x <= 2) then
        x * x
    else
        x
(I’ve tested this exact example in Liquid Haskell)
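Roughly, this accepted version corresponds to the following Liquid Haskell code (again a sketch of mine, reusing the two type aliases from the earlier sketch, not the exact file I tested):

-- Accepted: after the guard the solver has the linear bounds
-- -2 <= x <= 2 available, which is enough to show that x * x lies
-- between 0 and 4. The else branch is unreachable for inputs of type
-- MinusTwoToTwo, so it trivially satisfies ZeroToFour.
{-@ fun :: MinusTwoToTwo -> ZeroToFour @-}
fun :: Int -> Int
fun x =
    if (x >= -2) && (x <= 2) then
        x * x
    else
        x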