Absolutely. I think you should handle the error in the closest possible place in the call stack.
I tend to use the recursive pattern when I am sure of something but could not make it impossible through data or API modeling, but I cringe every time I use it (which is not that often, thankfully).
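For context, here is a minimal sketch of what I mean by the recursive pattern; the `Color` module and its functions are made up for the example, not taken from the article:

```elm
module Color exposing (Color, fromString, unsafeFromString)


type Color
    = Color String


fromString : String -> Maybe Color
fromString string =
    if String.startsWith "#" string && String.length string == 7 then
        Just (Color string)

    else
        Nothing


unsafeFromString : String -> Color
unsafeFromString string =
    case fromString string of
        Just color ->
            color

        Nothing ->
            -- I "know" this branch is unreachable for the literals I pass in,
            -- but I can't prove it to the compiler, so I recurse forever rather
            -- than invent a default value. This is the part that makes me cringe.
            unsafeFromString string
```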
I think your way of handling it is quite nice, especially wrapping it in a custom type that will give more information about the error. But as you said, it’s painful to use.
It's kind of like a secondary and more advanced type checker for Elm, isn't it?
Yes. The point is to give you more guarantees than what the Elm compiler does. In my mind, if you see the compiler as an assistant, then elm-review is another assistant. It's like having two people look over your shoulder to help you out.
I guess the downside of this approach is having to write more rules for other situations.
I haven't written about it in the article, but I believe you can make a single rule handle multiple use cases. In the rule I wrote, I have already made it somewhat generic by making the target names variables, and the check on the target function is just a predicate returning a boolean.
What I'm saying is: you could make the rule work for several functions. You could hardcode a list of target functions, or even take it as the rule's argument (by changing `rule` to `rule : List TargetFunctions -> Rule`). That list could look like this:
```elm
type alias TargetFunctions =
    { name : String
    , moduleName : String -- Or List String
    , goodUseCheck : String -> Bool
    , errorMessage : String
    , errorDetails : String
    }
```
Then you'd loop over this list to handle all the functions. You'd need it to be more configurable if you want to target functions that work with number literals (like your `percent` example), List literals, or functions that take more than one argument. But the idea is the same.
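To make that more concrete, here is a rough sketch of how a rule taking such a list could be wired up with elm-review's module rule API. The rule name `NoUnsafeCalls`, the visitor shape, and the way module names are compared are assumptions on my part: a real rule would need to resolve import aliases, and this one only handles calls with a single string literal argument.

```elm
module NoUnsafeCalls exposing (rule)

import Elm.Syntax.Expression exposing (Expression(..))
import Elm.Syntax.Node as Node exposing (Node)
import Review.Rule as Rule exposing (Error, Rule)


type alias TargetFunctions =
    { name : String
    , moduleName : String
    , goodUseCheck : String -> Bool
    , errorMessage : String
    , errorDetails : String
    }


rule : List TargetFunctions -> Rule
rule targets =
    Rule.newModuleRuleSchema "NoUnsafeCalls" ()
        |> Rule.withSimpleExpressionVisitor (expressionVisitor targets)
        |> Rule.fromModuleRuleSchema


expressionVisitor : List TargetFunctions -> Node Expression -> List (Error {})
expressionVisitor targets node =
    case Node.value node of
        -- Only handles calls of the shape `Some.Module.target "literal"`
        Application [ function, argument ] ->
            case ( Node.value function, Node.value argument ) of
                ( FunctionOrValue moduleName name, Literal string ) ->
                    targets
                        |> List.filter
                            (\target ->
                                (target.name == name)
                                    && (target.moduleName == String.join "." moduleName)
                                    && not (target.goodUseCheck string)
                            )
                        |> List.map
                            (\target ->
                                Rule.error
                                    { message = target.errorMessage
                                    , details = [ target.errorDetails ]
                                    }
                                    (Node.range node)
                            )

                _ ->
                    []

        _ ->
            []
```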
If the values are not being obtained from static code… then the `Result` approach is likely better
Definitely. I can't stress this enough: if it's possible to have false negatives (the rule failing to report a call that will actually fail at runtime), then don't use this technique, and especially don't create an unsafe function.
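For dynamic values, that means sticking with a fallible constructor. Here is a minimal sketch of what I mean, using a hypothetical `Percent` type; the names and bounds are only for illustration:

```elm
module Percent exposing (Percent, fromFloat)

-- The only exposed constructor returns a `Result`, so non-static
-- inputs are forced through validation.


type Percent
    = Percent Float


fromFloat : Float -> Result String Percent
fromFloat value =
    if 0 <= value && value <= 100 then
        Ok (Percent value)

    else
        Err ("Value should be between 0 and 100, got " ++ String.fromFloat value)
```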
A further downside is having to restrict to literal values - I know certain math operations will give me a valid result, but the review rules may not be so smart.
I wouldn't call it a downside, since that means you need to resort to normal, unreviewed Elm, where you have fewer guarantees. The "safe unsafe" pattern just makes things simpler when you know things will be correct. Outside those boundaries, you'll have to go back to Elm code that can fail, but that's the Elm code you know and love.
I think that as more rules get built, we'll end up with more and better tooling, including type inference and evaluation of expressions (`1 + 2` => `3`). That kind of analysis may be tedious to write and CPU-intensive, but it could make way for more and nicer rules.
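To give an idea of what evaluating expressions could look like, here is a toy sketch that folds simple integer arithmetic in the elm-syntax AST. It only handles `+`, `*`, negation and parentheses over integer literals, and the module and function names are just illustrative:

```elm
module ConstantFold exposing (evaluateInt)

import Elm.Syntax.Expression exposing (Expression(..))
import Elm.Syntax.Node as Node exposing (Node)


-- Try to reduce an expression to an integer, giving up on anything
-- we don't recognize.
evaluateInt : Node Expression -> Maybe Int
evaluateInt node =
    case Node.value node of
        Integer value ->
            Just value

        ParenthesizedExpression inner ->
            evaluateInt inner

        Negation inner ->
            Maybe.map negate (evaluateInt inner)

        OperatorApplication "+" _ left right ->
            Maybe.map2 (+) (evaluateInt left) (evaluateInt right)

        OperatorApplication "*" _ left right ->
            Maybe.map2 (*) (evaluateInt left) (evaluateInt right)

        _ ->
            Nothing
```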