[BEGIN DEVIL’S ADVOCATE]
I’ve been told by coworkers that situations like this have caused huge errors. Despite multiple senior devs reviewing the code too. No one understood the impact of the linting warning they got and so disabled it. This resulted in breaking the app.
@wolfadex it cuts both ways. I know of several bugs that shipped to production because someone blindly followed a linting rule. For example, one was a non-null linting rule that was turned on for a file interop-ing with a third-party library that expected nulls for certain behavior but could take any other type in its signature; this caused the code to silently accept a `Maybe`-like type that did not trigger the null case. When it comes to non-semantics-preserving linting rules, there is no substitute for understanding the impact of your linting rules.
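To make the failure mode above concrete, here is a minimal sketch (all names hypothetical, not from any real library): a third-party function keys special behavior off a literal null, but its signature accepts anything, so a `Maybe`-like wrapper introduced to appease a no-null rule silently takes the wrong branch.

```python
from dataclasses import dataclass
from typing import Any, Optional

def third_party_call(timeout: Any) -> str:
    """Hypothetical library function: a literal None means 'wait forever'."""
    if timeout is None:
        return "wait forever"
    return f"wait {timeout}s"

# A Maybe-like wrapper introduced to satisfy a no-null lint rule.
@dataclass
class Maybe:
    value: Optional[Any] = None

# The lint rule is satisfied, but Maybe(None) is not None, so the
# library silently skips its null branch:
print(third_party_call(None))         # the intended "wait forever" behavior
print(third_party_call(Maybe(None)))  # falls through to the wrong branch
```

The type checker and linter are both happy here; only someone who understands what the library does with a literal null can see the bug.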
So yeah, disabling static analysis checks makes it kind of pointless.
@rupert This is a rather black-and-white take. Having some static analysis checks disabled in a few places in your code does not make the places where they are enabled any less valuable. Looking at a piece of code and knowing that, because it doesn't have certain annotations, you have certain guarantees is very valuable (even if further down the call stack you don't have those guarantees anymore). Take Elm's promise of total functions, for example. It's not 100% there (equality checks, blowing the stack, and spinning in an endless loop are all ways to break it), but we all derive massive benefit from it. Even in languages with exceptions (e.g. Haskell, or even more so Scala), an expectation of no runtime exceptions in pure code makes them more pleasant to work with than languages that use exceptions willy-nilly. You don't need 100% of something for it to be valuable (although it is valuable to acknowledge the non-linearities in that spectrum).
Even if you need to use constructs that are a bit more complex, why are there static analysis tools forbidding this?
…
Would the optimization require such an obscure or discouraged pattern? (you’ll probably tell me yes for your use-cases, and I’d argue that in an ideal language the designers would have made other choices). Could the rule not be less aggressive, or try to understand the specific optimization?
@jfmengels This assumes that whether or not a certain optimization is required is knowable from the code. That is not always true. Whether a certain piece of code needs a certain level of performance is ultimately a business question, and it can be those business needs that dictate whether it must be optimized a certain way or not. Indeed, oftentimes I explicitly don't want the constructs I've allowed for a very specific piece of code, written in a very specific way, to be allowed anywhere else in my codebase! I want the static analysis rule to disallow them 99% of the time! It's only because of a very specific business reason (e.g. a particular user interaction) that I need to write this very small snippet of code this particular way.
The reason this matters, by the way, even for something that doesn't show up 99% of the time, is the tyranny of large numbers: while individually rare, over the lifetime of a project among many devs these cases become quite common.
Moreover, even in the cases where this is knowable from the code, this goes far beyond having the ideal language. This requires having an ideal stack, all the way down to hardware (e.g. caching hierarchy). Optimizing your tools to deal with this ideal of a combined hardware-software stack seems like a painful and inadvisable tradeoff the overwhelming majority of the time.
You bring up `++` (another Elm one is the compare-subtraction-against-0 trick), but one common linting example where it is unlikely simple fixes to the language would ever be enough is long argument lists vs. passing in a configuration object. How that ends up affecting the JIT, caching, pointer chasing, etc. is a sufficiently complex interaction that I doubt it's possible to ever have a sufficiently flexible rule when push comes to shove in a hot path.
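To illustrate the trade-off above, here is a hedged sketch (names hypothetical) of the two styles such a rule arbitrates between. A "too many arguments" rule pushes you from the flat call toward the config object; whether the object's extra allocation and field dereferences hurt in a hot path depends on the JIT and cache behavior of the whole stack, not on anything visible in this code.

```python
from dataclasses import dataclass

def blend_flat(r1, g1, b1, r2, g2, b2, t):
    # The style a max-arguments lint rule typically flags:
    # seven positional arguments.
    return (r1 + (r2 - r1) * t,
            g1 + (g2 - g1) * t,
            b1 + (b2 - b1) * t)

@dataclass
class BlendConfig:
    # The rewrite such a rule suggests: one object instead of seven
    # arguments, at the cost of an allocation and a pointer hop per
    # field access.
    start: tuple
    end: tuple
    t: float

def blend_config(cfg: BlendConfig):
    return tuple(a + (b - a) * cfg.t for a, b in zip(cfg.start, cfg.end))
```

Both functions compute the same result; the rule can only see the argument count, not which version wins at the machine level on a given stack.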
I haven’t yet but I’m thinking of making the rule configurable so that this rule won’t report this specific issue.
Configuration comes with its own costs. One of the great joys of working with the Elm ecosystem is that by and large there’s no configuration needed. You have a single integrated compiler/debugger/package manager that has very few knobs to turn. Just point and run. It is often more user-friendly to allow for specific local overrides, than to add additional levers and knobs in configuration.
Disable comments are not the only solution and there are often better ones. And that's what I want people to get out of my article.
This is a far more measured take than the title of the article suggests (which is that they are an unalloyed bad).
[END DEVIL’S ADVOCATE]
As I mentioned at the beginning of this, my own actual personal take is more nuanced and my own personal preference would actually be something in the middle, not a compromise, but a sum that’s greater than its parts.
As far as I understand it @jfmengels, your objections to inline static analysis rule disabling boil down to one thing: it makes it too easy to disable static analysis rules. And I understand and sympathize with that.
And yet that’s not an intrinsic part of inline static analysis rules! You can make it arbitrarily harder to disable static analysis rules while keeping all the benefits of the locality that inline disables give you. The trick is just to make the user have to do a lot of (computer-checked, but otherwise unaided) work in the direction of disabling rules and then make it really easy to re-enable rules.
So e.g. what you can do is force all inline disable annotations to have an accompanying entry in the centralized config file. In the centralized config file you can then further have a meta-annotation marking whether this is meant to be a permanent or a temporary exception, and if it's a temporary exception, require an end date after which the static analysis tool will throw an error.
E.g. (purposefully in non-Elm pseudo-code to show its generality across languages)
```
-- In code
@disableRulePermanent(
  rule = "no-unsafe-sort-by-hashcode",
  -- This is a tag name to refer to this specific exception instance later
  -- on in the centralized configuration file; it disambiguates among
  -- multiple exceptions in one file
  tag = "hot-path-branch-prediction",
  reason =
    """It turns out that the safe way of writing this code ends up causing
    massive slowdowns due to branch prediction errors and you need to
    do a redundant sort by hash code beforehand. Note that the sort by
    hash code is okay here because the actual semantics of the code
    aren't affected by the order, only the performance, so we just need
    some order, any order."""
)
myExpression = ...

@disableRuleTemporary(
  rule = "no-nulls",
  tag = "waiting-on-external-library-version-bump",
  reason =
    """`external-library-5.3.2` introduced a bug in `coolFunction` which
    requires a null arg to get the same behavior that would have been
    achieved by passing it 0 in version 5.3.1. This is tracked at
    https://example.com/issuetracker/31 and they should have a new release
    fixing this in October.""",
  -- Past 2021-10-01 our static analysis tool will throw an error
  expirationDate = "2021-10-01"
)
myOtherExpression = externalLibrary.coolFunction(null)

-- And then later on separately in the central configuration file
inlineExceptions = List(
  -- If any information here disagrees with the inline annotation the
  -- static analysis tool reports an error
  Temporary(
    file = "src/MyFile",
    rule = "no-nulls",
    tag = "waiting-on-external-library-version-bump",
    expirationDate = "2021-10-01"
  ),
  Permanent(
    file = "src/MyFile",
    rule = "no-unsafe-sort-by-hashcode",
    tag = "hot-path-branch-prediction"
  )
)
```
You make users do redundant work that is checked by the computer (so it stays in sync) but is burdensome enough that they aren't tempted to reach for it willy-nilly (this also prevents the issue of users mindlessly copy-pasting code). When it comes to PRs, you also have a single file that is the authoritative record of all exceptions to your various static analysis rules. Then you make the other direction completely automated: a single command deletes all inline disable annotations that have no accompanying entries in the config file, and vice versa. This tilts users towards a process that makes it hard to disable rules but easy to re-enable them.
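The cross-check itself is mechanical. Here is a minimal sketch of what such a checker could look like (all names and record shapes are hypothetical, assuming the inline annotations and the central config have already been parsed into simple records): every inline disable needs a matching config entry, every config entry needs a matching inline disable, and temporary exceptions past their end date become errors.

```python
from datetime import date

def check_exceptions(inline, config, today):
    """inline/config: lists of dicts with "file", "rule", "tag",
    and (for temporary config entries) "expirationDate" keys."""
    errors = []
    inline_keys = {(e["file"], e["rule"], e["tag"]) for e in inline}
    config_keys = {(e["file"], e["rule"], e["tag"]) for e in config}
    # Inline disables must be declared centrally...
    for key in sorted(inline_keys - config_keys):
        errors.append(f"inline disable {key} missing from central config")
    # ...and central entries must correspond to a real inline disable
    # (the automated cleanup command would delete these pairs together).
    for key in sorted(config_keys - inline_keys):
        errors.append(f"config entry {key} has no inline disable")
    # Temporary exceptions hard-fail once their end date passes.
    for e in config:
        exp = e.get("expirationDate")
        if exp is not None and date.fromisoformat(exp) < today:
            errors.append(f"temporary exception {e['tag']} expired on {exp}")
    return errors
```

Run on the example above, a temporary `no-nulls` exception that is present in both places but past its `expirationDate` would produce exactly one error, telling the team it is time to remove the workaround or consciously renew it.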
You get all the benefits of the locality of inline annotations (any developer can see immediately that if `coolFunction` is no longer used, you can get rid of that annotation), along with all the purposeful friction and the single centralized listing of all exceptions used in your project that you get with the config-file approach.