I just published a new elm-review package and rule.
- Announcement article: https://jfmengels.net/cognitive-complexity/
- Package: elm-review-cognitive-complexity 1.0.0
I hope you find it useful!
Very useful! I made the following addition to scripts.yaml, the configuration file for velociraptor, the script runner I use:

scripts:
  complexity: npx elm-review --template jfmengels/elm-review-cognitive-complexity/example --rules CognitiveComplexity
  cloc: cloc --by-file src
Then I can just say vr complexity.
I tried this on two projects: a parser project and my spreadsheet library. The first passed, the second failed on one function with a complexity of 23. I entirely agree with elm-review's findings for the second project and look forward to some refactoring. I was surprised, but gratified, that the first project passed the complexity check.
I plan on using the complexity review on a regular basis.
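For readers wondering how a single function reaches a score like 23: cognitive complexity charges each branching construct, and charges extra the deeper it is nested. Here is a deliberately simplified illustration in Python (a sketch of the SonarSource-style rules, not the elm-review implementation, which handles more cases such as boolean operator sequences and recursion):

```python
import ast

def cognitive_complexity(source: str) -> int:
    """Toy scorer: +1 for each if/for/while/except, plus +1 for each
    enclosing level of nesting. The real metric has more rules."""
    score = 0

    def walk(node, depth):
        nonlocal score
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
                score += 1 + depth
                walk(child, depth + 1)
            else:
                walk(child, depth)

    walk(ast.parse(source), 0)
    return score

# Two ifs in sequence: 1 + 1 = 2
print(cognitive_complexity("if a:\n    x = 1\nif b:\n    x = 2\n"))
# The same two ifs nested: 1 + (1 + 1) = 3 -- nesting drives scores up
print(cognitive_complexity("if a:\n    if b:\n        x = 1\n"))
```

The asymmetry between the two examples is the point of the metric: sequential branching reads linearly, while nested branching forces the reader to keep context in their head.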
Great!
I would recommend not using --template in your scripts though. Under the hood it's using the GitHub HTTP API to fetch the rule's source files, and unfortunately that gets rate limited, so you will at some point get temporarily blocked. Instead, I'd recommend creating a review configuration.
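For anyone following along, that setup looks roughly like this: run npx elm-review init, then inside the generated review/ directory run elm install jfmengels/elm-review-cognitive-complexity, and add the rule to the configuration. The threshold of 15 below is just an example value; double-check the package README for the exact API:

```elm
module ReviewConfig exposing (config)

import CognitiveComplexity
import Review.Rule exposing (Rule)

config : List Rule
config =
    [ CognitiveComplexity.rule 15
    ]
```

After that, plain npx elm-review uses the local configuration and no GitHub API calls are involved.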
I ran this on our code today and we have at least one function with a complexity of over 300.
Nice addition!
I wonder how valuable it would be to generate a report about the cognitive complexity of an entire project, with some breakdown data like, for example:
I have a proposal to make it possible for elm-review rules to extract data from the project, in order to make reports or other cool things. So if you'd want this, I think this proposal would be the way to go about it.
Do you mean the complexity of all functions in a project? I imagine it's going to be very low, because a lot of the functions have a complexity of less than two.
Or do you mean the average complexity of projects in the ecosystem? Harder to compute obviously, but again, I don't see it as really useful. I doubt you'd be pushed towards anything actionable.
SonarSource does this (as you can see in this talk), where in their interface they allow sorting directories/files by total complexity (I don't remember if they allow for sorting functions by complexity, but you can kind of do that by tweaking the limit).
I think this allows you to kind of know where in the codebase you could focus your attention to make code simpler, but I don't know whether in practice this will be useful. You might very well look at a lot of functions that are complex because they somewhat need to be complex.
Maybe you'll be looking at a file with a complexity of 700 spread over 200 individual functions and focus on that, when there is a file with complexity 500 spread over 2 functions.
It could be useful, but I fear you'll spend your time looking at a lot of "false positives": files that look like they should be refactored where in practice it's not needed. But I might be wrong, I'd be curious to know.
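That ranking trade-off is easy to demonstrate with invented numbers. A small Python sketch of the 700-spread-over-many-functions vs 500-over-2 scenario (file names and scores are made up; 175 functions of complexity 4 stand in for the "200 individual functions"):

```python
from collections import defaultdict

# Invented per-function cognitive-complexity scores: "Many.elm" totals
# 700 across 175 small functions, while "Two.elm" totals 500 across
# just 2 very complex functions.
scores = [("Many.elm", f"f{i}", 4) for i in range(175)]
scores += [("Two.elm", "g1", 250), ("Two.elm", "g2", 250)]

totals = defaultdict(int)   # total complexity per file
worst = defaultdict(int)    # single most complex function per file
for file, _name, c in scores:
    totals[file] += c
    worst[file] = max(worst[file], c)

by_total = sorted(totals, key=totals.get, reverse=True)
by_worst = sorted(worst, key=worst.get, reverse=True)
print(by_total)  # ['Many.elm', 'Two.elm'] -- the spread-out file ranks first
print(by_worst)  # ['Two.elm', 'Many.elm'] -- the real candidate ranks first
```

Sorting by file totals steers you toward Many.elm, even though every function in it is trivially simple; sorting by the worst single function surfaces Two.elm instead.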
I like the reports of elm-coverage (GitHub: zwilias/elm-coverage), where the functions are sorted by cyclomatic complexity (which is also explained in the initial post) and you get per-module metrics.
It does not give overall metrics, though.
The goal is not to condense that information down into a single metric. It is too easy to write tests that donβt make meaningful assertions about your code and its behaviour, but only serve to increase the coverage.
I like scc, which allows you to get reports like this:
gampleman@MacBook-Pro elm-review-unused % scc --by-file src
───────────────────────────────────────────────────────────────────────────────
Language                 Files     Lines   Blanks  Comments      Code Complexity
───────────────────────────────────────────────────────────────────────────────
Elm                         11      6992     1353       657      4982        327
───────────────────────────────────────────────────────────────────────────────
src/NoUnused/Variables.elm           1683      301        86      1296         90
~ed/CustomTypeConstructors.elm       1263      231       110       922         72
src/NoUnused/Patterns.elm             710      136        54       520         30
src/NoUnused/Exports.elm              673      140        39       494         37
~src/NoUnused/Dependencies.elm        618      105        40       473         25
~CustomTypeConstructorArgs.elm        616      112        63       441         32
src/NoUnused/Parameters.elm           555      134        50       371         24
~used/Patterns/NameVisitor.elm        475      118        94       263          9
src/NoUnused/Modules.elm              228       47        38       143          7
~src/NoUnused/NonemptyList.elm        125       14        83        28          1
src/NoUnused/RangeDict.elm             46       15         0        31          0
───────────────────────────────────────────────────────────────────────────────
Total                       11      6992     1353       657      4982        327
───────────────────────────────────────────────────────────────────────────────
Estimated Cost to Develop $145,837
Estimated Schedule Effort 6.617354 months
Estimated People Required 1.957946
───────────────────────────────────────────────────────────────────────────────
Processed 239910 bytes, 0.240 megabytes (SI)
───────────────────────────────────────────────────────────────────────────────
Very useful indeed! I'd love to see something like scc but with both measures of complexity side-by-side.
On a more humorous note: how is the estimate of time and money arrived at? I just tried this out on an open-source project that I've spent about 7 full-time days on. scc estimates that it would take 0.992694 people 4.364021 months to accomplish this at a cost of $48,762. LOL! I think that scc might be padding the books.
Yeah, I would suggest that the time estimates might be fairly off, although for much larger projects I've seen more realistic numbers.
But I also suspect that the model it uses is probably devised for something like C, where perhaps you would spend a lot more time debugging a similarly complex piece of code.
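For what it's worth, the bottom-line figures in the scc output above look consistent with the basic "organic" COCOMO model applied to the code-line count. This is my own reverse-engineering, combining the standard COCOMO constants with what I believe are scc's default average wage ($56,286) and overhead multiplier (2.4):

```python
# Guessed derivation of scc's estimates for 4982 lines of code,
# using basic "organic" COCOMO and assumed scc defaults.
kloc = 4982 / 1000
effort = 2.4 * kloc ** 1.05        # person-months of effort
schedule = 2.5 * effort ** 0.38    # calendar months
people = effort / schedule
cost = effort * 56286 * 2.4 / 12   # annual wage * overhead, per month

print(round(schedule, 2))  # close to the reported 6.617354 months
print(round(people, 2))    # close to the reported 1.957946 people
print(round(cost))         # close to the reported $145,837
```

If that is indeed the model, it would also explain the inflated numbers for small projects: COCOMO was calibrated on 1970s-era software projects, not on a week of Elm.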