Pros and Cons of the LCOM4 metric in Sonar - codecentric AG Blog
In our projects, we use Sonar to detect quality flaws in our sources as early as possible. An important metric is LCOM4: Lack of Cohesion of Methods IV. It measures how related the fields and methods in a class are. If everything within a class is related, that's the best case. If LCOM4 is greater than 1, the class is suspected of violating the Single Responsibility Principle: it might be responsible for more than one thing, and is a candidate to be split into two or more classes in a refactoring. At least in theory…
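To make that concrete, here is a hypothetical class (all names invented for illustration) whose members fall into two unrelated clusters. No method of one cluster touches a field or method of the other, so LCOM4 would report 2 and hint at two responsibilities:

```java
// Hypothetical example: two unrelated member clusters in one class,
// giving LCOM4 = 2.
public class OrderReport {

    // Cluster 1: price calculation (one field, two methods)
    private double taxRate = 0.19;

    public double netToGross(double net) {
        return net * (1 + taxRate);
    }

    public double grossToNet(double gross) {
        return gross / (1 + taxRate);
    }

    // Cluster 2: report formatting (one field, one method). It shares
    // nothing with cluster 1, so LCOM4 sees a second connected component.
    private String separator = ";";

    public String formatLine(String id, String amount) {
        return id + separator + amount;
    }
}
```

Splitting this into a price calculator and a line formatter would bring both resulting classes to LCOM4 = 1.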
In real life, there are a few difficulties. That LCOM4 is no easy metric to measure is underlined by the fact that the 4 in LCOM4 means this is the fourth attempt to get it right. Sonar has reached the point where it handles cases like toString(), common logger fields, and accessor methods correctly. Still, I regularly get calls for help from our teams: Sonar reports too high an LCOM4 value for their projects, but they can't do anything about it. Pressure builds when the average LCOM4 over all classes even causes the build to turn red.
When I go into our projects to analyse the problem and help with some design questions, we typically arrive at a common cause: the project uses a framework that forces a derived class to implement a handful of abstract methods. Within the derived class itself, these methods are rather isolated; in the context of the framework, they make sense. But of course, LCOM4 doesn't care about the superclass. A popular example of this case is the web framework Vaadin.
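A minimal sketch of this situation, with an invented framework base class standing in for a real one (this is not actual Vaadin API): the framework calls each hook at a different point of the component lifecycle, so the cohesion lives in the framework, where the metric cannot see it.

```java
// Hypothetical framework base class: the framework invokes these hooks
// at different points in a component's lifecycle.
abstract class UiComponent {
    abstract void init();
    abstract String renderHeader();
    abstract String renderBody();
}

// The derived class must implement all hooks. Each hook touches its own
// field and nothing else, so LCOM4 counts three connected components and
// flags the class - even though it is a perfectly normal framework client.
class CustomerView extends UiComponent {
    private boolean initialized;
    private String title = "Customers";
    private int pageSize = 25;

    @Override void init() { initialized = true; }
    @Override String renderHeader() { return title; }
    @Override String renderBody() { return "rows per page: " + pageSize; }
}
```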
So the build turns red because too many classes have a high LCOM4, and it should be back to green as quickly as possible. What's the next step? It'd be wrong to introduce a dummy variable in the affected classes just to make the metric look nice again. A metric always needs interpretation and context, and actions that aim purely at pushing a metric below a certain threshold while effectively rotting your software design are counterproductive.
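For the avoidance of doubt, this is what such a dummy-variable hack would look like (class and names invented): a meaningless field that every method touches welds all clusters into one connected component, so LCOM4 drops to 1 while the design is exactly as incohesive as before.

```java
// Anti-pattern: do NOT do this. The dummy field connects otherwise
// unrelated member clusters and games LCOM4 down to 1.
public class ReportHelper {
    private int dummy;                  // exists only to please the metric

    private double taxRate = 0.19;
    private String separator = ";";

    public double netToGross(double net) {
        dummy++;                        // fake "cohesion"
        return net * (1 + taxRate);
    }

    public String formatLine(String id, String amount) {
        dummy++;                        // fake "cohesion"
        return id + separator + amount;
    }
}
```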
What would be a better reaction? Right: the team discusses whether the design of the suspicious class requires improvement, or whether this is a false positive. If the design needs to be improved, the consequences are clear, I hope. If it's a false positive, things get a little rougher:
- Sonar measures LCOM4 with its internal tool Squid. While you can exclude single classes or rules for other tools like PMD, FindBugs, etc., this is not possible with Squid.
- An exclusion with the line comment //NOSONAR also only works for rules that apply to a concrete line of code, not for metrics.
- You could now exclude the suspicious class from all measurements, but … well … that's not really a solution. You could also shut down Sonar completely; then it can't fail your build any longer.
- It'd be nice if you could review the metric in Sonar, but this too is only possible for rules (or rather: rule violations).
- … and a rule for a class with too high an LCOM4 doesn't exist (yet).
This cannot be too difficult, I said to myself, and implemented the rule myself. Now the situation is as follows: the LCOM4 threshold at which the rule fires can be configured in Sonar's rule settings.
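For readers wondering what the rule actually counts: LCOM4 is the number of connected components in the graph of a class's methods and fields. Here is a minimal sketch of that computation over a simplified model (each method represented only by the set of fields it accesses; real implementations such as Squid also connect methods that call each other, which this sketch omits):

```java
import java.util.*;

// Simplified LCOM4 sketch: link two methods when they share a field,
// then count the connected components with a small union-find.
public class Lcom4Sketch {

    /** @param methodFields method name -> names of fields it accesses */
    public static int lcom4(Map<String, Set<String>> methodFields) {
        List<String> methods = new ArrayList<>(methodFields.keySet());
        int n = methods.size();
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;

        // Union methods that access at least one common field.
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (!Collections.disjoint(methodFields.get(methods.get(i)),
                                          methodFields.get(methods.get(j))))
                    union(parent, i, j);

        // LCOM4 = number of distinct roots (connected components).
        Set<Integer> roots = new HashSet<>();
        for (int i = 0; i < n; i++) roots.add(find(parent, i));
        return roots.size();
    }

    private static int find(int[] p, int i) {
        while (p[i] != i) i = p[i] = p[p[i]];  // path halving
        return i;
    }

    private static void union(int[] p, int a, int b) {
        p[find(p, a)] = find(p, b);
    }
}
```

The rule itself then only has to compare this number against the configured threshold and raise a violation when it is exceeded.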
When a class violates the LCOM4 threshold configured in the rule, a violation is generated. The team can now review it and, if appropriate, mark it as a "false positive".
I'd be honored if my little fix made it into a regular Sonar release soon. If you feel adventurous today, you can download the attached SNAPSHOT version. For me, it worked nicely with the latest Sonar release, 2.14. Have fun.