Chasing Code Smells with NDepend

Code reviews have always reminded me of so many of the minor virtues: everyone praises them, but few really practice them. People agree that they are a good thing, but the costs of compliance never seem to match the benefits. Using an automated tool can make things easier, but (there’s always a “but”) other than disabling a particular rule, customization often requires custom coding. The ability to customize is nice, but the ability to interactively explore your codebase is golden. NDepend provides just that, with impressive speed as a bonus.

Disclosure: Patrick Smacchia, lead developer for NDepend, provided me with a free license with the hope that I might review the product. Nothing else of value was received or promised. No guarantee of a favorable review was given or asked for. All editorial decisions were mine regarding what aspects to review and how to do so.

Before I get into what I reviewed, a quick word is in order about what I didn’t. NDepend is a comprehensive package with a GUI client, a command-line client, and a Visual Studio add-in. All give access to a wide range of rules assessing various aspects of code quality, including changes from one build to another. The results of executing these rules can be presented in both text and graphic formats. Rather than attempting to cover the entire gamut of features, I concentrated on using the GUI client to explore a codebase for issues.

As I noted at the start, code reviews can be a tricky proposition. Reading over code is time-consuming and, quite frankly, tedious. Unless time has been invested in setting detailed standards, there is the danger that the reviews will be subjective. Inconsistency and inaccuracy diminish the value of the exercise and can have a detrimental effect on morale. In short, code reviews can be as painful for the reviewer as for the person whose code is being reviewed.

Using an automated tool can help with many of the problems posed by manual review. Tools can quickly and consistently evaluate code without succumbing to boredom. The tool will neither try to establish dominance over the team nor overlook issues in favor of “live and let live”. A potential weakness, however, is flexibility. The stock set of rules can point to potential issues, but these must then be triaged to separate the real problems from the false positives. This is where NDepend shines.

NDepend comes with a large collection of rules that evaluate your code’s quality, design, layering, naming conventions, and more (there are 17 groups altogether, with anywhere from 4 to 30 queries per group). As is usual with this type of application, you can include or exclude individual rules or groups. Because of how the rules are implemented, however, you can also modify and/or add rules on the fly.

NDepend rules are implemented via the built-in query facility, Code Query over LINQ (CQLinq). Instead of a code-compile-test cycle, CQLinq provides for interactive querying of components using the familiar LINQ syntax. This power and flexibility is coupled with impressive speed: 7 seconds to analyze 567 types in 9 assemblies (almost 13.8k lines of code) and 5 milliseconds to execute a single query against that same codebase.
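
To give a sense of the syntax, here is the sort of throwaway query you can type straight into the query editor once an analysis has run. It’s a minimal sketch rather than one of the shipped rules, and the 30-line threshold is just a number I picked for illustration:

// Methods that are getting long (illustrative threshold, not an NDepend default)
from m in Application.Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }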

Getting started is simple enough:

  • Launch the Visual NDepend client application
  • From the start page, create a project
  • Click “Add Assemblies of VS Solutions” and select a solution
  • Add or Remove assemblies from the list and click “Run Analysis on Current Project”

The first stop is the dependency matrix view. The green squares in the upper right of the matrix indicate that the assembly in the column is used by the assembly in that row, while the blue squares in the bottom left indicate that the assembly in the column is using the assembly in that row. The one thing you don’t want to see is red squares, which indicate circular dependencies. The matrix also shows how pervasive a particular assembly is (for example, everything depends on mscorlib) and how focused a particular assembly is. Assemblies that have a lot of green squares in their row and blue squares in their column (meaning they’re using a lot of other assemblies) may have too many responsibilities.
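
If you prefer numbers to colored squares, the same information can be pulled out with a quick query. This is only a rough sketch; I’m assuming the code model exposes AssembliesUsed and AssembliesUsingMe collections on each assembly, so check the CQLinq documentation for the exact member names:

// Rank application assemblies by how many other assemblies they pull in
from a in Application.Assemblies
orderby a.AssembliesUsed.Count() descending
select new { a, Uses = a.AssembliesUsed.Count(), UsedBy = a.AssembliesUsingMe.Count() }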

After inspecting the high-level structure with the dependency matrix, much more information is available in the Queries and Rules Explorer view. At a glance you can see the status of groups and rules: whether or not they are active and, if so, their run status (a green ball for no hits, an exclamation point inside a yellow triangle if the criteria are met, and a red indicator along with the yellow triangle if the violated rule is a critical one). As you click on a rule, the CQLinq code defining that rule and its output (along with a hit count) are displayed in the Queries and Rules Edit view. This is where the fun starts.

I’ll use a relatively trivial rule as an example in order to focus on the mechanics: “Instance fields should be prefixed with a ‘m_’” from the “Naming Conventions” group. Running it against one of my projects turned up 1,597 violations. This rule doesn’t match our standards, however, so I edit it to bring it in line. This:

// Instance fields should be prefixed with a 'm_'
warnif count > 0 from f in Application.Fields where
!f.NameLike (@"^m_") &&
!f.IsStatic &&
!f.IsLiteral &&
!f.IsGeneratedByCompiler &&
!f.IsSpecialName &&
!f.IsEventDelegateObject
select new { f, f.SizeOfInst }

Becomes this:

// Instance fields should be prefixed with a 'm'
warnif count > 0 from f in Application.Fields where
!f.NameLike (@"^m") &&
!f.IsStatic &&
!f.IsLiteral &&
!f.IsGeneratedByCompiler &&
!f.IsSpecialName &&
!f.IsEventDelegateObject
select new { f, f.SizeOfInst }

Now the rule matches our convention, and in only 4 milliseconds I see the number of violations drop to 761 – better, but not good. Looking at the results, however, I see that many of them are controls on webforms. A brute-force solution would be to remove the site project, but I don’t want to give up all analysis of it. Instead, let’s add another condition to ignore types from the System.Web.UI namespace.

// Instance fields should be prefixed with a 'm'
warnif count > 0 from f in Application.Fields where
!f.NameLike (@"^m") &&
!f.IsStatic &&
!f.IsLiteral &&
!f.IsGeneratedByCompiler &&
!f.IsSpecialName &&
!f.IsEventDelegateObject &&
!f.FieldType.ParentNamespace.NameLike(@"^System.Web.UI")
select new { f, f.SizeOfInst }

Four more milliseconds and we’re down to 181 violations – even better, but I’m still not done. Noticing that some of the fields belonged to types generated by setting a service reference, I decide to exclude those types, which yields:

// Instance fields should be prefixed with a 'm'
warnif count > 0 from f in Application.Fields where
!f.NameLike (@"^m") &&
!f.IsStatic &&
!f.IsLiteral &&
!f.IsGeneratedByCompiler &&
!f.IsSpecialName &&
!f.IsEventDelegateObject &&
!f.FieldType.ParentNamespace.NameLike(@"^System.Web.UI") &&
!f.ParentType.ParentNamespace.NameLike(@"^x.x.Data.xIntegration")
select new { f, f.SizeOfInst }

I’m now down to 135 violations. Quickly eliminating over 80% of the original hits as obvious false positives is a big advantage, but the tool can still provide additional help to analyze the remainder. A quick edit to the select clause:

// Instance fields should be prefixed with a 'm'
warnif count > 0 from f in Application.Fields where
!f.NameLike (@"^m") &&
!f.IsStatic &&
!f.IsLiteral &&
!f.IsGeneratedByCompiler &&
!f.IsSpecialName &&
!f.IsEventDelegateObject &&
!f.FieldType.ParentNamespace.NameLike(@"^System.Web.UI") &&
!f.ParentType.ParentNamespace.NameLike(@"^x.x.Data.xIntegration")
select new { f, f.ParentType }

I now have the list of field names along with the types in which they’re declared. Armed with this, I can determine whether I can add more filter conditions to remove additional false positives or whether I need to dig deeper. The time savings are huge compared to tying up others in long, painful meetings that may or may not provide value.
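
For larger result sets, the same approach can be pushed a little further. As a final sketch (just a variation on the query above, nothing official), ordering the hits by declaring type keeps related fields together and makes the remaining triage that much quicker:

// Instance fields should be prefixed with a 'm' (ordered by declaring type for triage)
warnif count > 0 from f in Application.Fields where
!f.NameLike (@"^m") &&
!f.IsStatic &&
!f.IsLiteral &&
!f.IsGeneratedByCompiler &&
!f.IsSpecialName &&
!f.IsEventDelegateObject &&
!f.FieldType.ParentNamespace.NameLike(@"^System.Web.UI") &&
!f.ParentType.ParentNamespace.NameLike(@"^x.x.Data.xIntegration")
orderby f.ParentType.Name
select new { f, f.ParentType }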

Even though I’ve barely scratched the surface, I’ve found NDepend to be extremely valuable. The few issues I had were relatively minor: sometimes the IntelliSense and context menus fight in the edit window, and the edit window would benefit from a “Save As” button. The tool’s capabilities are extensive and the speed with which it works is amazing. All in all, it’s a tool I will definitely be using.

2 thoughts on “Chasing Code Smells with NDepend”

  1. I first discovered NDepend a couple of years ago, and I love the tool. I don’t have a license at my current company, so I’m a touch jealous. 😉

    As you dig into the tool, one thing I’d definitely suggest looking into is its capacity to do deltas in its analysis. I remember something along the lines of “Code Quality from Now” as a feature, and this being a way to say “Yeah, this legacy code isn’t the greatest, but instead of lots of warning noise, let’s just focus on getting it right from now on.” I thought this ability to acknowledge “reality on the ground” was one of its strongest features.


    • Erik,

      That feature is definitely on my list of things to look at. The out of the box features are very impressive and the possibilities it opens up by being so flexible are even more so. If I had to take over an existing app, it’s definitely a tool I’d reach for. For that matter, I plan on running one of my own apps through it as we gear up for the next version.

      No need to be jealous, BTW…just ask the boss to multiply team size by average hourly wage. I’m betting you’d get to the cost of a license before you reviewed more than a couple of classes. 😉

