Communicating Visually

In my last post, I noted some of the ways in which I use modeling (mostly UML) to capture and communicate aspects of a design. As I was finishing the post, Simon Brown and I shared the following exchange on Twitter:

I find this amusing for a number of reasons, not least of which is my somewhat iconoclastic nature. It’s my belief that form follows function; that much is pretty hard to miss. Even more so, I believe that form should never trump function. Diagrams are a means of communication, not an end in themselves. Refusing to use a communication tool just because you’re afraid of “breaking the rules” is as shortsighted as producing artifacts that are syntactically correct but infeasible in practice. Design artifacts, whether intended as blueprints for work in progress or as documentation of the system’s current state, must enhance an understanding of the system. If they fail at that, then any other consideration is moot.

[Figure: the udder diagram]

The other side of the coin is that if the artifact serves to communicate effectively, whether or not it conforms to a particular modeling language is irrelevant. I used the diagram on the right (affectionately dubbed “the udder diagram”) a few years back to communicate the strategy of integrating four line-of-business customer portals with four back-end systems using a messaging platform. It served to illustrate that strategy to both business and technology audiences, at all levels from development teams to channel presidents. Refusing to use it because it wasn’t part of some standard notation would have made no sense at all.

It is possible, however, to over-communicate. The stereotype of the architect who spends time drawing pictures of systems that will never be has its roots in those who try to micro-manage the design. Even if it were possible to design every aspect of a system up front in detail, doing so would be inefficient. From the time wasted modeling duplicate (or nearly duplicate) interactions to the bottleneck introduced by relying on the architect for all design, a useful tool can be turned into an impediment. Little wonder that teams have thrown the baby out with the bath water. That being said, it is a waste to abandon useful techniques because of their potential for abuse. The key is to find a happy medium.

In “C4: context, containers, components and classes”, Simon Brown discusses a lightweight modeling process designed to capture architecturally significant information at different levels of detail using a Class Responsibility Collaboration (CRC) card metaphor. Context diagrams picture the system under development in the context of the rest of the technology environment (i.e. integrations) and of those who will be using the system. Container diagrams capture the execution environment of the system: sites, services, databases, etc., visually depicting the major pieces of the system and their interconnections. Component diagrams illustrate the major groups of code that make up the containers. Class diagrams are used only when warranted. This system, which concentrates on the information rather than the syntax or the tool, makes perfect sense.

In the “Getting there from here” post, I mentioned a number of different UML diagrams I use during the architectural design process: use case diagrams to capture “who does what”, class diagrams to capture the key “things” the system deals with, package diagrams to illustrate system structure, and activity diagrams to document the users’ interactions with the system in the course of a particular use case. All of these capture high-level and relatively static aspects of the system. Likewise, they are aspects that are not readily reverse-engineered from code. As such, I maintain these as persistent artifacts that live from one release to the next for the life of the application.

When I’m doing lower-level design, class, activity, and sequence diagrams can be useful as well. These, however, I consider to be transient artifacts: useful for the moment, but not worth maintaining over the long term. Particularly when you have access to tools that can synchronize with the code (I use both Visual Studio’s built-in features and Altova’s UModel), effort spent maintaining these types of diagrams is wasted. It makes far more sense to regenerate them from the authoritative source: the source.

In my opinion, keeping a few simple principles in mind makes the difference between effective modeling and just drawing pictures. First and foremost is to focus on communication. Being understood is more important than being “right” (at least in terms of using the notation or tool). Next in importance is to know when to stop. Just enough detail provides value; extraneous information introduces delay at best and confusion at worst. Lastly, know what to keep and what to throw away. Anything that illustrates existing code should be generated from that code rather than maintained manually.

Used correctly, modeling can be a powerful tool for communication. Communication and understanding are force-multipliers that contribute to success. A picture may well be worth a thousand words. It’s certainly quicker to put together (not to mention comprehend) a picture than a thousand words.

Getting there from here

About three weeks ago, Simon Brown issued a challenge: “Can you explain how you design software within the time it takes to finish your coffee break?”. I left a comment to the effect that I could give an overview in that time period, but an actual explanation would take far longer. He followed it up with another post last week, reiterating the need to be able to answer the question “How do you design software?”. As he noted, “But if we can’t articulate how we design software, how are we going to teach this skill to others?”. The next day we had the following exchange on Twitter:

Well thanks, Simon…won’t let me weasel my way out of that one, will you?

The truth is, explaining the design process is hard. It’s not a linear process; there is no defined set of ordered steps that yields a design. Additionally, design is about trade-offs: any given decision can negatively impact one or more other concerns (security versus functionality, performance versus maintainability, etc.). Worst of all, roles get muddled. In spite of all that, it’s something I truly love. Whether building a system from the ground up or evolving an existing one, I find the work deeply rewarding.

Before proceeding, I should deal with a couple of caveats. First is that the more detail I provide, the more the post will have an unavoidable bias toward the .NET platform, as that is what I work with. Second is that my practices are also influenced by my work environment: a corporate development team working on line-of-business applications, some for internal use, some customer facing. If you work on another platform or work for an ISV, I can’t guarantee that all of the post will apply to you. Much of what I say should be universal; when it’s not, see the preceding.

One last word before diving into the nuts and bolts: my ideal way to design is collaboratively. For reasons both noble and selfish, it’s the route I recommend. Communicate and collaborate with as many of your stakeholders as possible: clients, developers, testers, support, and operations. Their input and feedback are priceless.

If you’re lucky, the process begins at the vision stage, whether that vision is of a new system or of changes to an existing one. You want to be in as early as possible for a number of reasons, but the primary one is that you want to partner with the customer in creating their system rather than be perceived as a roadblock. Early involvement means that you can explore options and suggest ways to accomplish what the users want and need. This benefits both parties; I’ve seen customers ecstatic about what I could provide for them as often as I’ve headed off train wrecks born of unrealistic expectations. The later in the process you’re brought in, the more likely you’ll be faced with something that, no matter how untenable, someone is wedded to because of the effort they’ve put in. Then you either have to be the one to tell them that their “baby” is ugly, or take it to raise yourself.

As the concept develops, I keep the following architectural drivers in mind:

  • Functionality
  • Data Profile
  • Audience
  • Usage Characteristics
  • Business Priority
  • Regulatory and Legal Obligations
  • Architectural Standards
  • Audit Requirements
  • Reporting Considerations
  • Dependencies and Integrations
  • Cost Constraints
  • Initial State

Each of these drivers influences multiple quality-of-service aspects. Understanding those influences allows you to present options to the customer so that their priorities shape your design. When the customer is an informed participant in making these trade-offs, their ownership of the results increases and their level of frustration decreases.

At this point in the process, things should still be high-level, concentrating on who (role, not individual) does what in an effort to determine scope. UML use case diagrams can be useful for capturing this rough outline of the system. Your focus should not be on the tool (whiteboards or paper can be put to as good a use as high-dollar packages) or on whether the diagram is syntactically correct. Your objective is capturing and communicating. If a drawing or diagram increases understanding, that is the most important consideration.

Along with the use case diagram(s) noted above, I will generally use class diagrams to capture the major “things” the system deals with (analysis classes) and their relationships. The idea is not to capture fine detail (attributes and minor associations), but to make sure the “big picture” is understood. Likewise, package diagrams can be used to capture the conceptual structure of the system. This usage isn’t strictly standard UML, but such diagrams are easier to sketch on a whiteboard, and people (i.e. customers, developers, testers, support, and operations) can quickly visualize what you’re talking about.

One of the hardest things to master is learning not to jump in too early with a solution. You need to get the big picture (including features that will very likely be wanted in future releases) and address the drivers I listed above in order to determine the architecture. If at all possible, you want input and validation from customers, developers, testers, support, and operations. Throughout the design process, you should be defaulting to as simple a design as possible, only adding the complexity the drivers require.

Technologies, tools, or techniques that are new to the architect and/or the developers must be validated as well before being incorporated into a design. A proof of concept is vital. The one unforgivable sin for an architect is a design that is expensive to change. Insufficient analysis and failing to validate your assumptions are the quickest ways to an inadequate and inflexible design.

In my opinion, the most important skill to master is self-auditing. If you require an honest reason for all of your decisions, you will be far ahead of the game. Self-awareness is critical to avoid making thoughtless choices or going with a technology, not because it fits, but because it’s the current thing. If you wouldn’t accept “um…because” as a reason from someone else, you definitely shouldn’t accept it from yourself. Again, this is where your collaborators can be an extremely valuable sounding board.

For me, a layered design provides a flexible base architecture. Layered architectures are flexible enough to support a varying number of physical tiers depending on your needs: from all code layers on one tier for web applications, to separate front-end and back-end tiers for ClickOnce applications and SharePoint pages and web parts. Coupled with a message-oriented communication pattern, this style of architecture also allows you to expose services to other applications without duplicating functionality. For most applications, this becomes either my starting point for the software architecture or the desired to-be architecture.

My preferred structure is a base of three layers (User Interface, Business Process, and Data Access) plus a cross-cutting layer for messages and payloads (data transfer objects), exceptions, and common interfaces. By default, each layer would map to an assembly, but additional partitioning could make sense in some cases (for example, in order to isolate dependencies). The main considerations here are that responsibilities are uniquely assigned to a given layer and that a given layer communicates only with the layer below it.
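
As a minimal sketch of that structure (all names are hypothetical, not from any real project), the assembly layout might look something like this:

```csharp
// Sketch of the layering described above; each namespace would map
// to its own assembly (Visual Studio project).

namespace MyApp.Common
{
    // Cross-cutting layer: payloads (DTOs), exceptions, common interfaces.
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerRepository
    {
        CustomerDto GetById(int id);
    }
}

namespace MyApp.DataAccess
{
    using MyApp.Common;

    // Bottom layer: persistence only; references Common, nothing above it.
    public class CustomerRepository : ICustomerRepository
    {
        public CustomerDto GetById(int id)
        {
            // Query the database here; stubbed for the sketch.
            return new CustomerDto { Id = id, Name = "placeholder" };
        }
    }
}

namespace MyApp.BusinessProcess
{
    using MyApp.Common;

    // Middle layer: communicates only with the data access layer below it.
    public class CustomerProcess
    {
        private readonly ICustomerRepository _repository;

        public CustomerProcess(ICustomerRepository repository)
        {
            _repository = repository;
        }

        public CustomerDto LookUpCustomer(int id)
        {
            return _repository.GetById(id);
        }
    }
}

// The User Interface layer (not shown) would reference only BusinessProcess
// and Common, never DataAccess, preserving the one-way layering rule.
```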

Use cases map well to business process methods. I group related business process methods into business process classes. The analysis classes become the basis of the request and response message payloads for those methods. The use cases also provide the basis of a granular, task-based permission system. These permissions I then group into roles corresponding to the actors from the use case diagram(s).
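
A sketch of how that mapping might look for a hypothetical “Place Order” use case (all types and names invented for illustration):

```csharp
using System.Collections.Generic;

// Request and response payloads derived from the analysis classes.
public class OrderLine
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

public class PlaceOrderRequest
{
    public int CustomerId { get; set; }
    public List<OrderLine> Lines { get; set; } = new List<OrderLine>();
}

public class PlaceOrderResponse
{
    public int OrderId { get; set; }
    public bool Succeeded { get; set; }
}

// Task-based permissions correspond to the use cases themselves; roles
// grouping them would correspond to the actors from the use case diagrams.
public enum Permission { PlaceOrder, CancelOrder, ViewOrderHistory }

// Related use cases grouped into a single business process class.
public class OrderProcess
{
    public PlaceOrderResponse PlaceOrder(PlaceOrderRequest request)
    {
        Demand(Permission.PlaceOrder); // fail fast if the caller lacks the task permission
        // ... validation and persistence via the data access layer ...
        return new PlaceOrderResponse { OrderId = 1, Succeeded = true };
    }

    private static void Demand(Permission permission)
    {
        // Check the current user's roles for the permission; stubbed here.
    }
}
```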

My approach to communicating the architecture is to use a variety of techniques. The beauty of new development is that the diagrams noted above can be supplemented with the “skeleton” of the application: a Visual Studio solution file with the various project files and key classes roughed out. Part of the “roughing out” process should be setting up the dependencies, both between the projects and on external components. Any proofs of concept should be considered architectural artifacts as well. Sometimes text is the best mode of communication for a particular concept; it would be hard to diagram “favor chunky over chatty communication patterns”. The thing to remember is that “right” is defined by what clearly communicates the architectural design, not by what conforms to some arbitrary style.
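
For instance, a hypothetical sketch of what “chunky over chatty” means at the interface level:

```csharp
// "Chatty": three round trips to assemble one logical result. Tolerable
// in-process, costly across a service or network boundary.
public interface IChattyCustomerService
{
    string GetName(int customerId);
    string GetAddress(int customerId);
    string GetPhone(int customerId);
}

// "Chunky": one coarse-grained call returns the whole payload at once.
public interface IChunkyCustomerService
{
    CustomerDetails GetCustomerDetails(int customerId);
}

public class CustomerDetails
{
    public string Name { get; set; }
    public string Address { get; set; }
    public string Phone { get; set; }
}
```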

The level of detail I’ve noted to this point is what I consider to be architectural in nature. This does not, however, mean that an architect’s job is finished upon communicating the structure of the application. It is an ongoing responsibility to ensure that the architecture neither impedes the design and implementation of the application nor becomes irrelevant due to poor alignment with the drivers listed above. Maintaining this connection is easiest if the architect has a design role as well. It is possible to delegate the design role to the remainder of the development team, provided that the architect maintains close contact with the project. What is not possible is showing up, dropping off a “fully defined”, fill-in-the-blanks design, and walking away, never to return.

Detailing how I do low-level design would probably triple the size of this post. For me, it entails mapping the details of the use cases to elements of the application while working within the bounds of the architecture. The public structure of classes, forms, tables, stored procedures, etc. is fleshed out according to the requirements, and the details of the interactions are decided. As with the architectural designs, the watchwords should be “collaborative” and “iterative”.

Whether done for the designer or by the designer, activity diagrams using swim lanes for user actions and system reactions are a good tool for capturing the various paths through a given use case. Class diagrams can be used at this stage as well, now with more detail in terms of methods, properties, and associations. Likewise, code “skeletons” coupled with “ToDo” comments can be a powerful way to convey design decisions.
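
A hypothetical example of such a skeleton, with the design decisions carried in the comments:

```csharp
using System;

// Placeholder payload types, elided for brevity in the sketch.
public class ApproveInvoiceRequest { }
public class ApproveInvoiceResponse { }

// Skeleton class conveying design decisions via "ToDo" comments.
public class InvoiceProcess
{
    // TODO: validate the request against the approval business rules
    //       before touching the data access layer.
    // TODO: check the caller against the "Approve Invoice" task permission
    //       (see the use case diagram for the actor involved).
    public ApproveInvoiceResponse ApproveInvoice(ApproveInvoiceRequest request)
    {
        throw new NotImplementedException();
    }
}
```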

Regardless of the techniques used, it is important to remember that design documentation has a short shelf life. Requirements should be living documents, and an architectural overview may well be worth maintaining. Design documentation, however, will generally become stale very quickly. Tools that can generate diagrams from the code typically make it unnecessary to keep a static artifact synchronized by hand. Accordingly, substance should be preferred over style.

Having written over 1800 words to this point, I have little doubt that this is far from a perfect explanation of how to design software. That’s appropriate. Software architects do not do “perfect”. Instead, the goal is an optimal solution, given the constraints we have to work under, that’s capable of changing with the needs of its users. Perfect does not get delivered, and what does not get delivered never gets the opportunity to solve problems. Solving problems is what we should be doing.

To code or not to code is really not the question

Whether or not application and solution architects should code is an old controversy that never seems to go away. Currently it’s a discussion topic on the “97 Things Every Software Architect Should Know” group on LinkedIn. Unfortunately, the question misses the point. Whether an architect codes or not on a given project is irrelevant. What is relevant is that the architect’s technical capability (knowledge and experience) should match the role(s) he or she is responsible for. In other words, not “does the architect code”, but “can the architect code”. Additionally, it is vitally important that architects not be over-committed, leaving them with a responsibility they lack the time to fulfill.

Simon Brown, on the blog “Coding the Architecture”, has observed that “It’s a role, not a rank”. This role-based view is key. Simply put, the architect role architects, the designer role designs, and the coder role codes. It is the role(s) that are significant, not the title. It is fairly common to find individuals filling multiple roles. When the demands of those roles exceed either the capability or the bandwidth of an individual, trouble ensues.

In terms of technical knowledge, there is both breadth and depth. The requirements for those dimensions vary based on the role. Different individuals have different capacities for attaining both breadth and depth of technical knowledge. However, the limiting factor will be capacity for work. Even if someone has the ability to attain the greatest possible depth of knowledge across the broadest selection of domains, there are only so many hours in the day.

If we factor in the tasks per role, the intensity of those tasks, the number of roles, and the number of applications, the workload on a given individual becomes clear. If that workload exceeds capacity, then some tailoring must take place to bring it back in line. Task intensity is generally a poor candidate for this type of tailoring, as reducing focus will generally reduce quality. Likewise, paring tasks from a given role will likely result in diminished performance. If an architect is charged with both architectural and detailed design across multiple applications simultaneously, chances are that architect will be over-committed. If this is resolved by rushing work and/or eliminating necessary tasks (communicating and collaborating with the development team; adapting to changed requirements, failed assumptions, and better information; etc.), then the result is the “pigeon architect” who swoops in, deposits a mess, and flies away, leaving the development team to clean up. A far better option is to reduce the number of roles and/or the number of applications that the architect is responsible for. Absent that, extending project timelines to reflect the architect’s capacity will also work.
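
As a purely illustrative back-of-the-envelope sketch (the task counts and hours are invented; the role mix mirrors the one described below), the arithmetic might look like this:

```csharp
using System;
using System.Linq;

class WorkloadCheck
{
    static void Main()
    {
        // (role, number of applications, tasks per app per week, hours per task)
        var assignments = new[]
        {
            (Role: "Solution architect",    Apps: 4, TasksPerWeek: 2, HoursPerTask: 2.0),
            (Role: "Application architect", Apps: 3, TasksPerWeek: 3, HoursPerTask: 2.0),
            (Role: "Designer/developer",    Apps: 1, TasksPerWeek: 8, HoursPerTask: 2.5),
        };

        // Workload is roughly apps x tasks x intensity, summed across roles.
        double weeklyHours = assignments.Sum(a => a.Apps * a.TasksPerWeek * a.HoursPerTask);
        Console.WriteLine($"Estimated load: {weeklyHours} hours/week");

        // If this exceeds capacity, shed roles or applications (or extend the
        // timeline) rather than cutting task intensity or skipping tasks.
    }
}
```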

Over the last sixteen years I have held roles ranging from developer for a single application, to designer/developer for a single application, to application architect/designer/developer for a family of two applications, to solution architect for over a dozen applications. My current roles are solution architect for four applications, application architect for three, and designer/developer for one. Stepping away from the coding role for a number of years posed no problems; I was able to resume it easily because I had remained up to date via coding proofs of concept. I quickly learned, however, that unless I had the bandwidth to fulfill a particular role (such as detailed design) completely, it was better to delegate that role to others than to try to get by performing it in part. I have no doubt that there are plenty of ivory tower architects, but I suspect there are even more over-committed ones.