Organizing an Application – Layering, Slicing, or Dicing?

stump cut to show wood grain

How did you choose the architecture for your last greenfield product?

Perhaps a better question is: did you consciously choose the architecture of your last greenfield product?

When I tweeted an announcement of my previous post, “Accidental Architecture”, I received a couple of replies from Simon Brown:

While I tend to use layers in my designs, Simon’s observation is definitely on point. Purposely using a particular architectural style is a far different thing from using that style “just because”. Understanding the design considerations that drove a particular set of architectural choices can be extremely useful in making sure the design is worked within rather than around. This understanding is, of course, impossible if there was no consideration involved.

Structure is fundamental to the architecture of an application, existing as both logical relationships between modules (typically classes) and the packaging of those modules into executable components. Packaging provides stronger constraints on relationships. It’s easier to break a convention that classes in namespace A don’t use those in namespace C except via classes in namespace B if they’re all in the same package. Packaged separately, constraints on the visibility of classes can provide an extra layer of enforcement. Packaging also affects deployment, dependency management, and other capabilities. For developers, the package structure of an application is probably the most visible aspect of the application’s architecture.
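
To illustrate (a minimal sketch in Java, with hypothetical package and class names, not taken from any of the posts discussed here): if the persistence implementation is package-private, code in a UI package cannot reference it directly, so the compiler enforces what convention alone could not.

package com.example.app.persistence;

// Public API of the persistence layer: the only type other packages can see.
public interface OrderRepository {
    String findById(long id);
}

// Package-private implementation: classes outside com.example.app.persistence
// cannot reference this type, so the layering rule is compiler-enforced
// rather than merely conventional.
class InMemoryOrderRepository implements OrderRepository {
    @Override
    public String findById(long id) {
        return "order-" + id;
    }
}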

Various partitioning strategies exist. Many choose one that reflects a horizontally layered design, with one or more packages per layer. Some, such as Jimmy Bogard, prefer a single-package strategy. Simon Brown, however, has been advocating a vertically sliced style that he calls “architecturally evident”, where the packaging reflects a bounded context and the layering is internal. Johannes Brodwall is another prominent proponent of this partitioning strategy.

The all-in-one strategy is, in my opinion, a limited one. Because so many of the products I’ve worked on have evolved to include service interfaces (often as separate sites), I favor keeping the back-end separate from the front-end as a rule. While a single-package application could be refactored to tease it apart when necessary, architectural refactoring can be a much more involved process than code refactoring. With only soft limits in both the horizontal and vertical dimensions, it’s likely that the overall design will become muddled as the application grows. Both the horizontal and the vertical partitioning strategies allow for greater control over component relationships.

Determining the optimal partitioning strategy will involve understanding what goals are most important and what scenarios are most likely. Horizontal partitions promote adherence to logical layering rules (e.g. the UI does not make use of data access except via the business layer) and can be used for tiered deployments where the front-end and back-end are on different machines. Horizontal layers are also useful for dynamically configuring dependencies (e.g. swappable persistence layers). Vertical partitions promote semantic cohesion by providing hard contextual boundaries. Vertical partitioning enables easier architectural refactoring from a monolithic structure to a distributed one built around microservices.
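
As a sketch of that last point (hypothetical names, assuming plain Java with dependencies wired at a composition root rather than any particular framework), the business layer codes against an abstraction and the concrete persistence implementation is chosen in one place:

package com.example.app;

// Hypothetical sketch: upper layers depend only on this abstraction.
interface CustomerStore {
    String nameOf(long id);
}

class SqlCustomerStore implements CustomerStore {
    public String nameOf(long id) { return "sql:customer-" + id; }
}

class FileCustomerStore implements CustomerStore {
    public String nameOf(long id) { return "file:customer-" + id; }
}

// The business layer never names a concrete store, so the persistence
// layer can be swapped without touching this class.
class GreetingService {
    private final CustomerStore store;
    GreetingService(CustomerStore store) { this.store = store; }
    String greet(long id) { return "Hello, " + store.nameOf(id); }
}

public class Main {
    public static void main(String[] args) {
        // Composition root: the one place the concrete layer is chosen,
        // e.g. driven by configuration rather than hard-coded throughout.
        CustomerStore store = (args.length > 0 && "sql".equals(args[0]))
                ? new SqlCustomerStore()
                : new FileCustomerStore();
        System.out.println(new GreetingService(store).greet(42));
    }
}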

Another option would be to combine layering with slicing – dicing, if you will. This technique would allow you to combine the benefits of both approaches, albeit with greater complexity. There is also the danger of weakening the contextual cohesion when layering a vertical slice. The common caution (at least in the .Net world) that this harms performance seems to be more an issue during coding and builds than at run time.
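
To make the contrast concrete, a hypothetical Java package naming scheme (the application and context names are invented for illustration) might look like:

// Horizontal (layered): one package per layer, all contexts mixed together
com.example.shop.ui
com.example.shop.domain
com.example.shop.persistence

// Vertical (sliced): one package per bounded context, layering kept internal
com.example.shop.ordering
com.example.shop.catalog

// Diced: sliced by context first, then layered within each slice
com.example.shop.ordering.ui
com.example.shop.ordering.domain
com.example.shop.ordering.persistence
com.example.shop.catalog.ui
com.example.shop.catalog.domain
com.example.shop.catalog.persistence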

As with most architectural decisions, the answer to which partitioning scheme is best is “it depends”. Knowing what you want and the priority of those wants in relation to each other is vitally important. You won’t be able to answer “The Most Important Question” otherwise.

“Design By Committee” on Iasa Global Blog

Who's driving?

Can a team of experienced, empowered developers successfully design the architecture of a product? Sure.

Can a team of experienced, empowered developers successfully design the architecture of a product without a unified vision of how the architecture should be structured to accomplish its purpose? Probably not.

Can a team of experienced, empowered developers successfully design the architecture of a product while staying within the bounds of their role as developers? Absolutely not.

See the full post on the Iasa Global Blog (a re-post, originally published here).

“Bumper Sticker Philosophy” on Iasa Global Blog

Yeah!  Wait, what?

YAGNI: You Ain’t Gonna Need It.

Sound bites are great – short, sweet, clear, and simple. Just like real life, right?

Seductively simple certainty is what makes slogans so problematic. Uncertainty and ambiguity occur far more frequently in the real world. Context and nuance add complexity, but not all complexity can be avoided. In fact, removing essential complexity risks misapplication of otherwise valid principles.

See the full post on the Iasa Global Blog (a re-post, originally published here).

Accidental Architecture

Hillside Slum

I’m not sure if it’s ironic or fitting that my very first post on Form Follows Function, “Like it or not, you have an architecture (in fact, you may have several)”, dealt with the concept of accidental architecture. A blog dedicated to software and solution architecture starts off by discussing the fact that architecture exists even in the absence of intentional design? It is, however, a theme that seems to recur.

The latest recurrence was a Twitter exchange with Ruth Malan, in which she stated:

Design is the act and the outcome. We design a system. The system has a design.

This prompted Arnon Rotem-Gal-Oz to observe that architecture need not be intentional and “…even areas you neglect well [sic] have design and then you’d have to deal with its implications”. To this I added “accidental architecture is still architecture – whether it’s good architecture or not is another thing”.

Ruth closed with a reference to a passage by Grady Booch:

Every software-intensive system has an architecture. In some cases that architecture is intentional, while in others it is accidental. Most of the time it is both, born of the consequences of a myriad of design decisions made by its architects and its developers over the lifetime of a system, from its inception through its evolution.

The idea that an architecture can “emerge” out of skillful construction rather than as a result of purposeful design is trivially true. The “Big Ball of Mud”, an ad hoc arrangement of code that grows organically, remains a popular design pattern (yes, it’s a pattern rather than an anti-pattern – see the Introduction of “Big Ball of Mud” for an explanation of why). What remains in question is how effective an architecture that largely or even entirely “emerges” can be.

Even the current architectural style of the day, microservices, can fall prey to the Big Ball of Mud syndrome. A plethora of small service applications developed without a unifying vision of how they will make up a coherent whole can easily turn muddy (if not already born muddy). The tagline of Simon Brown’s “Distributed big balls of mud” sums it up: “If you can’t build a monolith, what makes you think microservices are the answer?”.

Someone building a house using this theory might purchase the finest of building materials and fixtures. They might construct and finish each room with the greatest of care. If, however, the bathroom is built opening into the dining room and kitchen, some might question the design. Software, solution, and even enterprise IT architectures exist as systems of systems. The execution of a system’s components is extremely important, but you cannot ignore the context of the larger ecosystem in which those components will exist.

Too much design up front (architects attempting to make decisions below the level of granularity for which they have sufficient information) is obviously wrong. It’s like attempting to drive while blindfolded using only a GPS. By the same token, jumping in the car and driving without any idea of a destination beyond what’s at the end of your hood is unlikely to be successful either. Finding a workable balance between the two seems to be the optimal solution.

[Shanty Town Image by Otsogey via Wikimedia Commons.]

“When Silos Make Sense” on Iasa Global Blog

silos

Separation of Concerns is a well-known concept in application architecture. Over the years, application structures have evolved from monolithic to modular, using techniques such as encapsulation and abstraction to reduce coupling and increase cohesion. The purpose of doing so is quite simple – it yields software systems that are easier to understand, change, and enhance.

See the full post on the Iasa Global Blog (a re-post, originally published here).

Design Follies – Architect Knows Best

Carmen Miranda

Last (for now), but most definitely not least of the design follies is putting your own “vision” above the needs of the customer. Worse than falling for the latest technology fad or failing to adequately think things through, from an ethical standpoint, putting your ego ahead of your duty to your customer is as bad as making current design decisions with an eye to justifying prior mistakes. None of these reflect favorably on the perpetrator.

Whenever I consider this particular anti-pattern, I tend to remember the reality TV series Trading Spaces. In contrast to most of the designers on the show, one designer was renowned for ignoring the wishes of the owners, at times deliberately doing things he had been asked not to do. Acting the diva might make for good television, but it is abhorrent in terms of professionalism.

“Learn by shipping” can be a valid product development technique when dealing with the truly innovative. As the past two years in the operating system space have shown, that technique may not work as well in mature markets (note: phones/devices that can take on duties previously in the realm of personal computers = absolutely brilliant; computers downgraded to phone/tablet capabilities = not even close). Learning by listening can be much cheaper and just as effective. Giff Constable recently asserted that “…companies, whether startup or enterprise, that do not aggressively build learning into their processes will spend 3x to 5x more time and money…”. Failing to listen to pre-release criticism is, in my opinion, failure to learn at an opportune time.

Change represents both opportunity and danger, more so when we add in people’s reaction to change. The opportunity to innovate can disappear if we are insensitive to the customer’s potential reaction and the reasons for that reaction:

Imagine living in the same house for 10 years. Over that period, you’ve accumulated a lot of stuff.

To keep your house organized, you found places to put everything. Every place made sense to you. Most of the time, you have no trouble finding anything you want. Occasionally, there’s something you can’t find, like a tape measure, because you can’t remember where you last put it, but with a little poking around (and asking your housemates), you come upon it and all is well.

One morning, you wake up and the house is completely different. Not a little different–completely different.

Nothing is where it used to be. The glasses in the kitchen, the clothes in your closets, and the furniture are reorganized. Even the walls and windows are all completely rearranged.

Whoever rearranged everything didn’t consult you. They didn’t warn you it was coming. They just took it upon themselves to make it happen.

In this “new” house, nothing seems to be where you’d expect it. The coffee cups are stored under your bed. You find your pants on the bottom shelf of the freezer. Logic doesn’t seem to be part of the organization scheme.

The worst part is that you still need to get to work on time. Usually, it only takes you about 45 minutes to get ready, so that’s all you allotted yourself. After all, you didn’t know this was coming, so why would you set your alarm differently? Nothing is where it’s supposed to be, you’re spending a lot of time trying to find everything, and the clock is running out–you’re going to be late and it isn’t your fault!

Jared M. Spool, “Designing Embraceable Change”

Reading that particular passage, I find my “inner voice” rising in pitch and cadence. It evokes a sense of hysteria, and understandably so. Later in the post, Jared points out a key concept when dealing with change:

It’s not that people resist change whole-scale. They just hate losing control and feeling stupid.

It’s important to remember that your intention is in most cases less important than the impact of the change on the customer. Things like continuous deployment, although they may be adopted to improve customer satisfaction, can backfire if the intent and the effect do not align. As I’ve noted previously, user experience is extremely important. Unintentionally degrading that experience is bad enough. Purposely making people feel out of control and “stupid” is probably the one case where your intention is more important to the customer, and not in a good way.

“Who’s your predator?” on Iasa Global Blog

Just for the howl of it

Have you ever worked with someone with a talent for asking inconvenient questions? They’re the kind of person who asks “Why?” when you really don’t have a good reason or “How?” when you’re really not sure. Even worse, they’re always capable of finding those scenarios where your otherwise foolproof plan unravels.

Do you have someone like that?

If so, treasure them!

In a Twitter conversation with Charlie Alfred, he pointed out that you need a predator, “…an entity that seeks the weakest of the designs that evolve from a base to ensure survival of fittest”. You need this predator, because “…without a predator or three, there’s no limit to the number of unfit designs that can evolve”.

See the full post on the Iasa Global Blog (a re-post, originally published here).

Design Follies – ‘Can I’ vs ‘Should I’

What could go wrong?

Everyone likes a challenge from time to time. However, some challenges should be avoided. Overly complex systems that stretch the limits of possibility can easily exceed those limits. Even when they can be achieved, the better question is whether they can be achieved in a satisfactory manner (stable, performant, secure, etc.). The more important question is not “Can it be done?” but “Should it be done?”.

In his essay “Requirements vs Architecture”, Charlie Alfred relates a quote from German aircraft designer Willy Messerschmitt: “The Air Ministry can have whichever features it wishes, as long as the resulting airplane is not also required to fly.” Messerschmitt’s point, of course, was that regardless of what someone specified (even regardless of who that someone was), the laws of physics would rule. The client’s understanding of those laws, much less agreement, was completely orthogonal to the applicability of those laws.

Seventy years later, that lesson has yet to be learned. Ironically, one of the latest examples involves the design of military aircraft. At $400 billion, the grab-bag of features that is the F-35 is described by Time as “The Most Expensive Weapon Ever Built”:

The single-engine, single-seat F-35 is a real-life example of the adage that a camel is a horse designed by a committee. Think of it as a flying Swiss Army knife, able to engage in dogfights, drop bombs and spy. Tweaking the plane’s hardware makes the F-35A stealthy enough for the Air Force, the F-35B’s vertical-landing capability lets it operate from the Marines’ amphibious ships, and the Navy F-35C’s design is beefy enough to endure punishing carrier operations.

“We’ve put all our eggs in the F-35 basket,” said Texas Republican Senator John Cornyn. Given that, one might think the military would have approached the aircraft’s development conservatively. In fact, the Pentagon did just the opposite. It opted to build three versions of a single plane averaging $160 million each (challenge No. 1), agreed that the planes should be able to perform multiple missions (challenge No. 2), then started rolling them off the assembly line while the blueprints were still in flux–more than a decade before critical developmental testing was finished (challenge No. 3). The military has already spent $373 million to fix planes already bought; the ultimate repair bill for imperfect planes has been estimated at close to $8 billion.

As I’ve said before, you have to ask “why?” and you have to pay attention to the answer. If the answer is “because it’s a challenge”, then that’s as ethically suspect as “I wanted to try out the technology”. Focus and discipline aren’t very sexy, but going hundreds of billions over budget isn’t a resume highlight.

Some might consider this type of soul-searching unnecessary. Neil deGrasse Tyson sparked a minor controversy when he suggested students should avoid taking philosophy because asking too many questions “can really mess you up”. The rebuttal to that, however, is:

Another way of falling prey to this particular anti-pattern is via inadequate analysis. In Tom Graves’ “Please don’t touch the touchscreen”, he tells the story of just such a debacle. His doctor’s office upgraded to a new touchscreen-based check-in system. Soon there was a bottle of hand sanitizer and a sign asking patients to clean up after using it (infection control). Cleaning up beforehand wouldn’t work because the touchscreen would get smeared with sanitizer:

So maybe the touchscreen was not such a good idea? Or, maybe, not even the auto-check-in at all – rather than an in-person check-in, allowing the reception-staff to build up a better in-person knowledge of the clinic’s clientele? Hmm…

Sometimes thinking things through (and paying attention to what can go wrong) can save a lot of time and money.

[Tightrope Walker Image by Adi Holzer via Wikimedia Commons.]

Design Follies – ‘It’s all about the technology’

Robot playing ping pong

I recently described the method I use to pick topics for posts as “some posts I plan and some just grab me”. The topic for this post seems to be in a class all its own – it stalks me. In previous posts I’ve mentioned how customers are looking for answers to problems, not technology, not artistry. Recently, however, the principle keeps popping up, prompting me to re-visit it.

Just a couple of weeks back, I was chatting with someone on the information systems faculty of a local university. One of the things she mentioned was the importance of understanding that the technology was secondary. Success comes from determining what the customer wants/needs to do, and then providing the how. It’s about finding a place in the customer’s narrative, not finding a way to use a particular technology. After that conversation, I dutifully added a note to my future posts list that it was time for a post on technology as a means, not an end. There it sat for two weeks, until Tom Graves posted his “My ‘EA Masterclass’ coming to Australia”, complete with a slide deck that included this (slide #4):

Slide Show Screen Shot

Okay, I get it – time to write the post.

In this line of work, you need to have a deep appreciation of and interest in technology. If you lack that, I sincerely doubt that you will have the drive necessary to remain current, particularly in today’s environment. Bearing that in mind, however, all the technological brilliance in the world does no good if the product in question fails to meet someone’s needs. How we do so is far less important than whether we do so. Empathy is critically important, because as Jeff Sussna observed in his “Failure == Failure to Empathize”:

When you see things from another’s perspective, you instinctively want to do something useful based on what you see. Empathy naturally drives action in response to listening.

A large part of what makes empathy so important is the architecture of the problems we commonly deal with. Rather than a single context (set of stakeholders sharing similar goals and concerns), most of what we deal with involves multiple, often competing and conflicting contexts. These conflicts are a source of challenges (obstacles to delivering desired value). As Charlie Alfred observed in “Contexts and Challenges: Toward the Architecture of the Problem”: “Tradeoffs between challenges in a context are often subordinate to tradeoffs between similar challenges across contexts”.

Technology belongs to the architecture of the solution, which, to be effective, must follow from the architecture of the problem. When an aspect of the solution does not trace back to some aspect of the problem, that disconnect should be considered a red flag. Choosing a technology for the wrong reason (“this looks cool!”) is a disservice to our customers. Design decisions not only give structure to a system, they also limit it. Form follows function initially, but then function is constrained by the existing form.

As always, the most important question when making design decisions is “why?”

[Photograph by Humanrobo via Wikimedia Commons.]

“Emergence versus Evolution” on Iasa Global Blog

Heidelberg Man

Hayim Makabee’s recent post, “The Myth of Emergent Design and the Big Ball of Mud”, encountered a relatively critical reception on two of the LinkedIn groups we’re both members of. Much of that resistance seemed to stem from a belief that the choice was between Big Design Up Front (BDUF) and Emergent Design. Hayim’s position, with which I agree, is that there is a continuum of design, with BDUF and Emergent Design representing the extremes. His position, with which I also agree, is that both extremes are unlikely to produce good results, and that the answer lies in between.

See the full post on the Iasa Global Blog (a re-post, originally published here).