Can you afford microservices?


Much has been written about the potential benefits of designing applications using microservices. A fair amount has also been written about the potential pitfalls. On this blog, there’s been a combination of both. As I noted in “Are Microservices the Next Big Thing?”: it’s not the technique itself that makes or breaks a design; it’s how applicable the technique is to the problem at hand.

It’s important, however, to understand that “applicable to the problem at hand” isn’t strictly a technical question. The diagram in Philippe Kruchten’s tweet below captures the full picture of a workable solution:

As Kruchten pointed out in his post ‘Three “-tures”: architecture, infrastructure, and team structure’, the architecture of the system, the system’s infrastructure, and the structure of the team developing the system are mutually supporting. These aspects of the solution’s architecture must be kept aligned in order for the solution to work. In my opinion, it should be taken as a given that this architecture of the solution must also align with the architecture of the problem as a minimum condition for being considered fit for purpose.

Martin Fowler alluded to the need to align architecture, infrastructure, and team structure in “MicroservicePrerequisites” when he listed rapid provisioning, basic monitoring, and rapid deployment as pre-conditions for microservices. These capabilities not only represent infrastructure requirements, but also “…imply an important organizational shift – close collaboration between developers and operations: the DevOps culture”. Permanent product teams building and operating applications are, in my opinion, an extremely effective way to deliver IT. It must be realized, however, that effectiveness comes with a price tag, in terms of people, tools, and infrastructure.
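
As a minimal illustration of the “basic monitoring” prerequisite, each service might expose a health endpoint that operations tooling can poll. This is my own sketch, not Fowler’s; the /health path and port are arbitrary conventions assumed for the example:

```python
# A minimal sketch of the "basic monitoring" prerequisite -- my own
# illustration, not Fowler's. The /health path and port 8080 are
# assumed conventions for the example.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Report liveness; real checks might probe dependencies too.
            body = json.dumps({"status": "up"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Operations tooling polls GET /health on each service instance.
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```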

In “MicroservicePremium”, Fowler further stated “don’t even consider microservices unless you have a system that’s too complex to manage as a monolith”, identifying “sheer size” as the biggest source of complexity. Size will encompass both technical and organizational concerns:

The microservice approach to division is different, splitting up into services organized around business capability. Such services take a broad-stack implementation of software for that business area, including user-interface, persistent storage, and any external collaborations. Consequently the teams are cross-functional, including the full range of skills required for the development: user-experience, database, and project management.

Expanding on this, the ideal organization is one cross-functional team per microservice/bounded context. Even with very small teams, this requires either significant expenditure or a compromise in how the architectural and social aspects (i.e. Conway’s Law) interact in this architectural style.

Other requirements inherent in a microservice architecture include API governance and infrastructure services to support distributed processing (e.g. a service registry). Data facilities that are trivial in a monolithic environment, such as transactions, referential integrity, and complex queries, are absent in a distributed one, and facilities may need to be bought or built to compensate. In a distributed environment, even error logging requires special consideration to avoid drowning in complexity.
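
One common remedy is to tag every log line with a correlation id that travels with the request from service to service. Here is a minimal sketch using only the standard library; the header name and helper functions are illustrative assumptions on my part, not the API of any particular framework:

```python
import logging
import uuid

# A widely used convention, though not a formal standard.
CORRELATION_HEADER = "X-Correlation-Id"

def get_or_create_correlation_id(headers: dict) -> str:
    """Reuse the upstream caller's id if present; otherwise start a new one."""
    return headers.get(CORRELATION_HEADER) or uuid.uuid4().hex

def log_with_correlation(logger: logging.Logger, correlation_id: str, msg: str) -> None:
    """Prefix every log line with the id so one request can be traced end to end."""
    logger.info("[corr=%s] %s", correlation_id, msg)

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    incoming_headers = {}  # first hop: no upstream id yet
    cid = get_or_create_correlation_id(incoming_headers)
    log_with_correlation(logging.getLogger("orders"), cid, "order received")
    # Each service would forward CORRELATION_HEADER on its outbound calls.
```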

The overhead in terms of organization, infrastructure, and tooling, whether in ideal or compromised form, will introduce complexity and cost. I would, in fact, expect compromises made to avoid costs to introduce even more complexity. If the profile of the system in terms of business value and necessary complexity (i.e. complexity inherent in the business function) warrants the additional overhead, then that overhead can represent a valid solution to the problem at hand. If, however, the complexity is solely created by the overhead, without an underlying need, the solution becomes suspect. Adding cost and complexity without offsetting benefits will likely lead to problems. Matching the solution to the problem and balancing those costs and benefits requires the attention of an architectural role at the application level, rather than relying on each team to work independently and hoping for coherence and economy.


Form Follows Function on SPaMCast 369


This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 369, features Tom’s essay on stand-up meetings, Kim Pries on mastery, and a Form Follows Function installment on #NoEstimates.

Tom and I discuss my post “#NoEstimates – Questions, Answers, and Credibility” and take on whether it’s realistic to eliminate estimates given their value to customers.

Organizations and Innovation – Swim or Die!

[Image: shark mural]

One of the few downsides to being a great white shark is the need to keep moving, even while sleeping, in order to keep water flowing over the gills. A shark that stays still too long dies. Likewise, organizations must remain in motion, changing and adapting to their ecosystem, or risk dying out as well.

Greger Wikstrand and I have been carrying on a discussion about architecture, innovation, and organizations as systems. Here’s the background so far:

  1. “We Deliver Decisions (Who Needs Architects?)” – I discussed how the practice of software architecture involves decision-making, combining analysis with the situational awareness needed to deal with emergent factors while avoiding cognitive biases.
  2. “Serendipity with Woody Zuill” – Greger pointed me to a short video of him and Woody Zuill discussing serendipity in software development.
  3. “Fixing IT – Too Big to Succeed?” – Woody’s comments in the video re: the stifling effects of bureaucracy in IT inspired me to discuss the need for embedded IT to address those effects and to promote better customer-centricity than what’s normal for project-oriented IT shops.
  4. “Serendipity and successful innovation” – Greger’s post pointed out that structure alone is insufficient to promote innovation; organizations must be prepared to recognize and respond to opportunities, and innovation must be able to scale.
  5. “Inflection Points and the Ingredients of Innovation” – I expanded on Greger’s post, using WWI as an example of a time when innovation yielded uneven results, because effective innovation requires technology, an understanding of how to employ it, and an organizational structure that allows it to be used well.
  6. “Social innovation and tech go hand-in-hand” – Greger continued the theme, connecting the social and technological aspects of innovation.

The overriding theme of the conversation is that innovation doesn’t happen in a vacuum. The software systems that people tend to think about when talking about innovation are irrelevant outside the context of the organizations that use them to innovate. Those organizations are irrelevant outside of the context they operate in, their ecosystem. A great organization with cutting-edge technology but no market is not long for this world. This fractal nature of social systems using software systems within an overarching ecosystem is why I find enterprise architecture increasingly relevant to software architecture. I use “increasingly” in the sense that my awareness of its importance is increasing; the relevance has always been there. Matt Ballantine’s post, “Down Pit”, points this out in the context of Britain’s post-WWII coal mining industry.

Like the sharks, organizations cannot stay immobile and expect to live. Unlike biological organisms, organizations have a far greater degree of control over their internal structures and processes to enable them to adapt to their environment. Whether it’s referred to as Enterprise Architecture or not, there is a need for organizations to understand the context in which they operate. Adapting to this context requires both technological and social (structural and cultural) flexibility.

Unlike the sharks that evolved to “sleep swim”, organizational adaptation must be intentional, considered, and ongoing. Rather than formulaic responses, which tend to jump to the ends of spectra, tailoring is needed to find the right mix. This is one concept that I neglected to fully develop in “Fixing IT – Too Big to Succeed?”: too distributed is likely to be as bad as too centralized. Some aspects of IT need to be centralized to be effective, whereas others need to be federated to be effective. Not all embedded teams will be at the same organizational scope. Pragmatism is required as well. Blindly adhering to a rule that each application must have a dedicated permanent product team means either that the company goes bankrupt or that only units that can afford a team get applications. A middle ground, while not doctrinally pure, is both possible and preferable.

It’s not enough to just swim; you must swim with purpose. Otherwise, all that emerges is entropy.

Inflection Points and the Ingredients of Innovation

[Image: WWI photo montage]

One of my hobbies is the study of history. Not the dry, dusty, “…on this date these people did that” type of history; rather, I’m fascinated by the story of how real people interacted with each other and the world around them. I’m interested in the brilliance and the stupidity, the master strokes and the blunders, the nobility and the infamy; all of which are often found combined in the same person. I like the type of history that, in explaining the world of yesterday, helps make sense of the world of today.

World War I is one of those large-scale human tragedies that holds lessons for today. Teetering on the edge of two very different times, it mixed the tools of industrial warfare (machine guns, armored vehicles, and air power) with infantry tactics barely different from those of a hundred years previous. Over a little more than four years, old and new collided cataclysmically. It provides a brutally stark picture of the uneven nature of innovation and how that uneven nature can yield both triumph and tragedy.

So, just what does this have to do with information technology, systems development, and software architecture?

Quite a lot, actually, if you look beyond the surface.

Over the last few weeks, I’ve been having a running conversation with Greger Wikstrand on this blog and Twitter, about various aspects of IT. “We Deliver Decisions (Who Needs Architects?)” wove together some observations about situational awareness and confirmation bias into a discussion about the need for and aim of the practice of architectural design. In response, Greger posted a link to a video with him and Woody Zuill on serendipity in software development. That prompted my last post, “Fixing IT – Too Big to Succeed?”, discussing how an embedded IT approach could be used to foster the organizational agility needed to provide IT both effectively and efficiently.

Greger responded with “Serendipity and successful innovation”, making a number of extremely important points. For example, while fortune does favor the prepared, recognition of an opportunity is not enough. Likewise, an organizational structure that doesn’t impede innovation is important, but not impeding innovation is not the same as actively driving it forward. Evaluation of ideas is critical as well, in order to differentiate “…between the true gold of serendipity and the fool’s gold of sheer coincidence”. That evaluation, however, needs to be nimble as well. He closed the post with an example of small-company innovation on the part of a large organization.

That last part was what brought WWI to mind. The first and second World Wars were very different affairs (both horrible, but in different ways). Quite a lot of the technologies that we think of as characteristic of the second (e.g. warplanes and tanks) originated in the first. Having a technology and discovering how best to employ it are two very different things. Certainly there was some technical evolution over the intervening twenty years, but the biggest change was in how those technologies were employed.

In some cases, the opposite problem was in play. The organizational structures used by the combatants had proved their effectiveness for over a hundred years. Mission tactics, AKA Auftragstaktik (whereby a commander, instead of giving detailed instructions, provides a desired outcome to his subordinates, who are then responsible for achieving that outcome within broad guidelines), was also well known at the time. But while radio existed, it was not yet portable enough to link dispersed mobile front-line units with headquarters. Instead, that communication was limited to field phones and runners, and tactical communication was limited by the range of the human voice, forcing units to bunch up.

Without all the ingredients (technology, the understanding of how to employ it, and an effective organizational structure), outcomes are likely to be poor. Information must flow both up and down the hierarchy; otherwise decisions are made blindly and without coordination. The parts must not only perform their individual functions, but must also collaborate. Organizations that fail to operate effectively across the full stack fail to operate effectively at all.

Fixing IT – Too Big to Succeed?

Continuing our discussion that I mentioned in my last post, Greger Wikstrand tweeted the following:

I encourage you to watch the video; it’s short (7:39) and makes some important points, which I’ll touch on below.

Serendipity, when it occurs, is a beautiful thing, and it can occur when heads-down order-taking is replaced with collaboration. Awareness of business concerns on the part of IT staff enhances their ability to work with end users to create effective solutions. Awareness, however, is insufficient.

Woody Zuill’s early remarks about the bureaucracy that gets in the way of that serendipity struck a nerve. Awareness of how to address a need does little good if the nature of the organization prevents the need from being addressed. Equally bad is when a modest need must be bloated beyond recognition in order to become “significant” enough for IT intervention. Neither situation works well for the end users, IT, or the business as a whole.

So what’s needed to remedy this?

In “Project lead time”, John Cook asserted that project lead time is related to company size:

A plausible guess is that project lead time would be proportional to the logarithm of the company size. If a company with n employees has a hierarchy with every manager having m subordinates, the number of management layers would be around logm(n). If every project has to be approved by every layer of management, lead time should be logarithmic in the company size. This implies huge companies only take a little longer to start projects than medium-sized companies, and that doesn’t match my experience.
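
For concreteness, here is the arithmetic behind that guess, the model Cook then finds at odds with experience, as a small sketch of my own (the span of control of 7 is an assumed value):

```python
import math

def management_layers(employees: int, span_of_control: int) -> int:
    """Approximate layer count in a hierarchy where every manager
    has span_of_control direct reports."""
    return math.ceil(math.log(employees, span_of_control))

# If every layer must approve, lead time tracks the layer count --
# growing only logarithmically with company size.
for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} employees -> ~{management_layers(n, 7)} layers")
```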

While I agree in principle, I suspect that the dominant factor is not raw size, but the number of management layers Cook refers to. In my experience, equally sized organizations can be more or less nimble depending on the number of approvals required and the reporting structure of the approvers. Centralized, shared resources tend to move more slowly than those that are federated and dedicated. Ownership increases both customer satisfaction and customers’ sense of responsibility for outcomes. Thus, embedding IT staff in business units as integral members of the team seems a better choice than attempting to “align” a shifting set of workers temporarily assigned to a particular project.

The nature of project work, when combined with a traditional shared-resource organization, stands as a stumbling block to effective, customer-centric IT. IT’s customers (whether internal, external, or both) are interested in products, not projects. For them, the product isn’t “done” until they’re no longer using it. While I won’t go so far as to say #NoProjects, I share many of the same opinions. Using projects and project management to evolve a product is one thing; project-centric IT, I’ve found, is detrimental to all involved.

Having previously led embedded teams (which worked as pseudo-contractors: employed by IT, with the goals and gold coming from the business unit), I can provide at least anecdotal support for the idea that bringing IT and its customers closer pays dividends. Where IT plays general contractor, providing some services and sub-contracting others under the direction of those paying the bills, it can put its expertise to use meeting needs and adding value rather than just taking orders. This not only allows for the small wins that Woody Zuill mentioned, but also for integrating those small wins into the overall operation of the business units, reducing the number of disconnected data islands.

Obviously there are some constraints around this strategy. Business units need to be able to support their IT costs out of their own budget (which sounds more like a feature than a bug). Likewise, some aspects of IT are more global than local. The answer there is matching the scope of the IT provider unit to the scope of the customer unit’s responsibility. Federating and embedding allows decisions to be made closer to the point of impact, by those with the best visibility into the costs and benefits. Centralization forces everything upwards, away from those points.

Operating as a monolithic parallel organization that must be “aligned” with the rest of the business is unlikely to yield a system that is effective or efficient, and certainly not both. When the subject is the distance between the decision and its outcome, bigger isn’t better.

We Deliver Decisions (Who Needs Architects?)

[Image: broken window]

What do medicine, situational awareness, economics, confirmation bias, and value all have to do with the architectural design of software systems?

Quite a lot, actually. To connect the dots, we need to start from the point of view that the architecture is essentially a set of design decisions intended to solve a problem. The architecture of that problem consists of a set of contexts. The fitness of a solution architecture will depend on how well it addresses the problem architecture. While challenges will emerge in the course of resolving a set of contexts, understanding up front what can be known provides reserves to deal with what cannot.

About two weeks ago, during a Twitter discussion with Greger Wikstrand, I mentioned that the topic (learning via observational studies rather than controlled experiment) coincided with a post I was publishing that day, “First Do No Harm – the Practice of Software Development” (comparing software development to medicine). That triggered the following exchange:

A few days later, I stumbled across a reference to Frédéric Bastiat‘s classic essay on economics, “What Is Seen and What Is Not Seen”. For those that aren’t motivated to read the work of 19th century French economists, it deals with the concepts of opportunity costs and the law of unintended consequences via a parable that attacks the notion that broken windows benefit the economy by putting glaziers to work.

A couple more days went by and Greger posted “Confirmation bias in software engineering” on the pitfalls of being too willing to believe information that conforms to our own preferences. That same day, I posted “Let’s Talk Value (Who Needs Architects?)”, discussing the effects of perception on determining value. Matt Ballantine made a comment on that post, and coincidentally, “confirmation bias” came up again:

I think it’s always going to be a balance of expediency and pragmatism when it comes to architecture. And to an extent it relates back to your initial point about value – I’m not sure that value is *anything* but perception, no matter what logic might tell us. Think about the things that you truly value in your life outside of work, and I’d wager that few of them would fit neatly into the equation…

So why should we expect the world of work to be any different? The reality is that it isn’t, and we just have a fashion these days in business for everything to be attributable to numbers that masks what is otherwise a bunch of activities happening under the cognitive process of confirmation bias.

So when it comes to arguing the case for architecture, despite the logic of the long-term gain, short term expedience will always win out. I’d argue that architectural approaches need to flex to accommodate that… http://mmitii.mattballantine.com/2012/11/27/the-joy-of-hindsight/

The common thread through all this is cognition: perceiving and making sense of circumstances, then deciding how best to respond. The quality of the decision(s) will be directly related to the quality of the cognition. Failing to take a holistic view (big picture and details, not either-or) will impair our perception of the problem, sabotaging our ability to design effective solutions. Our biases can lead us to embrace fallacies like the one in Bastiat’s parable, while stakeholders will likely be sensitive to the opportunity costs of avoidable architectural refactoring (an unintended consequence of applying YAGNI at the architectural level). That sensitivity will color their perception of the value of the solution, and their perception is the one that counts.

Making the argument that you did well by costing someone money is a lot easier in the abstract than it is in reality.

Engineer, Get Over Yourself

[Image: Tacoma Narrows Bridge collapse]

Ian Bogost’s “Programmers: Stop Calling Yourselves Engineers” in The Atlantic claims that “The title ‘engineer’ is cheapened by the tech industry.” He goes on to state:

When it comes to skyscrapers and bridges and power plants and elevators and the like, engineering has been, and will continue to be, managed partly by professional standards, and partly by regulation around the expertise and duties of engineers. But fifty years’ worth of attempts to turn software development into a legitimate engineering practice have failed.

Those engineering disciplines are subject to both professional and legal regulations, it’s true. That being said, bridges, buildings, and power plants (as well as power grids) are not immune to failures. Spectacular failures in regulated disciplines, even when they spur changes, can still recur.

Fifty years might seem like a long time, but compared to structural engineering it’s nothing. Young disciplines have a history of behavior that, in retrospect, seems insanely reckless (granted, he wasn’t a nuclear engineer, but at the time there weren’t any). Other disciplines, now respectable, have been in the same state previously.

Bogost complains about the lack of respect for certifications and degrees, but fails to make a case for their relevance. He even notes that the Accreditation Board for Engineering and Technology’s “accreditation requirements for computer science are vague”. Perhaps software development is too diverse (not to mention too much in flux) for a one-size-fits-all regulatory regime. Encouraging the move toward rigor, even if the pieces aren’t in place for the same style of regulation as older engineering disciplines, seems a better strategy than sneering about who should be allowed to use a title.

I’m all for increasing the quality of software development, especially in those areas (e.g. autonomous cars) that have life-safety implications. When I’m in the crosswalk, I’d prefer that the developer(s) of the navigation system considered themselves, and conducted themselves as, software engineers rather than craftsmen. By the same token, I’d prefer that the consideration be a function of real rigor and professional attitude rather than ticking boxes.

[h/t to Grady Booch for the link]

Hearts and Stars and Prison Riots (User Experience Matters)

So Twitter decided to make a change, and people have been reacting (and reacting to the reaction):

As Jeff Sussna noted, there’s a reason for the reaction:

In my old, pre-IT life, I saw that same cavalier attitude toward change cause a real-life riot (for the record, it was a jail riot rather than a prison riot, but whatever), complete with fires set, property damaged, and tear gas deployed. All for the want of a little notice.

Sometimes people react negatively to this type of change. The reaction might be stupid, but how much more stupid is it to trigger that type of reaction when you didn’t have to?

Let’s Talk Value (Who Needs Architects?)

[Image: gold bars]

Value is a term that’s heard often these days, but I wonder how well it’s understood. Too often, it seems, value is taken to mean raw benefit rather than its actual meaning, benefit after cost (i.e. “bang for the buck”). An even better understanding of the concept can be had from Tom Cagley’s “Breaking Down Value”: “Value = (Benefit + Perception) – (Cost + Perception)”.

The point being?

Change involves costs and, one would hope, benefits. Not only does the magnitude of the cost matter; so does its perception. Where a cost is seen as unnecessary, or as incurred to benefit someone other than the one paying the bills, its perception will likely be unfavorable. Changes that come about due to unforeseen circumstances are more likely to be seen as necessary than those stemming from foreseeable ones, and changes to accommodate known needs are the least likely to be seen as reasonable. This is why I’ve always maintained that YAGNI doesn’t scale beyond low-level design into the realm of architectural decisions. Where cost of change determines architectural significance, decision churn is problematic.
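
As a toy illustration of Cagley’s formula (my own sketch; the numbers are invented), the same out-of-pocket cost can yield very different perceived value depending on how that cost is perceived:

```python
def value(benefit: float, benefit_perception: float,
          cost: float, cost_perception: float) -> float:
    """Cagley's framing: Value = (Benefit + Perception) - (Cost + Perception).
    Perception acts as an adjustment on each side of the ledger."""
    return (benefit + benefit_perception) - (cost + cost_perception)

# Identical raw numbers; refactoring whose cost is seen as avoidable
# (positive cost perception) is valued lower than new functionality.
print(value(benefit=100, benefit_perception=0, cost=50, cost_perception=30))  # 20
print(value(benefit=100, benefit_perception=20, cost=50, cost_perception=0))  # 70
```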

After I posted “Who Needs Architects? Who’s Minding the Architecture?”, Charlie Alfred tweeted:

One way to be seen as an asset is to provide value. As Cesare Pautasso put it:

This is not to say that architectural refactoring is without value, but that refactoring will be seen as redundant work. When that work is for foreseeable needs, it will be perceived as costlier and less beneficial than strictly new functionality. Refactoring to accommodate known needs will suffer an even greater perception problem.

YAGNI presumes that the risk of flexible design being unnecessary outweighs the risk of refactoring being unnecessary. In my opinion, that is far too simplistic a view. Some functionality and qualities may be speculative, but the need for others (e.g. security) will be more certain.

Studies have shown that the ability to modify an application is a prime quality concern for stakeholders. Flexible design enables easier and cheaper change. “Big bang” changes (expensive and painful) are more likely where coherent design is lacking. Holistic design based on context seems to provide more value (both tangible and perceived) than a dogma-driven process of stringing together tactical decisions and hoping for the best.