Microservice Mistakes – Complexity as a Service

Michael Feathers’ tweet about technical empathy packs a lot of wisdom into 140 characters. Lack of technical empathy can lead to a system that is harder to both implement and maintain, because no thought was given to simplifying things for the caller. Maintainability is one of those quality-of-service requirements that appears to be a purely technical consideration right up to the point that it begins significantly affecting development time. If this sounds suspiciously like technical debt to you, then move to the head of the class.

The issue of technical empathy is particularly on point given the popular interest in microservice architectures (MSAs). The granular nature of the MSA style brings many benefits, but it also comes with costs, increased complexity among them. Michael Feathers referred to this in a post from last summer titled “Microservices Until Macro Complexity”:

It is going to be interesting to see how this approach scales. Some organizations have a relatively low number of microservices. Others are pushing higher, around the 600 mark. This is a bit beyond the point where people start seeking a bigger picture. If services are often bigger than classes in OO, then the next thing to look for is a level above microservices that provides a more abstract view of an architecture. People are struggling with that right now and it was foreseeable. Along with that concern, we have the general issue of dealing with asynchrony and communication patterns between services. I strongly believe that there is a law of conservation of complexity in software. When we break up big things into small pieces we invariably push the complexity to their interaction.

The last sentence bears repeating: “When we break up big things into small pieces we invariably push the complexity to their interaction”. Breaking a monolith into microservices simplifies each individual component; however, as Eran Hammer observed in “The Fallacy of Tiny Modules”, “…at some point, someone has to put it all together and then, all the complexity is going to surface…”. As Yamen Sader illustrated in his slide deck “Microservices 101 – The Big Why” (slide #26), the structure of the organization, domain, and system will diverge in an MSA. The implication of this is that for a given domain task, we need to know more about the details of that task (i.e. have less encapsulation of the internals) in a microservice environment.

The further removed the consumers of these services are from the providers, the harder and less convenient it will be to transfer that knowledge. To put this in perspective, consider two fast food restaurants: one operates in a traditional manner where money is exchanged for burgers; the other is a microservice style operation where you must obtain the beef, lettuce, pickles, onion, cheese, and buns separately, after which the cooking service combines them (provided the money is there also). The second operation will likely be in business for a much shorter period of time in spite of the incredible flexibility it offers. Additionally, that flexibility only truly exists when the contracts between two or more providers are swappable. As Ben Morris noted in “Do Microservices create extra challenges for distributed development?”:

Each team will develop its own interpretation of the system and view of the data model. Any service collaboration will have to involve some element of translation or mapping and will be vulnerable to subtle bugs that can arise from semantic differences.

Adopting a canonical model across the platform can help to address this problem by asserting a common vocabulary. Each service is expected to exchange information based on a shared definition of the main entities. However, this requires a level of cross-team governance which is very much at odds with the decentralised nature of microservices.
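
To put some code to the fast-food analogy above, here is a minimal sketch (hypothetical services and interfaces, not drawn from any of the posts quoted here) of what a single “order a burger” task looks like when the caller has to orchestrate the fine-grained services itself:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Each interface stands in for a separate microservice the caller must know about.
interface InventoryService {
    UUID reserve(String ingredient);            // reserve one ingredient, returns a reservation id
}

interface PaymentService {
    void charge(String customerId, int cents);  // take the money
}

interface CookingService {
    String cook(List<UUID> reservations);       // combine the reserved ingredients
}

// The caller now owns the recipe, the call ordering, and the error handling that a
// single "chunkier" burger service would have encapsulated.
class BurgerOrderingClient {
    private final InventoryService inventory;
    private final PaymentService payments;
    private final CookingService kitchen;

    BurgerOrderingClient(InventoryService inventory, PaymentService payments, CookingService kitchen) {
        this.inventory = inventory;
        this.payments = payments;
        this.kitchen = kitchen;
    }

    String orderBurger(String customerId) {
        // Knowledge of the task's internals has leaked to the consumer...
        List<String> ingredients = List.of("beef", "lettuce", "pickles", "onion", "cheese", "bun");
        List<UUID> reservations = new ArrayList<>();
        for (String ingredient : ingredients) {
            // ...and every call is another chance for latency, partial failure, or semantic mismatch.
            reservations.add(inventory.reserve(ingredient));
        }
        payments.charge(customerId, 599);
        return kitchen.cook(reservations);
    }

    public static void main(String[] args) {
        BurgerOrderingClient client = new BurgerOrderingClient(
                ingredient -> UUID.randomUUID(),                                      // stub inventory
                (customerId, cents) -> { },                                           // stub payments
                reservations -> "burger (" + reservations.size() + " ingredients)");  // stub kitchen
        System.out.println(client.orderBurger("customer-42"));
    }
}
```

Every one of those calls is a place where the “translation or mapping” Morris describes has to happen, and where the complexity Hammer warned about resurfaces in the caller’s code.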

None of this is meant to say that the microservice architecture style is “bad” or “wrong”. It is my opinion, however, that MSA is first and foremost an architectural style of building applications rather than systems of systems. This isn’t an absolute; I could see multiple MSA applications within an organization sharing component microservices. Where those microservices transcend the organizational boundary, however, making use of them becomes more complex. Actions at a higher level of granularity, such as placing an order for some product or service, involve coordinating multiple services in the MSA realm rather than consuming one “chunkier” service. While the principles behind microservice architectures are relevant at higher levels of abstraction, a mismatch in granularity between the concept and implementation can be very troublesome. Maintainability issues are both technical debt and a source of customer dissatisfaction. External customers are far easier to lose than internal ones.

Form Follows Function on SPaMCAST 323

I’m back with another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

SPaMCast 323 features Tom’s “Five Factors Leading to Failing With Agile”, my “Microservice Principles and Enterprise IT Architecture” and an installment of Jo Ann Sweeny’s column, “Explaining Communication”.

Enjoy!

Microservices vs SOA – Is there any real difference?

Microservice architecture has been the hot topic for 2014, so I suppose it’s appropriate that it be the subject for what I intend to be my last post until 2015. Last week, Kelly Sommers kicked off an active discussion of the nature of microservices vis-a-vis SOA.

This isn’t really a new observation. Before Lewis and Fowler published their final installment of the post that got everyone talking, Steve Jones had already published “Microservices is SOA, for those who know what SOA is”. Even Adrian Cockcroft, in response to Sommers, noted as much.

And yet, they’re different. In a post written for Iasa, “Microservices – The Return of SOA?”, I quoted Anne Thomas Manes’ “SOA is Dead; Long Live Services”:

Successful SOA (i.e., application re-architecture) requires disruption to the status quo. SOA is not simply a matter of deploying new technology and building service interfaces to existing applications; it requires redesign of the application portfolio. And it requires a massive shift in the way IT operates. The small select group of organizations that has seen spectacular gains from SOA did so by treating it as an agent of transformation. In each of these success stories, SOA was just one aspect of the transformation effort. And here’s the secret to success: SOA needs to be part of something bigger. If it isn’t, then you need to ask yourself why you’ve been doing it.

As I stated in that post:

Part of the problem may be that both SOA and microservices have aspects that transcend boundaries. Manes’ quote above makes it clear that SOA is concerned with enterprise IT architecture. Microservices are primarily an application architecture pattern.

Microservices can be reused by multiple applications, but need not be. SOA’s emphasis was at a higher level of abstraction. While the two share a great many principles, they apply them at different scales (application and solution architecture versus enterprise IT architecture).

Last week’s Twitter stream did produce some examples of principles that are, if not unique, then at least more heavily emphasized in the microservice style. From Marco Vermeulen:

Eugene Kalenkovich’s post, “Can I Haz Name?”, captures the essence (in my opinion) of this style: “Independent Scalability, Independent Lifecycle and Independent Data”. It’s not about the lines of code, but about the separation of concerns within the context of the application. These same “independences” are important to SOA, but they are what define a microservice architecture.

Who Needs Architects? – Navigating the Fractals

In my last post, “Microservice Principles and Enterprise IT Architecture”, I mentioned how Ruth Malan frequently notes that design is fractal. In other words, “…a natural phenomenon or a mathematical set that exhibits a repeating pattern that displays at every scale”. Software systems can generally be decomposed into sub-systems, components, classes and methods, all the way down into individual lines of code. Concerns that apply to the more granular levels (cohesion/focus, separation of concerns, managing dependencies, etc.) also apply as the level of abstraction increases.

It is important to note that software systems do not exist in a vacuum, but are themselves components of solutions, possibly of a system of systems, and definitely of an enterprise’s IT architecture. Those same concerns that apply to the hierarchy of structures within a given system also apply to the hierarchy of structures to which the system belongs. To quote Ruth again:

Russell Ackoff urged that to design a system, it must be seen in the context of the larger system of which it is part. Any system functions in a larger system (various larger systems, for that matter), and the boundaries of the system — its interaction surfaces and the capabilities it offers — are design negotiations. That is, they entail making decisions with trade-off spaces, with implications and consequences for the system and its containing system of systems. We must architect across the boundaries, not just up to the boundaries. If we don’t, we naively accept some conception of the system boundary and the consequent constraints both on the system and on its containing systems (of systems) will become clear later. But by then much of the cast will have set. Relationships and expectations, dependencies and interdependencies will create inertia. Costs of change will be higher, perhaps too high.

Ruth illustrates this relationship using this diagram:

[Diagram: System Context illustration, by Ruth Malan]

Couple this macro-illustration with the graphic breakdown of application structure from Savita Pahuja’s “Relation of Agility and Modularity”, and it should be obvious that architecture (in the form of structure and relationships) is present at all levels of abstraction.

[Diagram: System Granularity illustration]

This fractal nature confirms what Savita states: “Agility can only be realized when the underlying entity – the organization or the software product, has structural modularity”. In other words, if you can’t break it down, replace or re-configure it, your ability to change is constrained. Moreover, modularity at one level of abstraction can be defeated by a lack of modularity at a higher level of abstraction.

While this shows that it’s “architecture all the way down” (with apologies to the turtles), what does that say about the need for architectural design?

I’ve long been skeptical of the idea that a coherent design can reliably “emerge” from a disparate group of people doing “the simplest thing that can possibly work”. Darwinian evolution is a story set in blind alleys carpeted in the corpses of failures. With millions of years and an unlimited budget, you might develop something in this manner, but lacking that, you need to cheat (i.e. design intentionally).

Alistair Cockburn has recently posted about just that, mixing intentional design with Test Driven Development (TDD) in pursuit of better results (which he felt he got). Kevin Rutherford’s “TDD for teams” related how Cockburn’s efforts seemed to spur some counter posts to defend TDD from the assault of up-front thinking. It appears that some feel that thinking throughout coding is antithetical to spending anything more than a trivial amount of time on thought before coding. Rutherford notes:

So now we have 5 different design approaches to one toy kata from 4 developers, all in the space of one weekend. (And there are probably more that I haven’t heard about.) Of course, it would have been weird if they had all produced identical designs. We are all different, and that’s definitely a good thing. But I worry that this can cause problems for teams doing TDD.

Rutherford then, in my opinion, captures the essence of the problem:

If a team is to produce software that exhibits internal consistency and is understandable by all of its developers, then somehow these individual differences must be surrendered or homogenized in some way. Somehow the team must create designs — and a design style — that everyone agrees on.

As one travels up the hierarchy, one layer’s internal consistency is predicated on the consistency of the external interfaces of the layer below. Traditionally, that consistency has been the realm of architects. Success or failure in achieving that consistency, in my opinion, is determined by the appropriateness of the influence exercised. An attempt to micromanage via Big Design Up Front (BDUF) will likely fail due to the information deficits inherent in trying to control far beyond the extent of your comprehension. Abstraction exists to allow for broader understanding at lower resolution. By the same token, I would question the logic of relying on luck to maintain consistency between a system and its ecosystem. A balance is necessary to match the system well to the context it will inhabit.

Microservice Principles and Enterprise IT Architecture

Ruth Malan is fond of noting that “design is fractal”. In a comment on her post “We Just Stopped Talking About Design”, she observed:

We need to get beyond thinking of design as just a do once, up-front sort of thing. If we re-orient to design as something we do at different levels (strategic, system-in-context, system, elements and mechanisms, algorithms, …), at different times (including early), and iteratively and throughout the development and evolution of systems, then we open up the option that we (can and should) design in different media.

This fractal nature is illustrated by the fact that software systems and systems of systems belonging to an organization exist within an ecosystem dominated by that organization, which is itself a system of systems of the social kind operating within a larger ecosystem (i.e. the enterprise). Just as structure follows strategy and then becomes a constraint on strategy going forward, the architectures of the systems that make up the IT architecture of the enterprise influence its character (for good or bad) and vice versa. Likewise, the IT architecture of the enterprise and the architecture of the enterprise itself are mutually influencing. Ruth again:

Of course, I don’t mean we design the (entire business) ecosystem…We can, though, design interventions in the ecosystem, to shift value flows and support the value network. You know, like supporting the development community that will build apps on your platform with tooling and APIs, or creating relationships with content providers, that sort of thing. And more.

So what does any of this have to do with the principles behind the microservice architectural style?

Separation of concerns, modularity, scalability, DRY-ness, high-cohesion, low coupling, etc. are all recognized virtues at the application architecture level. As we move into the levels of solution architecture and enterprise IT architecture (EITA), these qualities, in my opinion, remain valuable. Many of the attributes of the microservice style embody these qualities. In particular, componentization at the application level (i.e. systems as components in a system of systems) and focus on a particular business capability both enhance agility at the solution architecture and EITA levels of abstraction.

Conventionally, the opposite of a microservice has been termed a “monolith” when discussing microservice architecture. Robert Annett, in “What is a Monolith?”, takes an expansive view of the term, listing three ways in which an application’s architecture can be monolithic. He notes that the term need not be pejorative. Chris Carroll, in “A Single Deployment Target is not a Monolith”, prefers the traditional definition, an application suffering from insufficient modularity. He notes that an application whose components run in process on the same machine can be loosely coupled and modular. This holds true when considering the application’s architecture, but begins to falter when considering the architecture of a solution and more so at the EITA level.

In my opinion, applications that encompass multiple business capabilities, even when well designed internally, can be described as monolithic at higher levels of abstraction. This need not be considered a bad thing; where those multiple capabilities are organizationally cohesive, more granular componentization may not pass a cost/benefit analysis. However, where the capabilities are functionally disparate, the potential for redundancy in function, lack of organizational alignment, process mismatch, and data integrity issues all become significant. In a previous post, “Making and Taming Monoliths”, I presented a hypothetical solution architecture illustrating most of these issues. This was coupled with an example of how that could be remedied in an incremental manner (it should be noted that the taming of a solution architecture monolith is most likely to succeed where the individual applications are internally well modularized).

Higher level modularity and DRY-ness enhance both solution architecture and EITA. This applies in terms of code, and it applies equally in terms of data.

Modularity at the level of solution architecture and EITA is also important in terms of process. Whether the model is Bi-Modal IT or Pace-Layered, it is becoming more and more apparent that no one process will fit the entire enterprise (and for what it’s worth, I agree with Simon Wardley that Bi-Modal is a mode short). Having disparate business capabilities reside within the same application increases the risk of collisions where the process used is inappropriate to the domain. When dealing with Conway’s Law, it’s useful to remember “I Fought the Law, and the Law Won”.

Even without adopting a pure microservice architecture for any application, adopting some of the principles behind the style can be useful. Reducing redundant code and data reduces risk and allows teams to concentrate on the core capabilities of their application. Modular solution and enterprise IT architectures have more flexibility in terms of deployment, release schedules, and process. Keeping applications tightly focused on business capabilities allows you to use Conway’s Law to your advantage. Not every organization is a Netflix, but you may be able to profit by their example.

Microservices Under the Microscope

Buzz and backlash seem to describe the technology circle of life. Something (language, process, platform, etc.; it doesn’t seem to matter) gets noticed, interest increases, then the reaction sets in. This was seen with Service-Oriented Architecture (SOA) in the early 2000s. More was promised than could ever be realistically attained, and eventually the hype collapsed under its own weight (with a little help from the economic downturn). While the term SOA has died out, services, being useful, have remained.

2014 appears to be the year of microservices. While neither the term nor the architectural style itself is new, James Lewis and Martin Fowler’s post from earlier this year appears to have significantly raised the level of interest in it. In response to the enthusiasm, others have pointed out that the microservices architectural style, like any technique, involves trade-offs. Michael Feathers pointed out in “Microservices Until Macro Complexity”:

If services are often bigger than classes in OO, then the next thing to look for is a level above microservices that provides a more abstract view of an architecture. People are struggling with that right now and it was foreseeable. Along with that concern, we have the general issue of dealing with asynchrony and communication patterns between services. I strongly believe that there is a law of conservation of complexity in software. When we break up big things into small pieces we invariably push the complexity to their interaction.

Robert “Uncle Bob” Martin has recently been a prominent voice questioning the silver bullet status of microservices. In “Microservices and Jars”, he pointed out that applications can achieve separation of concerns via componentization (using jars/Gems/DLLs depending on the platform) without incurring the overhead of over-the-wire communication. According to Uncle Bob, by using a plugin scheme, components can be as independently deployable as a microservice.
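
As a rough illustration of that kind of plugin scheme (a sketch only, using Java’s standard ServiceLoader mechanism rather than anything specific from Uncle Bob’s post), the host application compiles against nothing but an interface and discovers implementations packaged in separate jars at runtime:

```java
import java.util.ServiceLoader;

// Hypothetical plugin contract: the only type the host application compiles against.
public interface PaymentPlugin {
    String name();
    void charge(String customerId, int cents);
}

// Each implementation ships in its own jar and registers itself under
// META-INF/services/PaymentPlugin; swapping jars swaps behavior without
// recompiling the host.
class PluginHost {
    public static void main(String[] args) {
        ServiceLoader<PaymentPlugin> plugins = ServiceLoader.load(PaymentPlugin.class);
        for (PaymentPlugin plugin : plugins) {
            System.out.println("Loaded payment plugin: " + plugin.name());
            plugin.charge("customer-42", 599);
        }
    }
}
```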

Giorgio Sironi responded with the post “Microservices are not Jars”. In it, Giorgio pointed out that independent deployment is only part of the equation; independent scalability is possible via microservices but not via plugins. Giorgio questioned the safety of swapping out libraries, but I can vouch for the fact that plugins can be hot-swapped at runtime. One important point made was in regard to this quote from Uncle Bob’s post:

If I want two jars to get into a rapid chat with each other, I can. But I don’t dare do that with a MS because the communication time will kill me.

Sironi’s response:

Of course, chatty fine-grained interfaces are not a microservices trait. I prefer accept a Command, emit Events as an integration style. After all, microservices can become dangerous if integrated with purely synchronous calls so the kind of interfaces they expose to each other is necessarily different from the one of objects that work in the same process. This is a property of every distributed system, as we know from 1996.

Remember this for later.
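
For illustration, here is a minimal sketch (hypothetical types, not drawn from Sironi’s post) of the “accept a Command, emit Events” style he describes: the service takes one coarse-grained message and publishes events, rather than getting into a rapid, fine-grained chat with its callers:

```java
import java.util.UUID;
import java.util.function.Consumer;

// One command in...
record PlaceOrder(UUID orderId, UUID customerId, UUID productId, int quantity) {}

// ...zero or more events out.
record OrderAccepted(UUID orderId) {}
record OrderRejected(UUID orderId, String reason) {}

class OrderService {
    private final Consumer<Object> eventPublisher; // stands in for a broker, queue, or bus

    OrderService(Consumer<Object> eventPublisher) {
        this.eventPublisher = eventPublisher;
    }

    // The caller never blocks on the service's internals; it learns the outcome
    // from the events that are eventually published.
    void handle(PlaceOrder command) {
        if (command.quantity() <= 0) {
            eventPublisher.accept(new OrderRejected(command.orderId(), "quantity must be positive"));
            return;
        }
        // ...persist the order, reserve stock, etc. ...
        eventPublisher.accept(new OrderAccepted(command.orderId()));
    }

    public static void main(String[] args) {
        OrderService service = new OrderService(event -> System.out.println("published: " + event));
        service.handle(new PlaceOrder(UUID.randomUUID(), UUID.randomUUID(), UUID.randomUUID(), 2));
    }
}
```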

Uncle Bob’s follow-up post, “Clean Micro-service Architecture”, concentrated on scalability. It made the point that microservices are not the only method for scaling an application (true) and stated that “the deployment model is a detail” and “details are never part of an architecture” (not true, at least in my opinion and that of others).

While Uncle Bob may consider the idea of designing for distribution to be “BDUF Baloney”, that’s wrong. That’s not only wrong, but he knows it’s wrong – see his quote above re: “a rapid chat”. In the paper that’s referenced in the Sironi quote above, Waldo et al. put it this way:

We argue that objects that interact in a distributed system need to be dealt with in ways that are intrinsically different from objects that interact in a single address space. These differences are required because distributed systems require that the programmer be aware of latency, have a different model of memory access, and take into account issues of concurrency and partial failure.

You can design a system with components that can run in the same process, across multiple processes, and across multiple machines. To do so, however, you must design them as if they were going to be distributed from the start. If you begin chatty, you will find yourself jumping through hoops to adapt to a coarse-grained interface later. If you start with the assumption of synchronous and/or reliable communications, you may well find a lot of obstacles when you need to change to a model that lacks one or both of those qualities. I’ve seen systems that work reasonably well on a single machine (excluding the database server) fall over when someone attempts to load balance them because of a failure to take scaling into account. Things like invalidating and refreshing caches as well as event publication become much more complex starting with node number two if a “simplest thing that can work” approach is taken.
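
As a rough sketch of the difference (hypothetical interfaces, not taken from any of the posts above), compare a chatty interface that assumes cheap in-process calls with a coarse-grained one designed as if it might be remote from the start:

```java
import java.util.UUID;

// Chatty: five round trips for one screen's worth of data. Harmless in-process,
// painful once every call crosses a network boundary.
interface ChattyCustomerService {
    String getFirstName(UUID customerId);
    String getLastName(UUID customerId);
    String getStreet(UUID customerId);
    String getCity(UUID customerId);
    String getPostalCode(UUID customerId);
}

// Coarse-grained: one round trip returns everything the caller needs, and the
// checked exception forces callers to deal with the possibility of failure.
record CustomerSummary(UUID id, String fullName, String mailingAddress) {}

class RemoteUnavailableException extends Exception {
    RemoteUnavailableException(String message) { super(message); }
}

interface CoarseGrainedCustomerService {
    CustomerSummary getCustomerSummary(UUID customerId) throws RemoteUnavailableException;
}
```

The second shape costs little when everything runs in one process; the first has to be redesigned, not just redeployed, once it no longer does.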

Distributed applications in general and microservice architectures in particular are not universal solutions. There are costs as well as benefits to every architectural style and sometimes having everything in-process is the right answer for a given point in time. On the other hand, you can’t expect to scale easily if you haven’t taken scalability into consideration previously.

“Microservices: The Return of SOA?” on Iasa Global

In the early 2000s, Service-Oriented Architecture (SOA) was the hot topic. Much ink was spilled in touting its potential. Effort and money were expended in attempts to secure its promised benefits. Like object-orientation and reuse before it, the reality of SOA was not able to live up to the hype. Unlike them, SOA had better penetration into the business realm and, having soared higher in the corporate hierarchy, fell further into disrepute. Before the decade was done, SOA had become a dirty word (although services had become ubiquitous).

Read “Microservices: The Return of SOA?” on Iasa Global for my take on how microservices compares to SOA (for better or worse).

Making and Taming Monoliths

In Meditation XVII of Devotions upon Emergent Occasions, John Donne wrote the immortal line “No man is an island, entire of itself; every man is a piece of the continent, a part of the main.” If we skip ahead nearly 400 years and change our focus from mortality to information systems (lots of parallels there), we might re-word it like this: No system is an island, entire of itself; every system is a piece of the enterprise, a part of the main.

In “Carving it up – Microservices, Monoliths, & Conway’s Law”, I discussed how the new/old idea of microservice architectures related to the more monolithic classic style of application architecture. An important take-away was the applicability of Conway’s Law to application and solution architectures. Monoliths are monolithic for a reason; a great many business activities are dependent on other business activities performed by actors from other business units. In Ruth Malan’s words: “The organizational divides are going to drive the true seams in the system.” Rather than a miscellaneous collection of business capabilities, monoliths typically represent related capabilities stitched together in one application.

The problem with monoliths is that there is a tendency for capabilities/data to be duplicated across multiple systems. The dominant concern of one system (e.g. products in an inventory system) will typically be a subordinate concern in another (e.g. products in an order fulfillment system). This leads to different units using different systems based on their needs vis-a-vis a particular concern. Without adequate governance, this leads to potential conflicts over which system is authoritative regarding data about those concerns.

Consider the following diagram of a hypothetical family of systems consisting of an order management system and a system to track the outsourced activities of the subset of orders that are fulfilled by vendors. Both use miscellaneous data (such as the list of US states and counties) that may or may not be managed by a separate system. They also use and maintain duplicate data from other systems. The order system does this with data from the pricing, customer management, and product management systems. It also exports data for manual import into the accounts receivable system. The outsourced fulfillment system does this with data from the product management and vendor management systems as well as the order system. It exports data for manual import into the accounts payable system. Because of the lack of integration, there is redundant data with no clear ability to declare any one set as the master one. Additionally, there is redundant code as the functions of maintaining this data are repeated (and likely repeated in an inconsistent manner) across multiple systems.

[Diagram: Monolithic Systems]

There are different responses to the issue of uncontrolled data duplication, ranging from “hope for the best” (not recommended) to various database replication schemes. Jeppe Cramon’s “Microservices: It’s not (only) the size that matters, it’s (also) how you use them – part 4” outlines the advantages of publishing data as events from authoritative source systems for use by consumer systems. This provides near real-time updates and avoids the pitfall of trying to treat services like a database (hint: latency will kill you). Caching this data locally should not only improve performance but also allow the consumer applications to manage attributes of those concerns that are unique to them with referential integrity (if desired).
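
A minimal sketch of that approach (hypothetical types and method names, not taken from Cramon’s post): the authoritative product system publishes a change event, and a consuming system keeps a local cache up to date from those events instead of querying the source synchronously on every request:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Event published by the authoritative product management system whenever a product changes.
record ProductChanged(UUID productId, String name, String sku) {}

// Consumer-side cache: the order system keeps a local, read-only copy of product data,
// refreshed by events rather than by synchronous calls to the product system.
class ProductCache {
    private final Map<UUID, ProductChanged> products = new ConcurrentHashMap<>();

    // Invoked by whatever mechanism delivers the events (an ESB, a message broker, etc.).
    void onProductChanged(ProductChanged event) {
        products.put(event.productId(), event);
    }

    // Local lookups stay fast and keep working even if the product system is down;
    // the trade-off is eventual, rather than immediate, consistency.
    ProductChanged find(UUID productId) {
        return products.get(productId);
    }

    public static void main(String[] args) {
        ProductCache cache = new ProductCache();
        UUID id = UUID.randomUUID();
        cache.onProductChanged(new ProductChanged(id, "Widget", "SKU-001"));
        System.out.println(cache.find(id));
    }
}
```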

In the diagram below, the architecture of this family of systems has been rationalized by the introduction of different types of services appropriate to the different use cases. The order system now makes use of the pricing system solely as a service (with its very narrow focus, the pricing system is definitely amenable to the microservice architectural style) and no longer contains data or code related to that concern; it contains only enough code to trigger asynchronous messages to the accounts receivable system. Likewise, the outsourced fulfillment system contains only a thin shell for the accounts payable system. Other concerns (customers, vendors, products, etc.) are now managed by their respective owning systems and updates are published asynchronously as events. The mechanism for distributing these events (the cloud element in the diagram) is purposely not named as there are multiple ways to accomplish this (ESBs are one mechanism, but far from the only one). While the consuming systems cache the data locally, much of the code that previously was required to maintain them is no longer needed. In many cases, entire swathes of “administration” UI can be done away with. Where necessary, it can be pared down to only edit data related to those concerns that is both unique to and restricted to the consuming application.

[Diagram: the rationalized systems]

I will take the time to re-emphasize the point that not all communication need be, nor should it be, the same. Synchronous communication (shown as a solid line in the diagram) may be appropriate for some usages (particularly where an end user is waiting on an immediate success/fail type of response). An event-driven style (shown as a dotted line in the diagram) will be much more performant and scalable in many more situations. As I noted in “Coordinating Microservices – Playing Well with Others”, designing and coding distributed applications in the same manner as a monolith will result in a web of service dependencies, a distributed big ball of mud. Coupling that is a code smell in a monolith can become a crippling issue in distributed systems.

Disassembling monoliths in this manner is complex. It can be done incrementally, removing the need for expensive “big bang” initiatives. There is, however, a need for strategy and governance at the right level of detail (micromanagement and too-granular design are still unlikely to yield the best results). Done well, you can wind up with both better data integrity and less code to maintain.

Technical Debt – Why not just do the work better?

A comment from Tom Cagley was the catalyst for my post “Design Communicates the Solution to a Problem”, and he’s done it again. I recently re-posted “Technical Debt – What it is and what to do about it” to the Iasa Global blog, and Tom was kind enough to tweet a link to it. In doing so, he added a little (friendly) barb: “Why not just do the work better?” My reply was “It’s certainly the best strategy, assuming you can“.

Obviously, we have a great deal of control over intentional coding shortcuts. It’s not absolute control; business considerations can intrude, despite our best advice to the contrary. There are a host of other factors that we have less control over: aging platforms, issues in dependencies, changing runtime circumstances (changes in load, usage patterns, etc.), and increased complexity due to organic growth are all factors that can change a “good” design into a “poor” one. As Tom has noted, some have extended the metaphor to refer to these as “Quality Debt, Design Debt, Configuration Management Debt, User Experience Debt, Architectural Debt”, etc., but in essence, they’re all technical deficits affecting user satisfaction.

Unlike coherent architectural design, these other forms of technical debt emerge quite easily. They can emerge singly or in concert. In many cases, they can emerge without your doing anything “wrong”.

Compromise is often a source of technical debt, both in terms of the traditional variety (code shortcuts) and in terms of those I listed above. While it’s easy to say “no compromises”, reality is far different. While I’m no fan of YAGNI, at least the knee-jerk kind, over-engineering is not the answer either. Not every application needs to be built for hundreds of thousands of concurrent users. This is compromise. Having insufficient documentation is a form of technical debt that can come back to haunt you if personnel changes result in loss of knowledge. How many have made a compromise in this respect? While it is a fundamental form of documentation, code is insufficient by itself.

A compromise that I’ve had to make twice in my career is the use of the shared database EAI pattern when an integration had to be in place and the target application offered no better mechanism. While I am absolutely in agreement that this method is sub-standard, the business need was too great (i.e. revenue was at stake). The risk was identified and the product owner was able to make an informed decision with the understanding that the integration would be revised as soon as possible. Under the circumstances, both compromises were the right decision.

In my opinion, taking the widest possible view of technical debt is critical. Identification and management of all forms of technical debt is a key component of sustainable evolution of a product over its lifetime. Maintaining a focus on the product as a whole, rather than on projects, provides a long-term view that should help in making better compromises – those that are tailored to getting the optimal value out of the product. Having tunnel vision on just the code leaves too much room for surprises.

[Soap or oil press ruins in Panayouda, Lesvos. Image by Fallacia83 via Wikimedia Commons.]

“Technical Debt – What it is and what to do about it” on Iasa Global Blog

In spite of all that’s been written on the subject of technical debt, it’s still a common occurrence to see it defined as simply “bad code”. Likewise, it’s still common to see the solution offered being “stop writing bad code”. Technical debt encompasses much more than that simplistic definition, so while “stop writing bad code” is good advice, it’s wholly insufficient to deal with the subject.

See the full post on the Iasa Global Blog (a re-post, originally published here).