Form Follows Function on SPaMCast 463

SPaMCAST logo

I’m back for another appearance on Tom Cagley’s Software Process and Measurement (SPaMCast) podcast.

This week’s episode, number 463, features Tom’s essay on big picture stories. This is followed by our Form Follows Function segment discussing my post “Management, Simple and Wrong – Semantics, Systems, and Self-Correction”. Jeremy Berriault’s QA corner finishes the cast with a segment on motivating testers.

In this installment, Tom and I talk about the way people approach complex (and often emotionally charged) subjects like management. Semantics, defining what we mean, is critical to keep discussions on track. Likewise, we need to be able to differentiate between a concept, differing theories around that concept, and just plain poor practice. Simplistic mental models are likely to generate much more heat than light. It’s not often that something good starts off “I got into a discussion on Twitter”, but this episode is the exception to the rule!

You can find all the SPaMCast episodes I’m in under the SPaMCast Appearances category on this blog. Enjoy!

Organizations as Systems – Kurosawa, Clausewitz, and Chess

16th Century Market Scene

In order to respond appropriately to the context we find ourselves in, we need to be able to define that context correctly. It’s something humans aren’t always good at.

Not too long ago, Sun Tzu’s The Art of War was all the rage among executives. While the book contains some excellent lessons that have applications beyond the purely military, as someone in my Twitter feed noted recently, “Business is not war”.

[Had I realized that the tweet, in combination with another article, would trigger something in my byzantine thought processes, I would have bookmarked it to give them credit – sorry!]

Business is, indeed, not war. In fact, one of the nuggets of wisdom to be found in Clausewitz’s treatise, On War, is that war is often not war. Specifically, what he is saying is that the reality of a concept often diverges from our (mis)understanding of that concept. Our perception is colored by factors such as our experience, beliefs, and interests. Additionally, our tendency to employ abstraction can be both tool and trap. Ignoring irrelevant detail can simplify reasoning about something, assuming that the detail ignored is actually irrelevant. Ignoring relevant detail can quickly lead to problems.

The game of chess illustrates this. Chess involves strategy and has its origins as an abstract simulation of war. Beyond promoting a very rudimentary type of strategic thought, chess is far from capable of simulating the complex social system of warfare. Perhaps if all the pieces were sentient and had both agency and agenda (bonus points for contradictory ones potentially conflicting with the player’s agenda), it might come closer. Perhaps if the boundaries of the arena were indeterminate, it might come closer. Perhaps if the state of the terrain, the composition and disposition of forces (friend, as well as foe), and the goals of the opponent were less transparent, it might come closer.

In short, the more certainty there is, the less accuracy there is. Where the human aspect is ignored or minimized, you may gain certainty, but it comes at the cost of losing contact with reality. Social systems are highly complex and treating them otherwise is like looking for a gas leak with a lighter – you may be able to do so, but your chances of liking the results are pretty small.

This post was originally planned for last week, but I stumbled into a Twitter conversation that illustrates my point (specifically re: leadership and management), so I wrote that first as a preamble. Systems of practice designed for a context where value equals effort expended are unlikely to work well in a knowledge work context where the relationship between effort and value is less direct (where, in fact, the value curve may invert past a certain point). Putting an updated veneer on the technique with data and algorithms won’t improve the results if the technique is fundamentally mismatched to the context (or if there is a disconnect between what you can measure and what you actually want). Sometimes, the most important thing to learn about management is when not to manage.

Disconnects between complex contexts and simplistic practices transcend the management of an organization, reaching into the very architecture of the enterprise itself (both in the organization and its relationship to its ecosystem). Poorly designed organizations (which includes those with no intentional design) can wind up with their employees faced with perverse incentives to act in a manner that conflicts with the best interests of the organization. When the employee is actually under pressure from the organization to sabotage the organization, the problem is not with the employee.

Just as with a software system, social systems have both problem and solution architectures. Likewise, in both cases the quality of the solution architecture depends on how well (or poorly) it addresses the architecture of the problem. Recognizing the various contexts in play and then resolving the conflicts between them (including the challenges that arise from resolving the original conflicts) is the essence of architectural design, regardless of the type of system (software or social). Rather than a static, one-time activity, it is an ongoing process of sensing system health and responding appropriately throughout the lifecycle of the system (in fact, stopping the process will likely hasten the end of that lifecycle by bringing about a state in which the system can no longer be corrected).

Management, Simple and Wrong – Semantics, Systems, and Self-Correction

Villain Caricature

Simple responses to complex situations are both seductive and dangerous. The difficulty in juggling lots of variables tempts us to employ abstraction so as to avoid being overwhelmed. Abraham Maslow’s observation, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail”, applies. Some things (e.g. landmines) react badly to being treated as if they were nails. Having more tools in the box may help avoid problems.

This isn’t the post I had in mind to write next, but it’s one that came about by accident (via a multi-day, mass-participant Tweet-storm, with my participation beginning here). I had planned an Organizations as Systems post re: multiple players in multiple contexts (competing, and possibly conflicting, goals and motivations), but I stumbled into a conversation that should provide a nice preamble to that post, which will follow this one.

Before I dive in, two quick notes:

  • Rather than try to summarize the entire conversation, I’m going to lay out what I brought to and took from it. There are far too many tweets and, as of this writing, I can’t be sure the conversation has concluded.
  • My thanks to everyone involved, whether named or not. This kind of civil, if contentious, dialog is much appreciated. When ideas rub together, it can produce irritation, but sometimes they also get polished.

Management is one of those things that, like landmines, tends to react badly to the hammer of simplistic thought. We can see this in managers who apply (or misapply) theories of management, particularly ones like scientific management (AKA Taylorism), to contexts where they are extremely inappropriate and counterproductive. Whether there really exists a context where Taylorism is actually appropriate or productive is a question for another day. We also see the hammer in reactions to abuses that dismiss all value of management out of hand. While the reaction is understandable, that doesn’t make it credible. The vicious circle just becomes more vicious; heat is generated without corresponding light.

One thing that’s necessary to pin down is what we mean by the term “management”. Are we talking about a concept (“…the administration of an organization…”)? Are we talking about the job/profession? Perhaps the discipline (branch of knowledge) or academic discipline (field of study) is what we’re talking about. We could be talking about a theory of management, or we could be talking about management practices, either individually or grouped into systems of management. Knowing specifically what’s being referred to is critical for evaluating statements. A very valid criticism of a specific theory or system (e.g. Taylorism) will likely fall apart when applied to the concept as a whole, because the concept is far broader and contrary examples are easily found.

Another issue relates to intent. Few would dispute that poor management practices are universally detrimental. Extracting the maximum possible effort from your employees is unlikely to generate the most value in the context of knowledge work. These practices are the very antithesis of fitness for purpose in that they do not materially benefit the organization and they alienate employees (which is yet another hit to the organization where the product is knowledge work). And yet, there are still managers who manage in that very manner. Are they, each and every one, evil? A simplistic answer, hard against either end of the spectrum, is almost surely going to be wrong. That being said, in my experience the distribution is skewed more towards the “no” side than not (just as I’ve found people who only perform when driven to it to be a very small minority).

Why would someone who wants to do their job well and in an ethical manner resort to practices that are harmful to all parties? With sadism eliminated as a motivation (there just aren’t enough sadists in the population to account for all the positions to be filled), the far more plausible answer would be culture, tradition, and/or lack of knowledge regarding alternatives. In short, when the outcome of a system doesn’t match the intent, there’s a bug in the system.

The disconnect between leadership and management is also a problem. Leadership, admittedly, is a concept distinct from management. It makes sense that not every leader needs to be a manager. The extent to which we as a society tolerate management absent leadership, however, is shocking and part of the problem. Consider a tweet from Esther Derby:

I would argue that steering and enabling can be considered leadership qualities as much as management activities. There’s a place for supervision and compliance; however, knowing how to achieve results without forcing the issue is, in my opinion, an extremely useful skill. This is not manipulation, but rather a matter of understanding goals and how to achieve them intelligently. It’s a matter of understanding how to resolve potential conflicts between the goals and motivations of an organization, groups, and individuals, and adapting the system so that the outcomes more closely track the intent. The alternative is allowing the system to degenerate into a web of perverse incentives that increase the gap between intent and outcome. This gap may benefit some individuals, but at a cost to other individuals and the organization as a whole.

Medicine is something that has been through a number of changes, large and small, by finding a way to adapt. While the concept of medicine (diagnosis, treatment, and prevention of disease and injury) has remained constant over time, the practices and theories have evolved greatly. The discipline itself has evolved so that it not only adapts to change, but adapts in as optimal a manner as possible. In short, it has developed a culture of learning.

Understanding organizations from a systems standpoint means recognizing the need for sensing the fit between the system and its contexts (learning) and then steering to correct any mismatches (management). Simplistic approaches to management (particularly relatively static ones that have little save tradition to recommend them) can only lead to a widening gap between the intended outcome and actual results. At some point, this gap becomes wide enough to swallow the organization.

[Villain Caricature by J.J. via Wikimedia Commons.]

This is not a project

Gantt Chart

My apologies to René Magritte, as I appropriate his point, if not his iconic painting.

My post “Storming on Design” sparked a discussion with theslowdiyer around context and change. In that discussion, theslowdiyer commented:

‘you don’t adhere to a plan for any longer than it makes sense to.’
Heh, agree. I wonder if the “plan as a tool” vs. “plan as a goal in itself” discussion isn’t deserving of a post of its own 🙂

Indeed it is, even if it did take me nearly four months to get to it.

The key concept to understand is that the plan is not the goal, merely a stated intention of how to achieve the goal (if this causes you to suspect that the words “plan” and “design” could be substituted for each other without changing the point, move to the head of the class). Magritte’s painting stated that the picture is not the thing. The map is not the territory (and if that concept seems a bit self-evident, consider the fact that Wikipedia considers it significant enough to devote over 1700 words, not counting footnotes and links, to the topic).

Conflating plan and goal is a common problem. To illustrate the difference, consider undergoing an operation. Is it your desire that the surgeon perform the procedure as planned or that your problem gets fixed? In the former scenario, your survival is optional.

This is not, however, to say that planning (or design) is useless. The output of an effective planning/design process is critical. As Joanna Young noted in her “Four Signs of Readiness – Or Not”:

I’m all for consigning the traditional 50+ pages of adminis-trivia on scope, schedule, budget, risks that requires signing in blood to the dustbin. However no organization should forego the thoughtful and hard work on determining what needs to be done, why, how, by whom, for how much – and how this will all be governed and measured as it proceeds through sprints and/or waterfalls to delivery.

The information derived from the process (not the form, not the presentation, but the information) is a critical tool for moving forward intelligently. If you have no idea of what to do, how to do it, who can do it, when, and for how much, you are adrift. You’re starting a trip with no idea of whether the gas tank has anything in it. Conversely, attempting to achieve 100% certainty from the outset is a fool’s errand. For any endeavor, more will be known nearer the destination. Plans without “wiggle room” are of limited usefulness, as you will drift outside the cone of uncertainty from the start and never get back inside.

Having a reasonable idea of what’s acceptable variance helps determine when it’s time to abandon the current plan and go with a revised one. Planning and design are processes, not events or even phases. It’s a matter of continually monitoring context and whether our intentions are still in accordance with reality. Where they differ, reality wins. Always.

Execution isn’t blindly marching forward according to plan. It’s surfing the wave of context.

Babies, Bathwater, and Software Architects

'Fixing Problems' - XKCD 1739

I try to be disciplined about my writing (picking themes, creating a backlog, collecting notes and links on those topics, etc.), but it seems like serendipity won’t be denied, no matter what I do. On the same day that XKCD published this cartoon, Erik Dietrich published “Software Architect as a Developer Pension Plan”. While I agree with a lot that Erik says in the post, I think he also gets into a “throwing out the baby with the bathwater” situation by dismissing the need for the software architect role, part of which the cartoon illustrates.

Before I go any further, I do need to point out that I’ve been following and interacting with Erik for about four years now. It’s a friendly disagreement, meant to be more debate than argument.

In the post, Erik makes the point (which I largely agree with) that companies see the role of software architect through a Taylorist lens. They see the software architect as a “thinker”, meant to drive a herd of “doers”:

The modern corporation, like Taylor before it, loosely divides into three categories of humans: owners/executives, managers, and grunts. Owners own and charge executives with executing their will. Executives delegate to managers. Managers think and assemble the workers into org charts designed to optimize efficiency. Workers keep their simple heads down and do as they’re told.

Theoretically, proponents would say, this applies to any domain and type of work. Corporations form themselves into these pyramids without considering any other options, whether they’re private military outfits, gigantic manufacturing facilities, or companies designing and selling software. Of course, the reality turns out to be that not all operations do well splitting thinking and labor. Let’s pick one at random. Oh, I dunno, say, writing software.

We try it. We hire junior developers to be directed by senior ones. And all of them fall in line under the watchful guise of a “tech lead,” who defers to a council of architects. And somewhere at the center of the organizational maelstrom stands The Lead Architect, directing elaborate bucket marches, water flows, and all sorts of magic, like Mickey Mouse in Fantasia. It’s a beautiful, cascading symphony of work flowing from the cleverest down to the simplest. Or… at least, that’s how it goes on paper.

Erik rightly argues that this model is fatally flawed. This is doubly so in the case of the example he gives, an individual offered a position as a software architect doing the thinking for a set of offshore “commodity” developers. Offshore development can be done well (that’s a post I’ll save for another day), but not using that model. Where Erik goes wrong, in my opinion, is in adopting this flawed definition of the software architect role.

Developers should be perfectly competent to do their own thinking. If they’re not, no software architect will be able to help. Even if that person were capable of handling the load of providing detailed directions for several developers, doing so would prevent them from attending to their own areas of concern. It would require being able to anticipate and make all the functional design decisions up front, as well as communicating them to multiple “typists” quicker than they could mindlessly hammer out the code, while still having time to consider all the requisite cross-cutting, quality of service considerations for an application. Do we really need to bother with experiments to determine that this is beyond improbable?

There is a separation between the developer role and software architect role, but it needs to be clearly understood as a difference in focus. A developer’s focus is more vertical, concentrating on implementing functionality while maintaining/improving/not harming the quality of service (aka the horrendously misnamed “non-functional”) aspects of the system. A software architect’s focus should be more horizontal, concentrating on the architecturally significant quality of service aspects that will enable the system to meet the needs of its stakeholders (or, at least, not unreasonably prevent the system from doing so).

The difference between the roles is a matter of thinking at different levels of abstraction, not of one being superior to the other. The two mindsets are both necessary and complementary. A high-quality architectural design without quality implementation cannot produce a high-quality system. Likewise, where there is implementation without architectural coordination, the quality of the resulting system is a matter of chance. For very small systems maintained by small, permanent teams, it may not be necessary for these roles to be distinct. In my experience, however, dealing with both the broadly general and the very specific at the same time scales poorly. Non-trivial systems quickly come to require architectural leadership, otherwise people are left fixing problems caused by problems fixed previously.

Leadership has been the main theme of almost all of my posts this month and has figured prominently in several others this year. Many of these posts are assigned to the “Architectural Practice” category. This is because I absolutely believe that the software architect role is a leadership role. That being said, I don’t consider the Taylorist model to be effective leadership practice. In fact, I consider it an anti-pattern.

I’ve long been an advocate of a more collaborative model of practice. Rather than dictating, the software architect’s role should be one of consensus building and communication. Significant architectural disagreement indicates an issue with one or more members of the team. Significant uncertainty about the architecture indicates an issue with the software architect role.

Effective systems require both breadth and depth of effectiveness. This holds true for software systems as much as social systems. For the social systems producing software systems for use by other social systems (i.e. software development teams), it is imperative.

Abstract Dangers – When ‘And’ Meets ‘Or’

There’s an old saying that if you put one foot in a bucket of ice and the other in a bucket of boiling water, on average you’re comfortable. Sometimes analyzing information in the aggregate obscures rather than enlightens.

A statistician named Francis Anscombe pointed out this same principle in a more visual (though less colorful) manner more than forty years ago. His eponymous quartet consists of four small datasets that share nearly identical summary statistics, yet look nothing alike when graphed.
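
Anscombe’s point is easy to reproduce. Below is a minimal sketch in Python (standard library only, using the published quartet values) showing just how thoroughly the aggregates mask the differences:

```python
from statistics import mean, variance

# Anscombe's quartet: four datasets with nearly identical summary
# statistics, but radically different shapes when plotted.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]   # x shared by sets I-III
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]
y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]
x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for label, xs, ys in [("I", x123, y1), ("II", x123, y2),
                      ("III", x123, y3), ("IV", x4, y4)]:
    # Each line prints (to rounding) the same aggregates:
    # mean(x)=9.00, mean(y)=7.50, var(x)=11.00, var(y)~4.13, r~0.816
    print(f"{label:>3}: mean(x)={mean(xs):.2f} mean(y)={mean(ys):.2f} "
          f"var(x)={variance(xs):.2f} var(y)={variance(ys):.2f} "
          f"r={pearson_r(xs, ys):.3f}")
```

Plotted, though, the four sets look nothing alike: a diffuse linear trend, a smooth curve, a tight line with a single outlier, and a vertical stack with a single outlier. The aggregates are interchangeable; the realities are not.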

It’s an idea that I’ve been meaning to write about for a while, but was brought back to mind last week while reading an article on the Austrian school of economic theory posted on a site about medical practice and health care in the U.S. (diversity of interests and a very broad reading list is something I find useful, but that’s a topic for another day). The relevant passage:

When Ludwig von Mises began to establish a systematic theory of economics, he insisted on what he called the principle of methodological dualism: the scientific methods of the hard sciences are great to study rocks, stars, atoms, and molecules, but they should not be applied to the study of human beings. In stating this principle, he was voicing opposition to the introduction into economics of concepts such as “market equilibrium,” which were largely inspired by the physical sciences, and were perhaps motivated by a desire on the part of some economists to establish their field as a science on par with physics.

Mises remarked that human beings distinguish themselves from other natural things by making intentional (and usually rational) choices when they act, which is not the case for stones falling to the ground or animals acting on instinct. The sciences of human affairs therefore deserve their own methods and should not be tempted to apply the tools of the physical sciences willy-nilly. In that respect, Mises agreed with Aristotle’s famous dictum that “It is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits.”

I find myself agreeing and disagreeing with this at the same time. Human behavior is far from being as predictable as gravity, and I agree with this for exactly the reason that I disagree (at least in part) with the second paragraph: it is a mistake to characterize human action as intentional and rational. That’s not to say that all our choices are irrational and reactionary, but that there is a blend. Not only will different people respond in different ways to the same circumstance for different reasons, but the same person may react differently, with different motivations, on another occasion. Human nature isn’t rigidly deterministic, and we treat it as such at our peril.

Tom Graves’ post “Control, complex, chaotic” makes the same observation:

Attempting to ‘control’ complexity just doesn’t work: we need to treat the complex as complex, not as a ‘controllable problem for which we don’t quite know all the rules (but will know them all Real Soon Now, honest…)’.

Yet I’m also noticing another deeper problem: misguided attempts to apply complexity-theory to things that are neither rule-bound control nor pattern-based complexity, but are inherently ‘chaotic’ – a ‘market-of-one’. Although we can identify definite patterns in health and health-care – that’s the whole basis of epidemiology, for example – neither rules nor statistics can help us deal with the blunt fact that everyone is different. The kind of patterns that we’d use in a complexity-model – probabilities, Bell-curve distributions, outliers, all that kind of thing – can all too easily mask the real underlying fact of uniqueness, from which that supposed ‘pattern’ will actually arise: somewhat like the barely-visible deep-randomness that underlies the visible patterns of Brownian-motion.

Trying to force something into a mold which it doesn’t fit is unlikely to work well.

Abstraction can be useful in understanding the contexts that influence the architecture of the problem. Designing an effective solution, however, will involve not just integrating the concerns of those contexts, but also dealing with any emergent challenges. The variability of human nature (in other words, sometimes the members of those contexts will not all think and act alike) can be one such emergent challenge.

Tom Graves again, this time from his “On mass-uniqueness”:

In practice, the scope of every system will comprise a mix of sameness and uniqueness – of predictable and unpredictable, certain and uncertain. If we design only on an assumption of sameness – as IT-systems often are – we set ourselves up for guaranteed failure. The same applies if – as is all too common – we say that our IT-system will handle all of the ‘sameness’ part of the context, and that the ‘not-sameness’ will be Somebody Else’s Problem – without giving any means for that supposed ‘somebody else’ to be able to address the rest of the problem, or to link it up with the parts of the context that our system does handle.

The first requirement to make something that works in the real-world is to design for uniqueness, not against it.

In other words, a solution based on a poorly understood problem is unlikely to be a good fit. Abstraction is one tool for understanding the problem, but it doesn’t provide the whole picture. Shades of gray (black and white) are more likely than black or white.

When Reality Gets in the Way – Applying Systems Thinking to Design

It’s easy to sympathize with this:

It’s also more than a little dangerous if our desire for simplicity moves us to act as if reality isn’t as complex as it is. Take, for example, a recent tweet from John Allspaw about over-simplification:

My observation in return:

As I noted in my previous post, it’s part of human nature to gravitate towards easy answers. We are conditioned to try to impose rules on reality, even when those rules are mistaken. Sometimes this is the result of treating symptoms in an ad hoc manner, as evidenced by this recent Twitter exchange:

This goes by the name of the “balloon effect”: pressure on one area of the problem just pushes it into another, the way squeezing a balloon displaces the air inside.

Sometimes our response is born of bias. In sociology, for example, this phenomenon has its own name: “normative sociology”:

The whole “normative sociology” concept has its origins in a joke that Robert Nozick made, in Anarchy, State and Utopia, where he claimed, in an offhand way, that “Normative sociology, the study of what the causes of problems ought to be, greatly fascinates us all”(247). Despite the casual manner in which he made the remark, the observation is an astute one. Often when we study social problems, there is an almost irresistible temptation to study what we would like the cause of those problems to be (for whatever reason), to the neglect of the actual causes. When this goes uncorrected, you can get the phenomenon of “politically correct” explanations for various social problems – where there’s no hard evidence that A actually causes B, but where people, for one reason or another, think that A ought to be the explanation for B.

Some historians likewise have a tendency to over-simplify, fixating on aspects that “ought to be” rather than determining what is (which is another way of saying what can be reasonably defended).

Decision-making is the essence of design. Thought processes that poorly match reality, whether due to bias or insufficient analysis or both, are unlikely to yield optimal results. Systems thinking, “…viewing ‘problems’ as parts of an overall system, rather than reacting to specific parts, outcomes or events, and thereby potentially contributing to further development of unintended consequences”, is an approach more likely to achieve a successful outcome.

When the end result will be a software system integrated into a social system (i.e. a system that is a component of an ecosystem), it makes sense to understand the problem space as the as-is system to be remediated. This holds true whether that as-is system is an automated one or not. While it is not feasible to minutely analyze the problem space, much less design in detail the solution, failing to appreciate the full context on a high level presents risks. These risks include not only those inherent in satisfying the needs of the overlooked context(s), but also those challenges that emerge from the interactions of the various contexts that make up the problem space.

Deciding on a particular design direction is, obviously, a decision. Deferring that determination is, likewise, a decision. Refusing to make a definite decision is a decision as well. The answer is not to push all decisions off to as late a date as possible, but to make decisions in the moment that are defensible given the information at hand. Looking at the problem space as a whole in the context of its ecosystem provides the perspective required to make the optimal decision.

#4U2U – Canned Competency, Values & Pragmatism

home canned food

Not quite two years ago, I put up a quick post entitled “The Iron Law of Tools”, which in its essence was: “that which does for you, can do to you” (whence comes the #4U2U in the title of this post). That particular post focused on ORMs (Entity Framework to be specific), but the warning equally applies to libraries and frameworks for other technical issues as well as process, methodology and techniques.
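
To make that warning concrete, here’s a minimal, runnable sketch of the classic “N+1 query” trap. It uses SQLAlchemy with an in-memory SQLite database purely as an illustrative stand-in for the Entity Framework scenario of the original post; the models and data are invented for the example:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base, joinedload, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customers.id"))
    customer = relationship(Customer, lazy="select")  # lazy loading (the default)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# Record every statement the ORM sends to the database on our behalf.
statements = []
event.listen(engine, "before_cursor_execute",
             lambda conn, cursor, stmt, *args: statements.append(stmt))

with Session(engine) as session:
    session.add_all([Customer(id=i, name=f"customer {i}") for i in range(1, 6)])
    session.add_all([Order(id=i, customer_id=i) for i in range(1, 6)])
    session.commit()

    statements.clear()
    for order in session.query(Order).all():  # one SELECT for the orders...
        _ = order.customer.name               # ...plus one more per order
    print("lazy:", len(statements))           # expect 6 (1 + N): the N+1 trap

    statements.clear()
    eager = session.query(Order).options(joinedload(Order.customer)).all()
    for order in eager:                       # a single SELECT with a JOIN
        _ = order.customer.name               # already loaded; no extra I/O
    print("eager:", len(statements))          # expect 1
```

The tool “did for” us by making the relationship traversal invisible; the same invisibility “did to” us by hiding the extra round trips. The eager-load hint only helps if you understand what the abstraction is doing on your behalf.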

Libraries, frameworks, and processes (“tools” from this point forward) can make things easier by allowing you to concentrate on what to do rather than how to do it (via high-level abstractions and/or conventions). However, tools are not a substitute for understanding. Neither the Law of Unintended Consequences nor Murphy’s Law has been repealed. Without an adequate understanding of how something works, you cannot assess the costs of the trade-offs being made (and there are trade-offs involved, you can rely on that). Understanding is likewise necessary to recognize and fix those situations where the tool causes more problems than it solves. As Pawel Brodzinski observed in his post “A Fool With a Tool Is Still a Fool”:

Any time a discussion goes toward tools, any tools really, it’s a good idea to challenge the understanding of a tool itself and principles behind its successes. Without that, shared success stories bear little value in other contexts, thus end result of applying the same tools will frequently result in yet another case of a cargo cult. While it may be good for training and consulting businesses (aka prophets) it won’t help to improve our organizations.

A fool with a tool will remain a fool, only more dangerous since now they’re armed.

Pawel’s point regarding cargo cults is particularly important. Lack of understanding of how a particular effect proceeds from a given cause often manifests as dogmatic assertions in defense of some “universal truth”. The closest thing I’ve found to a universal truth of software development is that it’s very unlikely that anything is universally applicable to every context.

It’s dangerous to conflate adherence to a tool with one’s core values, such that anyone who disagrees is “wrong” or “deluded” or “unprofessional”. That being said, values can provide a frame of reference in understanding someone’s position in regard to a tool. In “The TDD Divide: Everyone is Right”, Cory House addresses the current dispute over Test-Driven Development and notes (rightly, in my opinion):

The world is a messy place. Deadlines loom, team skills vary widely, and the impact of bugs varies greatly by industry. Ultimately, we write software to make money and solve problems. Tests are a tool that help us do both. Consider the context to determine which testing style fits for your project.

Uncle Bob is right. Quality matters. Separation of concerns and unit testing help assure the utmost quality, speed, and flexibility.

DHH is right. Sometimes the cost of unit tests exceeds their benefit. Some of the benefit of automated testing can be achieved through automated integration testing instead.

You need to understand what a tool offers, what it costs, and the result of that equation in relation to what’s important in your context. With that understanding, you can make a rational choice.