Inflection Points and the Ingredients of Innovation

WWI Photo Montage

One of my hobbies is the study of history. Not the dry, dusty, “…on this date these people did that” type of history; rather, I’m fascinated by the story of how real people interacted with each other and the world around them. I’m interested in the brilliance and the stupidity, the master strokes and the blunders, the nobility and the infamy, all of which are often found combined in the same person. I like the type of history that, in explaining the world of yesterday, helps make sense of the world of today.

World War I is one of those large-scale human tragedies that holds lessons for today. Teetering on the edge of two very different times, it mixed the tools of industrial warfare (machine guns, armored vehicles, and air power) with infantry tactics barely different from those of a hundred years previous. Over a little more than four years, old and new collided cataclysmically. It provides a brutally stark picture of the uneven nature of innovation and how that uneven nature can yield both triumph and tragedy.

So, just what does this have to do with information technology, systems development, and software architecture?

Quite a lot, actually, if you look beyond the surface.

Over the last few weeks, I’ve been having a running conversation with Greger Wikstrand, on this blog and on Twitter, about various aspects of IT. “We Deliver Decisions (Who Needs Architects?)” wove together some observations about situational awareness and confirmation bias into a discussion of the need for, and aim of, the practice of architectural design. In response, Greger posted a link to a video of himself and Woody Zuill discussing serendipity in software development. That prompted my last post, “Fixing IT – Too Big to Succeed?”, which discussed how an embedded IT approach could foster the organizational agility needed to provide IT both effectively and efficiently.

Greger responded with “Serendipity and successful innovation”, making a number of extremely important points. For example, while fortune does favor the prepared, recognition of an opportunity is not enough. Likewise, an organizational structure that doesn’t impede innovation is important, but not impeding innovation is not the same as actively driving it forward. Evaluation of ideas is critical as well, in order to differentiate “…between the true gold of serendipity and the fool’s gold of sheer coincidence”. That evaluation, however, also needs to be nimble. He closed the post with an example of small-company innovation on the part of a large organization.

That last part was what brought WWI to mind. The first and second World Wars were very different affairs (both horrible, but in different ways). Quite a lot of the technologies that we think of as characteristic of the second (e.g. warplanes and tanks) originated in the first. Having a technology and discovering how best to employ it are two very different things. Certainly there was some technical evolution over the intervening twenty years, but the biggest change was in how those technologies were employed.

In some cases, the opposite problem was in play. The organizational structures used by the combatants had proved their effectiveness for over a hundred years. Mission tactics, a.k.a. Auftragstaktik (whereby a commander, instead of giving detailed instructions, provides a desired outcome to his subordinates, who are then responsible for achieving that outcome within broad guidelines), was also well-known at the time. Radio existed, but it was not yet portable enough to link dispersed, mobile front-line units with headquarters. Instead, that communication was limited to field phones and runners, and tactical communication was limited by the range of the human voice, forcing units to bunch up.

Without all the ingredients (technology, the understanding of how to employ it, and an effective organizational structure), outcomes are likely to be poor. Information must flow both up and down the hierarchy; otherwise decisions are made blindly and without coordination. The parts must not only perform their individual functions, but must also collaborate. Organizations that fail to operate effectively across the full stack fail to operate effectively at all.

Technical Debt and Rolling Re-writes (Who Needs Architects?)

If you think building a system is challenging, try maintaining one.

Tom Cagley’s recent post “Plan to Throw One Away Re-Read Saturday: The Mythical Man-Month, Part 11” was a good reminder that while “technical debt” may be something currently on the radar for many, it’s far from a new phenomenon. The concept of instant legacy applications was in place forty years ago when Frederick Brooks wrote his masterpiece, even if they weren’t called that. As Tom observed in the post:

Rarely is the first attempt useful to the end consumer, and the usefulness of that first attempt is less in the code than in the feedback it generates. Software development is no different. The initial conceptual design and anticipated technical architecture of a large project rarely stands up to the rigors of the discovery process, and those designs should be learned from and then thrown away.

The faulty assumptions and design flaws accumulate not only from sprint to sprint leading up to the initial release, but also from release to release. Even when a product is that seriously flawed, throwing it away and starting over is easier said than done. While sunk costs cannot be recovered, too sanguine an attitude towards them may not enhance your credibility with the customer. Having to pay for the same thing over and over can make them grumpy.

This sets up a dilemma, one that frequently leads to living with technical debt and attempting to incrementally patch it up. There are limits, however, to the number of band-aids that can be applied. This might make it tempting to propose a rewrite, but as Erik Dietrich stated in “The Myth of the Software Rewrite”:

Sure, they know things now that they didn’t know when they started on this code 3 years ago. But won’t the same thing be true in 3 years? Won’t the developers then be looking at the code and saying, “this is a mess — if only we knew in 2015 what we now know in 2018!” And, beyond that, what makes you think that giving the same group of people the same marching orders won’t result in the same kind of code?

The “big rewrite from scratch because this is a mess” is a losing strategy.

Fortunately, there is an alternative. Quoting Tom Cagley again from the same post as above:

If change is both inevitable and good (within limits), then both systems and organizations (a type of system) need to be engineered to support and facilitate change. Architecturally, techniques such as modularization, object-oriented design and other processes that foster simplification and incremental change create an environment in which change isn’t avoided, but rather encouraged.

While we may laugh at the image of changing a tire while the vehicle is in motion, it is an accurate metaphor. Customers expect flexibility and change on the go; waiting equals lost business. The keys to evolving in place are having an intentionally designed, modular architecture and an understanding of where the weaknesses lie. Both of these are concerns that reside squarely on the architect’s plate.

Modularity not only makes an application more easily maintainable via separation of concerns, but it also embraces change by making components replaceable. This is one of the qualities that has made microservices such a hot topic, although it would be a mistake to think that microservices are the only way (or best way in all cases) to achieve modularity.
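
To make the replaceability point concrete, here is a minimal sketch (in Python, with entirely hypothetical names) of a component hidden behind a small interface. Swapping the old implementation for a rewritten one is confined to the composition root rather than rippling through every caller, which is the mechanics behind a rolling re-write:

    # Minimal sketch (hypothetical names): a payment concern hidden behind a
    # small interface so the implementation can be replaced without touching
    # the code that depends on it.
    from abc import ABC, abstractmethod


    class PaymentGateway(ABC):
        """The seam: callers depend on this contract, not on an implementation."""

        @abstractmethod
        def charge(self, account_id: str, amount_cents: int) -> bool:
            ...


    class LegacyGateway(PaymentGateway):
        def charge(self, account_id: str, amount_cents: int) -> bool:
            # Imagine the old, debt-laden integration living here.
            print(f"[legacy] charging {amount_cents} to {account_id}")
            return True


    class RewrittenGateway(PaymentGateway):
        def charge(self, account_id: str, amount_cents: int) -> bool:
            # The replacement component, rewritten one piece at a time.
            print(f"[new] charging {amount_cents} to {account_id}")
            return True


    class CheckoutService:
        """Depends only on the abstraction, so either gateway can be plugged in."""

        def __init__(self, gateway: PaymentGateway) -> None:
            self._gateway = gateway

        def checkout(self, account_id: str, amount_cents: int) -> bool:
            return self._gateway.charge(account_id, amount_cents)


    if __name__ == "__main__":
        # Replacing the component is a one-line change at the composition root.
        service = CheckoutService(RewrittenGateway())
        service.checkout("acct-42", 1999)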

Modularity brings benefits beyond the purely technical as well. Rewrites of a fraction of an application are more easily sold than big-bang efforts. Demonstrating forethought (while you can’t predict what the change will be, predicting the need for change is more of a sure thing) shows concern for the customer’s welfare, which should make for a better relationship.

Being able to throw a system away a little at a time allows us to keep the car on the road while it changes and adapts to changing conditions.

Modeling the Evolution of Software Architecture

Herve Lourdin’s tweet wasn’t aimed at modeling, but the image nicely illustrates a critical deficiency in modeling languages: the inability to show the evolution of a system over time. Structure and behavior are captured, but only for a given point in time. Systems and their ecosystems, however, are not static. A map of the destination without reference to the point of origin or the rationale for choices is of limited use in communicating the what, how, and why behind architectural decisions.

“The Road Ahead for Architectural Languages” on InfoQ (re-published from IEEE Software) recently noted the following reasons for not using an architectural language (emphasis added):

  • formal ALs’ need for specialized competencies with insufficient perceived return on investment,
  • overspecification as well as the inability to model design decisions explicitly in the AL, and
  • lack of integration in the software life cycle, lack of mature tools, and usability issues.

All of the items in bold above represent usability and value issues: a failure to communicate. As Simon Brown observed in “Simple Sketches for Diagramming Your Software Architecture”:

In today’s world of agile delivery and lean startups, some software teams have lost the ability to communicate what it is they are building and it’s no surprise that these teams often seem to lack technical leadership, direction and consistency. If you want to ensure that everybody is contributing to the same end-goal, you need to be able to effectively communicate the vision of what it is you’re building. And if you want agility and the ability to move fast, you need to be able to communicate that vision efficiently too.

Simon is a proponent of a sketching technique that addresses many of these communication failures:

The goal with these sketches is to help teams communicate their software designs in an effective and efficient way rather than creating another comprehensive modelling notation. UML provides both a common set of abstractions and a common notation to describe them, but I rarely find teams that are using either effectively. I’d rather see teams able to discuss their software systems with a common set of abstractions in mind rather than struggling to understand what the various notational elements are trying to show.

Simon’s colleague, Robert Annett, recently posted “Diagrams for System Evolution”, which proposes using the color-coding scheme from diff tools to indicate change: red = remove, blue = change, green = new. Simon followed this up with two posts of his own, “Diff’ing software architecture diagrams” and “Diff’ing software architecture diagrams again”, which dealt with applying Robert’s ideas to Simon’s structurizr.com tool.
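
Robert’s color scheme is tool-agnostic and easy to approximate. The following is a minimal sketch (in Python, with made-up element names and a made-up before/after inventory, not tied to Structurizr or any other tool) that diffs two versions of an architecture’s element list and assigns the colors described above:

    # Diff-style coloring for architecture elements:
    # red = removed, blue = changed, green = new (unchanged elements get no highlight).

    def color_elements(before: dict, after: dict) -> dict:
        """Map each element name to a display color based on how it changed."""
        colors = {}
        for name in before.keys() | after.keys():
            if name not in after:
                colors[name] = "red"      # removed in the target architecture
            elif name not in before:
                colors[name] = "green"    # newly introduced
            elif before[name] != after[name]:
                colors[name] = "blue"     # present in both versions, but changed
            else:
                colors[name] = "grey"     # unchanged, no highlight
        return colors


    current = {"Web App": "ASP.NET", "Reports": "Crystal Reports", "Orders DB": "SQL Server"}
    planned = {"Web App": "ASP.NET Core", "Orders DB": "SQL Server", "Order API": "REST service"}

    for element, color in sorted(color_elements(current, planned).items()):
        print(f"{color:>5}  {element}")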

Simon’s work, coupled with Robert’s ideas, addresses many of the highlighted deficiencies listed above (it even touches on the third bullet that I didn’t emphasize). Ruth Malan’s work also contains some ideas that are vital (in my opinion) to being able to visualize and communicate important design considerations – explicit quality of service and rationale elements along with organizational context elements. A further enhancement might be incorporating these into a platform that can tie elements of software architecture together with elements of solution and enterprise architecture, such as the one proposed by Tom Graves.

Given the need for agility, it might seem strange to be talking about modeling, design documentation, and architectural languages. The fact is, however, that many of us deal with inherently complex systems in inherently complex ecosystems. Without the ability to visualize a design in its context, we run the risk of either slowing down or going down. Not everyone can afford to “move fast and break things”.

Microservices – Sharpening the Focus

Motion Blurred London Bus

While it was not the genesis of the architectural style known as microservices, the March 2014 post by James Lewis and Martin Fowler certainly put it on the software development community’s radar. Although the level of interest generated has been considerable, the article was far from an unqualified endorsement:

Despite these positive experiences, however, we aren’t arguing that we are certain that microservices are the future direction for software architectures. While our experiences so far are positive compared to monolithic applications, we’re conscious of the fact that not enough time has passed for us to make a full judgement.

One reasonable argument we’ve heard is that you shouldn’t start with a microservices architecture. Instead begin with a monolith, keep it modular, and split it into microservices once the monolith becomes a problem. (Although this advice isn’t ideal, since a good in-process interface is usually not a good service interface.)

So we write this with cautious optimism. So far, we’ve seen enough about the microservice style to feel that it can be a worthwhile road to tread. We can’t say for sure where we’ll end up, but one of the challenges of software development is that you can only make decisions based on the imperfect information that you currently have to hand.

In the course of roughly fourteen months, Fowler’s opinion has gelled around the “reasonable argument”:

So my primary guideline would be don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.

This mirrors what Sam Newman stated in “Microservices For Greenfield?”:

I remain convinced that it is much easier to partition an existing, “brownfield” system than to do so up front with a new, greenfield system. You have more to work with. You have code you can examine, you can speak to people who use and maintain the system. You also know what ‘good’ looks like – you have a working system to change, making it easier for you to know when you may have got something wrong or been too aggressive in your decision making process.

You also have a system that is actually running. You understand how it operates, how it behaves in production. Decomposition into microservices can cause some nasty performance issues for example, but with a brownfield system you have a chance to establish a healthy baseline before making potentially performance-impacting changes.

I’m certainly not saying ‘never do microservices for greenfield’, but I am saying that the factors above lead me to conclude that you should be cautious. Only split around those boundaries that are very clear at the beginning, and keep the rest on the more monolithic side. This will also give you time to assess how mature you are from an operational point of view – if you struggle to manage two services, managing 10 is going to be difficult.

In short, the application architectural style known as microservice architecture (MSA) is unlikely to be an appropriate choice for the early stages of an application. Rather, it is a style most likely to be migrated to from a more monolithic beginning. Some subset of applications may benefit from that form of distributed componentization at some point, but distribution, at any degree of granularity, should be based on need. Separation of concerns and modularity do not imply a need for distribution. In fact, poorly planned distribution may actually increase complexity and coupling while destroying encapsulation. Dependencies must be managed whether they are local or remote.

This is probably a good point to note that there is a great deal of room between a purely monolithic approach and a full-blown MSA. Rather than a binary choice, there is a wide range of options between the two. The fractal nature of the environment we inhabit means that responsibilities can be described as singular and separate without being required to share the same granularity. Monoliths can be carved up and the resulting component parts can still be considered monolithic compared to an extremely fine-grained, sub-application microservice, and that’s okay. The granularity of the partitioning (and the associated complexity) can be tailored to the desired outcome (such as making components reusable across multiple applications or more easily replaceable).

The moral of the story, at least in my opinion, is that intentional design concentrating on separation of concerns, loose coupling, and high cohesion is beneficial from the very start. Vertical (functional) slices, perhaps combined with layers (what I call “dicing”), can be used to achieve these ends. Regardless of whether the components are to be distributed at first, designing them with that in mind from the start will ease any transition that comes in the future, without ill effects for the present. Neglecting these issues risks hampering, if not outright preventing, breaking components out at a later date without resorting to a rewrite.
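
As an illustration of what “designing them with that in mind” can look like, here is a minimal sketch (in Python, with hypothetical names) of a vertical slice exposed through a coarse-grained boundary. Callers see only the contract; whether the slice is fulfilled in-process today or by a remote adapter later becomes a deployment decision rather than a redesign:

    # Minimal sketch: a "billing" slice behind a coarse-grained boundary.
    from abc import ABC, abstractmethod
    from dataclasses import dataclass


    @dataclass
    class InvoiceRequest:
        customer_id: str
        line_items: list          # coarse-grained: one request, one response


    @dataclass
    class InvoiceResult:
        invoice_id: str
        total_cents: int


    class Billing(ABC):
        """The boundary of the billing slice."""

        @abstractmethod
        def create_invoice(self, request: InvoiceRequest) -> InvoiceResult:
            ...


    class InProcessBilling(Billing):
        """Today: just another module inside the monolith."""

        def create_invoice(self, request: InvoiceRequest) -> InvoiceResult:
            total = sum(item["price_cents"] for item in request.line_items)
            return InvoiceResult(invoice_id=f"INV-{request.customer_id}", total_cents=total)


    class RemoteBilling(Billing):
        """Later, if warranted: same contract, fulfilled by a separate service.
        (The remote call is deliberately stubbed out; this is an illustration only.)"""

        def create_invoice(self, request: InvoiceRequest) -> InvoiceResult:
            raise NotImplementedError("would delegate to a billing service over the network")


    if __name__ == "__main__":
        billing: Billing = InProcessBilling()
        result = billing.create_invoice(
            InvoiceRequest("cust-7", [{"price_cents": 500}, {"price_cents": 1250}])
        )
        print(result)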

These same concerns apply at higher levels of abstraction as well. Rather than blindly growing a monolith that is all things to all people, adding new features should be treated as an opportunity to evaluate whether that functionality coheres with the existing application or is better suited to being a service from an external provider. Just as the application architecture should aim for modularity, so too should the solution architecture.

A modular design is a flexible design. While we cannot know up front the extent of change an application will undergo over its lifetime, we can be sure that there will be change. Designing with flexibility in mind means that change, when it comes, is less likely to be an existential crisis. As Hayim Makabee noted in his write-up of Rotem Hermon’s talk, “Change Driven Design”: “Change should entail extending the system rather than refactoring.”

A full-blown MSA is one possible outcome for an application. It is, however, not the most likely outcome for most applications. What is important is to avoid unnecessary constraints and retain sufficient flexibility to deal with the needs that arise.

[London Bus Image by E01 via Wikimedia Commons.]

Institutional Amnesia, Cargo Cults and Software Development

When George Santayana stated that “Those who cannot remember the past are condemned to repeat it,” he wasn’t talking about technology. When Brenda Michelson and Ed Featherston said much the same thing recently, they were:

https://twitter.com/jetpack/status/573850405026861056
https://twitter.com/jetpack/status/573851102808010752

It’s a sad fact of life that today’s silver bullet is likely to be yesterday’s junk, which was probably the day before yesterday’s silver bullet.

Poor design choices are made for a variety of reasons. Sometimes it’s a matter of ego. Sometimes inadequate analysis is the culprit. Focusing on technology rather than problem-solving can be another pitfall. Even attempts at post-hoc justification of a prior bad decision can drive new mistakes.

An uncritical acceptance of tradition is a significant source of problem designs. Eberhard Wolff recently took a swipe at one old standard:

The stock reason for a tiered/distributed design is scalability. However, it’s not a given that distributing X horizontal layers across Y machines (each machine hosting a single layer) will yield better results than Y machines, each with all X layers deployed together. The context in which this sort of distribution makes sense is far from universal. Even when the costs of distribution are outweighed by the benefits, traditional monolithic horizontal layers will likely be less efficient than vertical slices. One of the purported benefits of microservices is the ability to scale independently according to business concerns (vertical slices organized around bounded contexts) rather than technology concerns (horizontal layers).

The mention of microservices brings to mind the problem of jumping on bandwagons. How many applications currently under development are being designed using this architectural style because it’s the “next big thing” rather than because the style fits the problem? Sam Newman, author of O’Reilly’s Building Microservices, in “Microservices for Greenfield?”, even states that he considers the style to be more suitable for evolving an existing system rather than building from scratch:

I remain convinced that it is much easier to partition an existing, “brownfield” system than to do so up front with a new, greenfield system. You have more to work with. You have code you can examine, you can speak to people who use and maintain the system. You also know what ‘good’ looks like – you have a working system to change, making it easier for you to know when you may have got something wrong or been too aggressive in your decision making process.

You also have a system that is actually running. You understand how it operates, how it behaves in production. Decomposition into microservices can cause some nasty performance issues for example, but with a brownfield system you have a chance to establish a healthy baseline before making potentially performance-impacting changes.

I’m certainly not saying ‘never do microservices for greenfield’, but I am saying that the factors above lead me to conclude that you should be cautious. Only split around those boundaries that are very clear at the beginning, and keep the rest on the more monolithic side. This will also give you time to assess how mature you are from an operational point of view – if you struggle to manage two services, managing 10 is going to be difficult.

This same over-eagerness is present in front-end development as much as back-end development. Stefan Tilkov recently tweeted regarding the trend of jumping straight into complex JavaScript framework applications rather than evolving into them based on need:

In my opinion, the key to effective design is being able to give a good answer when asked “why”. Being able to articulate the reasons behind the choices made is critical to justifying them. By reasons, I mean logical explanations of how the techniques chosen contribute to the desired ends. Neither “X recommends this” nor “This is what everybody’s doing” counts. Designing, developing, and evolving software systems is not a game of following a recipe. In the words of Grady Booch:

#ShadowSocialMedia or Why Won’t People Use the Product the Way They’re Supposed to

Scott Berkun dislikes the way people are using images to bypass Twitter’s 140 character limit:

His point is very valid, but:

Which is the issue. Sometimes there’s a need to go beyond that limit. Sure, you can chunk your thoughts up across multiple tweets, but users find it burdensome to respect Twitter’s constraint on the amount of text per tweet. Constrained customers, assuming they stick with a product, tend to come up with “creative” solutions to that product’s shortcomings, solutions that reflect what they value. The customers’ values may well conflict with the developers’. When “conflict” and “customer” are in the same sentence, there’s generally a problem.

Berkun’s response to @honatwork’s rebuttal nearly captures the issue:

I say “nearly”, because Twitter was built long before 2015. The problem is that it’s 2015 and Twitter has not evolved to meet a need that clearly exists.

In the IT world, it’s common to hear terms like “Shadow IT” or “Rogue IT”. Both refer to users (i.e. customers) going beyond the pale of approved tools and techniques to meet a need. This poses a problem for IT in that the customer’s solution may not incorporate things that IT values, and retrofitting those concerns later is far more difficult. Taking a “products, not projects” approach can minimize the need for customer “creativity”, both for in-house IT and for external providers.

Trying to hold back the tide just won’t work, because the purpose of the system is to meet the customers’ needs, not respect the designers’ intent.

“Avoiding Platform Rot” on Iasa Global Blog

just never had the time to keep up with the maintenance

Is your OS the latest version? How about your web server? Database server? If not now, when?

A “no” answer to the first three questions is likely not that big a deal. There can be advantages to staying off the bleeding edge. That being said, the last question is the key one. If the answer to that is “I haven’t thought about it”, then there’s potential for problems.

See the full post on the Iasa Global Blog (a re-post, originally written for the old Iasa blog).

Emergence versus Evolution

You lookin' at me?

Hayim Makabee’s recent post, “The Myth of Emergent Design and the Big Ball of Mud”, encountered a relatively critical reception on two of the LinkedIn groups we’re both members of. Much of that resistance seemed to stem from a belief that the choice was between Big Design Up Front (BDUF) and Emergent Design. Hayim’s position, with which I agree, is that there is a continuum of design, with BDUF and Emergent Design representing the extremes. His position, with which I also agree, is that both extremes are unlikely to produce good results, and that the answer lies in between.

The Wikipedia definition of Emergent Design cited by Hayim, taken nearly a word for word from the Agile Sherpa site, outlines a No Design Up Front (NDUF) philosophy:

With Emergent Design, a development organization starts delivering functionality and lets the design emerge. Development will take a piece of functionality A and implement it using best practices and proper test coverage and then move on to delivering functionality B. Once B is built, or while it is being built, the organization will look at what A and B have in common and refactor out the commonality, allowing the design to emerge. This process continues as the organization continually delivers functionality. At the end of an agile or scrum release cycle, Development is left with the smallest set of the design needed, as opposed to the design that could have been anticipated in advance. The end result is a smaller code base, which naturally has less room for defects and a lower cost of maintenance.

Rather than being an unrealistically extreme statement, this definition meshes with ideas that people hold and even advocate:

“You need an overarching vision, a “big picture” design or architecture. TDD won’t give you that.” Wrong. TDD will give you precisely that: when you’re working on a large project, TDD allows you to build the code in small steps, where each step is the simplest thing that can possibly work. The architecture follows immediately from that: the architecture is just the accumulation of these small steps. The architecture is a product of TDD, not a pre-designed constraint.

Portion of a comment to Dan North’s “PUBLISHED: THE ART OF MISDIRECTION”

Aspects of a design will undoubtedly emerge as it evolves. Differing interpretations of requirements, information deficits between the various parties, and changing circumstances all conspire to make it so. However, that does not mean the act of design is wholly emergent. Design connotes activity, whereas emergence implies passivity. A passive approach to design is, in my opinion, unlikely to succeed in resolving the conflicts inherent in software development. It is the resolution of those conflicts which allows a system to adapt and evolve.

I’ve previously posted on the concept of expecting a coherent architecture to emerge from this type of blinkered approach. Both BDUF and NDUF hold out tremendous risk of wasted effort. It is as naive to expect good results from ignoring information (NDUF) as it is to think you possess all the information (BDUF). Assuming a relatively simple system, ignoring obvious commonality and obvious need for flexibility in order to do the “simplest thing that could possibly work, then refactor” guarantees needless rework. As the scale grows, the likelihood of conflicting requirements will grow. Resolving those conflicts after code for one or more features is in place will be more likely to yield unsatisfactory compromises.

The biggest weakness of relying on refactoring is that there are well-documented limits to what people can process. As the level of abstraction goes down, the number of concerns goes up. This same limit that dooms BDUF to failure limits the ability to refactor large systems into a coherent whole.

Quality of service issues are yet another problem area for the “simplest thing that could possibly work” method. By definition, that concentrates on functionality to the exclusion of non-functional concerns. Security and scalability are just two concerns that typically fare poorly when bolted on after the fact. Premature optimization is to be avoided, but being aware of the expected performance environment can help you avoid blind alleys.

One area where I do agree with the TDD advocate quoted above is that active design imposes constraints. The act of design involves defining structure. As Ruth Malan has said, “negative space is telling; as is what it places emphasis on”. Too little structure poses as much risk as too much.

An evolutionary design process, such as Hayim’s Adaptable Design Up Front (ADUF), recognizes the futility of predicting the future in minute detail (BDUF) without surrendering to formlessness (NDUF). Experience about which parts of a system are most likely to change is invaluable. That experience, coupled with reasonable planning based on what is known about the big picture of the current release and about follow-up releases, can be used to drive a design that strikes the right balance – flexible, without being over-engineered.

[Photograph by Jose Luis Martinez Alvarez via Wikimedia Commons.]

Avoiding Platform Rot

(Mirrored from the Iasa Blog)

just never had the time to keep up with the maintenance

Is your OS the latest version? How about your web server? Database server? If not now, when?

A “no” answer to the first three questions is likely not that big a deal. There can be advantages to staying off the bleeding edge. That being said, the last question is the key one. If the answer to that is “I haven’t thought about it”, then there’s potential for problems.

“Technical Debt” is currently a hot topic. Although the term normally brings to mind hacks and quick fixes, more subtle issues can be technical debt as well. A slide from a recent Michael Feathers presentation (slide 5) is particularly applicable to this:

Technical Debt is: the Effect of Unavoidable Growth and the Remnants of Inertia

New features tend to be the priority for systems, particularly early in their lifecycle. The plumbing (that which no one cares about until it quits working) tends to be neglected. Plumbing that is outside the responsibility of the development team (such as operating systems and database management systems) is likely to get the least attention. This can lead to systems running on platforms that are past their end-of-support date, or to a scramble to verify that the system can run on a later version. The former carries significant security risks, while the latter is hardly conducive to adequately testing that the system will function identically on the updated platform. Additionally, new capabilities, as well as operational and performance improvements, may be missed if no one is paying attention to the platform.
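
One low-tech way to keep the platform on the radar is to make end-of-support dates visible and regularly reviewed. The following is a minimal sketch (in Python); the component inventory and dates are invented for illustration, and a real check would pull them from an actual inventory and the vendors’ published lifecycle information:

    # Flag platform components that are past, or approaching, end of support.
    from datetime import date, timedelta

    WARNING_WINDOW = timedelta(days=180)

    # component -> (installed version, end-of-support date); values are hypothetical
    platform_inventory = {
        "Operating System": ("AcmeOS 12", date(2024, 6, 30)),
        "Web Server":       ("ExampleHTTP 2.2", date(2026, 1, 15)),
        "Database Server":  ("SampleDB 9", date(2025, 11, 1)),
    }


    def check_platform(inventory: dict, today: date) -> None:
        for component, (version, end_of_support) in sorted(inventory.items()):
            if today > end_of_support:
                status = "PAST END OF SUPPORT"
            elif today > end_of_support - WARNING_WINDOW:
                status = "support ends soon"
            else:
                status = "ok"
            print(f"{component:18} {version:18} {end_of_support.isoformat()}  {status}")


    if __name__ == "__main__":
        check_platform(platform_inventory, date.today())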

One method to help avoid these types of issues is adoption of a DevOps philosophy, such as Amazon’s:

Amazon applies the motto “You build it, you run it”. This means the team that develops the product is responsible for maintaining it in production for its entire life-cycle. All products are services managed by the teams that built them. The team is dedicated to each product throughout its lifecycle and the organization is built around product management instead of project management.

This blending of responsibilities within a single team and focus on the application as a product (something I consider extremely beneficial) lessens the chance that housekeeping tasks fall between the cracks by removing the cracks. The operations aspect is enhanced by ensuring that its concerns are visible to those developing and the development aspect is enhanced by increased visibility into new capabilities of the platform components. The Solutions Architect role, spanning application, infrastructure, and business, is well placed to lead this effort.

Holding Back the Tide

There’s an apocryphal story about King Canute (pictured) commanding the tide not to come in. Whether you subscribe to the version in which it was an example of his arrogance or the one in which he was teaching his court that there were limits to royal authority, one thing is clear: he failed to stop the tide. In this failure, he achieved something unlike almost any other early English king: people on the internet recognize his name. To paraphrase Dilbert, in order to raise your visibility, screw up (in Canute’s case, royally).

One of the latest tides to roll in is Bring Your Own Device (BYOD), where employees use their personally owned devices (typically smart phones and tablets) for work. On NetworkComputing.com, in an article posted last week, author Joe Onisick referred to it as “Bring Your Own Disaster”. It doesn’t take much imagination to realize that this phenomenon poses substantial risks to an organization. At the same time, there’s a risk to prohibiting the use of these devices. As Onisick put it:

The word “no” used to be commonplace in the vocabulary of enterprise IT and the CIO/CTO. In the past, they would have easily handled this problem of BYOD, but now the end-user with the request is an equal or senior in the company.

It takes a lot of courage, and very little brains, to reply “denied” when the CEO comes looking for a way to get his new tablet on the network.

Ironically, I read that article on the same day that “The Department of No” was posted on TechRepublic. In that article, author Patrick Gray states:

There’s an exceptionally dangerous perception in many corporate IT departments, and it is one that threatens the very existence of an internal IT department: being perceived as the “Department of No.” This description applies to IT organizations where the unstated goal of IT is to insert itself into every technology-related discussion and highlight all the reasons why an initiative won’t work. Whether IT staff is noting that a technology is unproven, IT lacks sufficient resources, or some other potentially legitimate quip, eventually a perception grows that IT exists to point out every tiny cloud on an otherwise sunny day.

Saying “No” is a great way to start a guerrilla movement. It worked for PCs, it worked for PC networking, it worked for the internet, and it will work for BYOD. When it’s easier to get forgiveness than permission, expect to be handing out a lot of pardons. Make no mistake, when the “offense” is profitable, a pardon will be forthcoming. “We can; here’s what it will cost and here are the risks” is a better response, in that the requester is transformed into a partner in the decision-making process.

An IT operation that entertains ideas and provides useful guidance is more likely to be worked with than around. As Gray put it: “When IT starts becoming a trusted advisor and group that is looked to for answers, you’ll find yourself being invited to kickoff meetings rather than called two weeks before go-live.”