Holding Back the Tide

There’s an apocryphal story about King Canute commanding the tide not to come in. Whether you subscribe to the version where it was an example of his arrogance or the one where he was teaching his court that there were limits to royal authority, one thing is clear: he failed to stop the tide. In this failure, he achieved something unlike almost any other early English king: people on the internet recognize his name. To paraphrase Dilbert, in order to raise your visibility, screw up (in Canute’s case, royally).

One of the latest tides to roll in is Bring Your Own Device (BYOD), where employees use their personally owned devices (typically smartphones and tablets) for work. In an article posted last week on NetworkComputing.com, author Joe Onisick referred to it as “Bring Your Own Disaster”. It doesn’t take much imagination to realize that this phenomenon poses substantial risks to an organization. At the same time, there’s also risk in prohibiting the use of such devices. As Onisick put it:

The word “no” used to be commonplace in the vocabulary of enterprise IT and the CIO/CTO. In the past, they would have easily handled this problem of BYOD, but now the end-user with the request is an equal or senior in the company.

It takes a lot of courage, and very little brains, to reply “denied” when the CEO comes looking for a way to get his new tablet on the network.

Ironically, I read that article on the same day that “The Department of No” was posted on TechRepublic. In that article, author Patrick Gray states:

There’s an exceptionally dangerous perception in many corporate IT departments, and it is one that threatens the very existence of an internal IT department: being perceived as the “Department of No.” This description applies to IT organizations where the unstated goal of IT is to insert itself into every technology-related discussion and highlight all the reasons why an initiative won’t work. Whether IT staff is noting that a technology is unproven, IT lacks sufficient resources, or some other potentially legitimate quip, eventually a perception grows that IT exists to point out every tiny cloud on an otherwise sunny day.

Saying “No” is a great way to start a guerrilla movement. It worked for PCs, it worked for PC networking, it worked for the internet, and it will work for BYOD. When it’s easier to get forgiveness than permission, expect to be handing out a lot of pardons. Make no mistake: when the “offense” is profitable, the pardon will be forthcoming. “We can; here’s what it will cost and here are the risks” is a better response, in that it transforms the requester into a partner in the decision-making process.

An IT operation that entertains ideas and provides useful guidance is more likely to be worked with than around. As Gray put it: “When IT starts becoming a trusted advisor and group that is looked to for answers, you’ll find yourself being invited to kickoff meetings rather than called two weeks before go-live.”

Do you have releases or escapes?

When thinking about improving the quality of a development process, the mind naturally heads in certain directions: requirements gathering and tracking, design, coding practices, and testing and quality assurance. All of these are vital components, but without a solid release management process, they can be insufficient.

Excellent code that is poorly delivered will be perceived as poor code. Faulty release management causes problems even before go-live: time spent correcting release issues will likely eat into the time scheduled for testing.

I won’t attempt to create the definitive work on release management and environments, but I will outline the system I helped create and have used over the last eleven years. It’s appropriate for development groups creating in-house applications, both internal and customer-facing. It works equally well with traditional desktop applications, smart clients, and web applications. It does not, however, encompass performance testing, which is outside the scope of this post. Performance testing will require its own dedicated environment that mirrors the production environment.

First and foremost, understand that beyond the development environment, administrative access to both production and non-production environments should be restricted. Even if you don’t have a dedicated release management team, at least two people should have that role (one primary and one backup). Those performing the release management function should not be involved in coding.

Access restrictions should not be viewed as a matter of trust, but of accountability and control. Even as lead architect and manager of a development team, I lacked (and didn’t want) the ability to make changes to environments under the control of the release management team. Aside from making the auditors happy, not having access to make changes outside of the process insulates you from accusations that you made changes outside the process. People sometimes forget to document changes, but if they lack the ability to make a change in the first place, that consideration can be eliminated when troubleshooting a deployment.

The purpose of forcing all changes into a controlled framework is to promote repeatability. Automated build and deployment tools help in this regard as well. Each environment that a build must be promoted through provides another chance to get the deployment process perfect before go-live. The first environment should catch almost all possible deployment errors, with only configuration and/or data errors left for the succeeding environments.

The next step is to construct a set of environments around your development process and the number of versions you need to support. In our case, we only deal with two versions, so we have two pre-production environment branches: current, which is the same version as production and is used for any hotfixes that may be required, and future, which hosts the release currently under development. To support our process, we have three to four (depending on the application) pre-production environments per branch, as follows:

  • Development: Used for coding, this environment typically consists of a shared database server and the virtual machines on the developers’ workstations that are used for web/application servers. As noted above, coders have unrestricted access to all components of this environment. All changes to code and database objects must originate in this environment and be promoted through the succeeding ones in order to be deployed to production.
  • DevTest: This is the first controlled-access environment and is used for integration testing of code by the development staff (for all applications with more than one developer assigned, we use a “no one tests their own code” rule). In addition to giving the development team the ability to shake down the build as a whole, it verifies that the deployment instructions are complete. As noted previously, developers have no administrative access to the servers and have only read access to the database(s). This ensures that only documented changes made via the release process take place.
  • Test: This environment is used for functional testing by the test staff. As with all controlled environments, developers have no administrative access to the servers and have only read access to the database(s). Since the deployment has been verified in the previous environment (with the exception of environment-specific configuration and data changes), the chance that testing will be delayed by a bad release is greatly reduced.
  • UAT/Training: This environment is optional, based on the application and the preferences of the business owner(s). For those applications that use it, it allows for User Acceptance Testing and/or training to take place without impacting any functional testing that may still be under way.

These environments should share the same hardware architecture as the production environment, but need not be exact clones. For example, an application that consists of two web farms (one internal, one in the DMZ) and a common database server can have its pre-production needs adequately served by a single database server and ten (fourteen if you include UAT/Training) web servers. Ideally, the database server should run a separate instance for each of the six (or eight) environments, but as long as the database name(s) are configurable, they could all be handled by a single instance if absolutely necessary. The environments would look as follows:

  • Development – Current: database instance only. Future: database instance only.
  • DevTest – Current: internal web server, external web server, database instance. Future: internal web server, external web server, database instance.
  • Test – Current: internal web server, external web server, database instance. Future: two internal web servers, two external web servers, database instance.
  • UAT/Training – Current: internal web server, external web server, database instance. Future: internal web server, external web server, database instance.
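
To make the configurable-database-name point concrete, here is a minimal C# sketch. The environment labels mirror the table above; everything else (the class, the server name, the database names) is hypothetical:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical helper: one physical database instance hosts all of the
// pre-production databases, so only the database name varies by environment.
public static class EnvironmentConnections
{
    private static readonly Dictionary<string, string> DatabaseNames =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "DevTest-Current", "OrdersDb_DevTest_Current" },
            { "Test-Current",    "OrdersDb_Test_Current" },
            { "DevTest-Future",  "OrdersDb_DevTest_Future" },
            { "Test-Future",     "OrdersDb_Test_Future" },
            { "UAT-Future",      "OrdersDb_UAT_Future" }
        };

    public static string ConnectionStringFor(string environment)
    {
        string databaseName;
        if (!DatabaseNames.TryGetValue(environment, out databaseName))
        {
            throw new ArgumentException("Unknown environment: " + environment);
        }

        // Same server, different database per environment.
        return "Server=preprod-sql01;Database=" + databaseName + ";Integrated Security=true";
    }
}
```

In practice, the mapping would live in environment-specific configuration owned by the release management team rather than in code, so that promoting a build changes a lookup value instead of requiring a code change.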

If the production environment is load balanced, then that must be accounted for in at least one environment since it can lead to functional issues (losing web session state if the balancing isn’t set up properly is a classic one). My practice is to do so in the Test Future environment since the most comprehensive functional testing occurs there and it is on the branch where new functionality is introduced.

I would imagine that some might have choked on the 10-14 web servers. Remember, however, that absent conflicting dependencies, these environments can be shared across multiple applications, and virtualization technology can drastically reduce the number of physical boxes needed. Cloud computing (infrastructure as a service) could also reduce infrastructure costs significantly.

The last step is to make the process as smooth as possible. Practice makes perfect, automation makes it more so. Releases should be boring.

Specialists, Generalists and “Blame the Victim”

An article I read this weekend on TechRepublic caught my eye with its title: “Insist medical practices hire specialists rather than one generalist”. The author (Donovan Colbert) discusses a post by another author (Erik Eckel), who had been called in to clean up a mess left by a previous IT provider. That provider had incurred a significant cost for the client and failed to correct the issue. Donovan summed it up thusly:

What kind of business throws a Raid 10, 32 GB, SSD, dual CPU/multi-core server at a performance problem and yet has a network that some hack threw together with daisy-chained D-Link unmanaged switches? A healthcare provider.

Wow. Blame the victim much? He goes on to say:

Erik claims the problem is amateur IT consultants, but I think that is a symptom. The problem is healthcare professionals; more specifically, the doctors who run the medical practices cause many of these issues.

I wonder what doctor confronted with a specific and serious medical ailment would consider the services of a one-stop general practice to perform the necessary services to restore her to health. No MD would go in for surgery if one guy claimed he was capable of doing it all — the cutting, the clamping, the anesthetics, and any other specialties required to operate; and yet, doctors will hire one consultant to handle their entire IT solution, including systems, applications, networking, wireless, hardware, and printers.

I’m not sure what it is about health care professionals that pushes Donovan’s buttons, but it seems to me that Erik is right on the money. No doctor (or anyone else, for that matter) is going to depend on a generalist for specialist procedures, but most people don’t have a stable full of specialists on call, either. They start out with a general practitioner, who will refer them out to specialists as appropriate. That same model should work for IT as well.

Just as the patient (in most cases) isn’t a doctor, the doctor isn’t an IT person. Expecting a doctor, or, for that matter, any non-IT business person, to independently manage their IT services is as ridiculous as expecting a patient without any medical background to independently manage their own care.

The issue in this instance appears to be one of professionalism. The prior provider lacked the requisite knowledge and failed their customer by not calling in someone who had the skills needed. Worse, that provider caused the client to pay for something that wasn’t needed. Ironically, it was Donovan himself who identified the answer:

My gift as an IT professional is not how outrageously skilled and knowledgeable I am from one end of the IT spectrum to another; my gift is that I know enough to realize when I am out of my scope, and to say, “we need to bring in someone else who specializes in this area to deal with this issue.”

Connecting with your Customers

An article posted Sunday on Fresh Brewed Code caught my eye. According to the author:

I recently wrapped up a project that afforded me an opportunity I had never had before as a software developer. I sat with my end users every day. I lived with them. This may sound like a bad idea, and in many cases it might be, but in this particular context I loved it. I have never had such a tight feedback loop. I could push out a new feature and within five minutes hear someone down the row yell, “Hey, look at this!”

This contains a wealth of lessons for those involved in software development:

  • Contact with those using the product is critical to understanding their needs: Obviously, the author’s situation, being co-located with his user base, isn’t something that would work universally. It doesn’t scale across large teams or large user populations, and the potential interruptions aren’t conducive to productivity. That being said, it’s obvious that some close-up contact with those actually working with the product is important, even if it’s not the entire development team doing so. Observing the product “in the wild” enables you to see where the product helps and where it hinders the users. It allows you to see opportunities for enhancement that you might not otherwise find out about.
  • Users that feel that their needs are important are more likely to become active partners in the process: If people feel that their concerns aren’t being heard, most won’t bother providing feedback. Some, however, will still provide feedback, just of a shriller nature. People who are engaged will tend to provide more constructive feedback, making it easier to meet their needs.
  • Grateful users lead to motivated development teams: Satisfied users tend to be grateful users (really, I’ve actually experienced this and it’s a wonderful thing). Sometimes they’re so grateful that you have to push them to ask for more.
  • Motivated teams work harder for their customers: In spite of the stereotypes, technical types do respond to positive reinforcement. It’s a powerful motivator when your efforts are recognized and appreciated. Professionals should (and do) provide their best efforts regardless, but the applause certainly makes it easier.

Economists call a self-perpetuating cycle of benefits a virtuous circle. Producers and consumers of software might call it an ideal situation.

So what exactly does an Architect do?

It’s not unusual in the IT world to have difficulty explaining what you do to your parents. It is, however, a cause for concern when your peers, even your superiors, don’t have a clear understanding. Unfortunately, that’s the case with architects right now.

Part of the problem is terminology. Terms like Enterprise Architect, Solutions Architect, Technical Architect, Information Architect, Business Architect, Database Architect, Software Architect, Network Architect, Systems Architect, User Experience Architect, Security Architect, Application Architect, and Infrastructure Architect are thrown around without authoritative definition and with little agreement as to what they mean. I’ve yet to see a comprehensive taxonomy, and I won’t be attempting one here. Instead, I hope to highlight some recent work in the area and contribute something of my own to help increase understanding.

The best place to start is with some definitions. As I noted in the blog’s inaugural post, Wikipedia defines “architecture” as “…both the process and product of planning, designing and construction”. Merriam-Webster’s second definition for “architect” is “a person who designs and guides a plan or undertaking”. These definitions, in my opinion, apply regardless of the type of architect being described. All architects plan, design, and guide.

I’ve found three articles that, in my opinion, do a good job of illustrating how different architect roles relate to each other and what those roles do: IASA Sweden’s Study of Architect Roles, ArchiTech Consulting LLC’s “Enterprise Architecture .vs. Collection of Architectures in Enterprise”, and Igor Lobanov’s “Differences Between Architecture Roles”. I conclude with my thoughts on the subject.

IASA Sweden’s Study of Architect Roles

The Sweden chapter of Iasa (originally the International Association of Software Architects) commissioned a study of Architect Roles that was published in the April 2008 edition of The Architecture Journal. The article dealt with four roles: Enterprise Architect, Business Architect, Solution Architect, and Software Architect. The roles are described as overlapping and differing in depth and breadth of knowledge as well as scope of interest. Unfortunately, the article ignores two architectural domains: Technology Architecture and Information Architecture. Additionally, the description of the Enterprise Architect is too IT-centric.

ArchiTech Consulting LLC’s “Enterprise Architecture .vs. Collection of Architectures in Enterprise”

“Enterprise Architecture .vs. Collection of Architectures in Enterprise” by Yan Zhao, PhD, was posted on the ArchiTech Consulting LLC blog on November 21. While the article was mainly focused on the Zachman Framework and how it defines a collection of architectures within an enterprise, it also illustrated the relationship of architectures (and by extension, architects) within that framework. While it leaves out the overlap, it does a much better job of showing the level at which each role works and it includes the Information Architecture domain.

Igor Lobanov’s “Differences Between Architecture Roles”

Igor Lobanov posted “Differences Between Architecture Roles” on his blog on September 19. While it doesn’t touch on the Business or Information Architecture domains, it provides the best visual representation of the depth, breadth, and scope of Enterprise, Solution, and Technical Architects. He also captures the overlap inherent in the roles. In his model, the Technical Architect (correctly, in my opinion) encompasses both the Application Architecture and Technology Architecture domains. Additionally, Igor’s descriptions of the three roles are excellent, detailing the concerns that each role deals with.

My Thoughts

The fact that we are dealing with roles rather than positions complicates matters, because one person can undertake many roles. This is why the overlap aspect is one I consider important. I would hazard a guess that most Solution Architects serve double duty as Technical Architects (generally Application Architects) in small and medium-sized organizations. For that matter, in many cases the Solutions Architect/Application Architect may also double as a senior developer/technical lead (which has both positive and negative effects, but that’s a post for another day). As shown by all three articles, however, the roles share common characteristics differing mainly by depth and breadth of knowledge as well as scope of interest.

For the most part, Information Architecture and Business Architecture receive short shrift in the articles linked to. This is unfortunate as these domains are becoming increasingly important (Information Architecture in particular). I suspect, given recent trends, that those roles will become more common and better defined in the very near future.

Security Architecture is another neglected, but critical area. There’s still some debate as to whether it is another domain or a cross-cutting concern. That’s a debate I won’t be able to settle here.

My concept is a blend of the three articles I’ve referenced. From a high level, I like the overlapping circles of the Iasa model, but I see the Enterprise Architect as encompassing all four domains (Business, Information, Application, and Technology), with the Solutions Architect occupying a similar role on a smaller scale (fewer domains and less organizational scope). The remaining roles I listed, in my opinion, shake out as follows:

  • Technical Architect – this is more a category than a role; both Application Architects and Technology Architects would be considered Technical Architects.
  • Software Architect – a synonym for Application Architect.
  • Infrastructure Architect – a synonym for Technology Architect.
  • Network Architect – a specialization of the Infrastructure Architect role.
  • Systems Architect – Wikipedia’s definition is synonymous with Solutions Architect, but the usage I’ve seen more commonly is that of an Infrastructure Architect dealing with servers and operating systems.
  • User Experience Architect – a specialization of the Application Architect dealing with user interface design and usability.
  • Database Architect – I view this role as a specialization of the Application Architect (big surprise: an Application Architect views the database as a component of the application), but I see it commonly falling into the Infrastructure space.

There are, of course, roles that I’ve left out. SOA Architects, Storage Architects and Active Directory/LDAP Architects come to mind, but I’ll leave it as an exercise for the reader to fit them into the framework above. It’s my hope that the framework itself is useful for understanding what to expect from these roles.

Unintended Inheritance and the sealed Keyword

Inheritance occupies a unique place in object orientation. It’s seen both as a defining characteristic and as something to be avoided whenever possible (according to James Gosling, inventor of Java). The Gang of Four, in their seminal work “Design Patterns: Elements of Reusable Object-Oriented Software”, referred to inheritance as a threat to encapsulation and stated that object composition should be preferred to class inheritance.

There are many reasons for this conflicted relationship. While inheritance provides a powerful and simple mechanism for code reuse and extension, it has spawned a collection of anti-patterns and issues, from the yo-yo problem (bouncing up and down through an inheritance hierarchy to trace behavior) to the fragile base class problem (where apparently trivial changes to the base class break derived classes). Both the .Net languages and Java even provide the means to prevent inheritance (via the sealed keyword in C#, NotInheritable in VB.Net, and final in Java).
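
For reference, here is a minimal C# sketch (the class names are hypothetical) showing both forms: sealing an entire class and sealing a single override.

```csharp
// A class sealed at the type level: no one can derive from it.
public sealed class AuditLogger
{
    public void Write(string message)
    {
        System.Console.WriteLine(message);
    }
}

// public class CustomLogger : AuditLogger { }  // compile-time error: cannot derive from a sealed type

public class ReportBase
{
    public virtual string Render()
    {
        return "base report";
    }
}

public class PdfReport : ReportBase
{
    // Sealing a single override: PdfReport itself can still be subclassed,
    // but Render() cannot be overridden further down the hierarchy.
    public sealed override string Render()
    {
        return "PDF report";
    }
}
```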

In spite of the many potential pitfalls with inheritance, it remains a powerful and useful feature. The keys to using inheritance successfully are planning and discipline on the part of the designer of the base class and responsibility on the part of the consumer.

In designing a class for reuse, thought must be given to how a derived class will use (and possibly abuse) the base class and its members. This will help determine which members should be virtual, abstract (pure virtual), or sealed. Maintaining as much encapsulation as possible (by limiting the use of protected members) is likewise critical to providing robust base classes. Documentation allows the consumer to make better-informed decisions about how their derivations should interact with the base class. Lastly, the more widely used the base class will be, the more important adherence to the open/closed principle becomes.
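
As a rough illustration of those guidelines (the class and its members are hypothetical, not taken from any particular framework), a base class designed for inheritance might look like this in C#:

```csharp
using System;
using System.Collections.Generic;

// The variable step is abstract, an optional hook is virtual, state stays
// private, and the overall workflow is non-virtual so its invariants cannot
// be bypassed by a derived class.
public abstract class MessageProcessor
{
    private readonly List<string> _processed = new List<string>();

    // Required extension point: derived classes supply only the step that varies.
    protected abstract string Transform(string message);

    // Optional hook with a safe default, rather than exposing internals as protected.
    protected virtual void OnProcessed(string result)
    {
    }

    // The workflow itself is not virtual: the null check and bookkeeping always run.
    public string Process(string message)
    {
        if (message == null)
        {
            throw new ArgumentNullException("message");
        }

        string result = Transform(message);
        _processed.Add(result);
        OnProcessed(result);
        return result;
    }

    // Derived classes can inspect the history, but they never get the mutable list.
    protected IReadOnlyList<string> ProcessedHistory
    {
        get { return _processed.AsReadOnly(); }
    }
}
```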

The consumer of a base class also plays a crucial part in ensuring that it is used successfully. Consider an example of the fragile base class problem:

Suppose that a class Bag is provided by some object-oriented system, for example, an extensible container framework. In an extensible framework user extensions can be called by both the user application and the framework. The class Bag has an instance variable b : bag of char, which is initialized with an empty bag. It also has methods add inserting a new element into b, addAll invoking the add method to add a group of elements to the bag simultaneously, and cardinality returning the number of elements in b.

Suppose now that a user of the framework decides to extend it. To do so, the user derives a class CountingBag, which introduces an instance variable n, and overrides add to increment n every time a new element is added to the bag. The user also overrides the cardinality method to return the value of n which should be equal to the number of elements in the bag. Note that the user is obliged to verify that CountingBag is substitutable for Bag to be safely used by the framework.

After some time a system developer decides to improve the efficiency of the class Bag and releases a new version of the system. An “improved” Bag1 implements addAll without invoking add. Naturally, the system developer claims that the new version of the system is fully compatible with the previous one. It definitely appears to be so if considered in separation of the extensions. However, when trying to use Bag1 instead of Bag as the base class for CountingBag, the framework extender suddenly discovers that the resulting class returns the incorrect number of elements in the bag.

(From “A Study of The Fragile Base Class Problem”, Leonid Mikhajlov and Emil Sekerinski, November 1998)
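
To make the quoted scenario concrete, here is a rough C# translation. The class and method names follow the paper’s example; the implementation details are my own.

```csharp
using System.Collections.Generic;

// Version 1 of the framework class: AddAll() is implemented in terms of Add().
public class Bag
{
    protected readonly List<char> Items = new List<char>();

    public virtual void Add(char c)
    {
        Items.Add(c);
    }

    public virtual void AddAll(IEnumerable<char> newItems)
    {
        foreach (char c in newItems)
        {
            Add(c);  // the internal detail the extender ends up relying on
        }
    }

    public virtual int Cardinality()
    {
        return Items.Count;
    }
}

// The extender's class: it counts elements by intercepting Add().
public class CountingBag : Bag
{
    private int _count;

    public override void Add(char c)
    {
        _count++;
        base.Add(c);
    }

    public override int Cardinality()
    {
        return _count;
    }
}

// The "improved" version (Bag1 in the paper) ships a faster AddAll() that
// bypasses Add():
//
//     public virtual void AddAll(IEnumerable<char> newItems)
//     {
//         Items.AddRange(newItems);  // no longer routed through Add()
//     }
//
// Nothing in the published contract changed, yet CountingBag.Cardinality() now
// under-reports anything added via AddAll(): the fragile base class problem.
```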

Unless there was a documented contract that addAll would always use add, the break detailed above was due to the extender assuming that the internals of Bag would never change. Likewise, the Liskov Substitution Principle requires that subclasses represent an “Is A” relationship (any code expecting an instance of the superclass should be able to receive an instance of the subclass without breaking). Extending the base class by adding new members to the subclass is acceptable. Attempting to restrict it, such as by overriding a method to throw a NotImplementedException, violates the Liskov Substitution Principle. This “Is Kinda Sorta A” relationship is almost guaranteed to lead to problems.
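
A short sketch of that “Is Kinda Sorta A” relationship, using hypothetical types:

```csharp
using System;
using System.Collections.Generic;

public class Document
{
    public virtual void Save(string path)
    {
        // write the document to disk (details omitted)
    }
}

// "Is Kinda Sorta A" Document: the subclass narrows the inherited contract by
// refusing an operation every caller of Document is entitled to expect.
public class ReadOnlyDocument : Document
{
    public override void Save(string path)
    {
        throw new NotImplementedException("Read-only documents cannot be saved.");
    }
}

public static class Archiver
{
    // Written against the base class contract; a ReadOnlyDocument in the
    // collection turns a compile-time-clean call into a runtime failure.
    public static void Archive(IEnumerable<Document> documents, string path)
    {
        foreach (Document document in documents)
        {
            document.Save(path);
        }
    }
}
```

Nothing fails until a ReadOnlyDocument finds its way into a collection of Documents, which is exactly what makes the violation so insidious.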

It’s common sense that good design does not happen by accident. Accordingly, inheritance should be by design rather than by default. Liberal use of the sealed keyword can certainly help head off problems with unintended inheritance.

I will note that this isn’t a universal opinion. Brian Button, of MS Patterns and Practices fame, has been reported to dislike the sealed classes in the .Net Framework:

Brian shares my frustration about sealed classes in the .Net Framework. He has encountered parts of the framework that are sealed, and when he needs to extend them, he can’t. Sealed classes are hard to write testable code against. He made a good point that I hadn’t thought of before: If you seal a class, you are saying “I can predict the future, and this is all that this class will ever need to do.”

(from “Blogging from Tech Ed 2006 – Brian Button on how to write frameworks – level 300”)

In defense of my opinion, however, I will note this: a sealed class can be unsealed at some point in the future without breaking existing code. Once unsealed, the genie cannot be put back in the bottle without breaking derived classes. In this respect, an unsealed class is much more a prediction of the future than the sealed one.