Pride, Prejudice, and Professionalism in the Business of IT

interior of a 1958 Plymouth Savoy

Twenty-plus years in IT have led me to believe that there are very few absolutes when it comes to software systems. Two that do seem to hold true are these:

  1. Creating systems is esteemed far more highly than maintaining systems.
  2. Systems that are not maintained will decay.

There are a variety of reasons for this situation, many of which are baked into the architecture of the enterprise. Regardless of the why, however, the two facts remain. Without a response to those issues, entropy is inevitable.

Over the past few days, I’ve seen several blog posts by two different authors dealing with this situation in two different ways:

Jason complains about the non-technical “leadership class” in his first post:

And hence we get someone making the big decisions about healthcare who knows nothing about medicine or about running hospitals or ambulance services. And we get someone in charge of all the schools who knows nothing about teaching or running a school. And we get someone in charge of a major software company whose last job was being in charge of a soft drinks company. And so on.

Again, this is fine, if they leave the technical decisions to the technical experts. And that’s where it all falls down, of course. They don’t.

The guy in charge of the NHS insists on telling doctors and nurses how they should do their jobs. The woman in charge of UK schools insists on overriding the expertise of teachers. The guy in charge of a major software company refuses to listen to the engineers about the need for automated testing. And so on.

This is the Dunning-Kruger effect writ large. CEOs and government ministers are brimming with the overconfidence of someone who doesn’t know that they don’t know.

In his second post he follows up with how to respond:

My pithy – but entirely serious – advice in that situation is Do It Anyway™.

There are, of course, obligations implied when we Do It Anyway™. What we’re doing must be in the best interests of stakeholders. Do It Anyway™ is not a Get Out of Jail Free card for any decision you might want to justify. We are making informed decisions on their behalf. And then doing what needs to be done. Y’know. Like professionals.

I disagree. Strenuously.

If you go to the doctor and they tell you that you will need surgery at some point for some condition, would you expect to be forcibly admitted and operated on immediately?

If you were charged with a crime, would you expect your attorney to accept a plea bargain on your behalf without consultation or prior permission?

If neither of those professionals would usurp the right of their client/patient to make their own informed decision, why should we? Both of those examples would be considered malpractice and the first would be criminal assault in addition. Therefore, I disagree that acting on someone’s behalf without their knowledge or consent is a viable option.

John’s approach, rejecting helplessness and confronting the issues by communicating the costs (with justifications and evidence), is, in my opinion, the truly professional approach. We have a responsibility to make the problem visible and to keep making it visible. We also have a responsibility to operate within the limits we’re given. We may know far more about our own area than someone higher up the management chain, but that does not equate to knowing more in general than they do. Ignorance is relative. Micro-managing, getting deeper into the weeds than you need to, is ineffective. If, however, you’re down in the weeds, do you have the information necessary to say that the issue being “interfered with” is one without higher-level consequences? Dunning-Kruger can cut a wide swathe. Trust needs to cut both ways.

Imagine riding as a passenger in a car. You see the car drifting closer and closer to the shoulder. Do you point it out to the driver or do you just grab the wheel? You might prevent an accident or you just might cause one by steering into a vehicle coming up from behind that you didn’t see from your vantage point.

[Plymouth Savoy photo by Christopher Ziemnowicz via Wikimedia Commons]

Regulating Software Development

'Belvidere Street construction, pouring concrete', Library of Virginia

Another weekend, another too-good-to-pass-up Twitter conversation during my “unplugged” time. This weekend, Grady Booch hooked me by retweeting Mike Potts’ tweet:

Mike’s tweet was a reply to Grady’s comment on the latest news out of Uber:

It’s an understandable question. It’s a reasonable question. It’s one that came up back during the healthcare.gov fiasco and it’s one raised by Volkswagen’s recent criminal misconduct.

However, when contemplating fixing a problem, we need to be extremely mindful of the potential for creating harm as a result of the “fix”. In particular, we should be wary of creating harm out of proportion to any good we do (i.e. we don’t want to kill roaches by burning down the house). I chose the image at the top to illustrate something key to this discussion – changing laws (the software of our meta-enterprise) is only slightly harder than moving a roadway once laid down.

Now for the caveats:

  • I do my utmost to avoid politics on this site – I really doubt you’re looking to me for guidance or even just my opinion. I’m not intending this post as a political statement. I’m not asserting that government is never the answer, merely that it’s a rather blunt instrument that we need to use with care.
  • I agree with Grady and Mike that those who took part in this are a disgrace. Moreover, I believe everyone involved, top to bottom, needs to be prosecuted and, if convicted, punished to the fullest extent of the law.
  • My tl;dr position is this: if we have regulation, it should be effective and without avoidable harmful side effects.

As I noted above, it’s human nature to respond to problems with some proposal to fix them. It also seems to be human nature to respond in a manner that doesn’t necessarily deal with the issue from a systemic perspective. We tend to allow ourselves to concentrate on the need to “do something” and ignore the hard work of making sure that what we do is effective (and doesn’t cause further problems). In other words, we put band-aids on bullet wounds.

In both the case of VW and Uber, the conduct alleged is criminal. We could pass new laws making it a crime to commit a crime, but that seems to be an exercise in recursive futility. If the potential penalty in the first case was insufficient to induce compliance, should we really believe adding another layer will make it better?

An element that’s present in both cases is that the illegal conduct involves creating software to help avoid detection of the fact that the company was breaking another law. Regulatory pressures coupled with a corrupt culture can create perverse incentives to cheat. This does not in any way excuse the conduct, particularly in the case of VW. It is, however, one of the systemic factors that should be taken into account.

In my experience, the most effective compliance program is one where compliance is the path of least resistance. Self-imposed compliance is bound to be more effective than compliance enforced from outside. Corrupt agents will still violate the rules, but ideally you want to make it so that the lazy way out is the desired behavior.
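To put that principle in concrete IT terms, here is a minimal, hypothetical sketch of “compliance as the path of least resistance”: an internal helper whose default behavior is the compliant one, so that doing the right thing takes no extra effort while opting out is explicit and visible. The fetch helper and its allow_insecure flag are illustrative assumptions, not anything drawn from the articles discussed here.

```python
import ssl
import urllib.request


def fetch(url: str, *, allow_insecure: bool = False) -> bytes:
    """Fetch a URL with TLS certificate verification on by default.

    Hypothetical sketch: the compliant path is the zero-effort default;
    the non-compliant path still exists, but it is deliberate and noisy.
    """
    if allow_insecure:
        # Opting out requires an explicit flag and leaves a visible trace.
        print(f"WARNING: certificate verification disabled for {url}")
        context = ssl._create_unverified_context()
    else:
        context = ssl.create_default_context()
    with urllib.request.urlopen(url, context=context) as response:
        return response.read()
```

The point is not the specific API; it is that the lazy call, fetch(url), is also the compliant one.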

Another aspect of regulation that comes up is the idea of professional standards similar to those of attorneys, accountants, and doctors. Increasing the level of professionalism is laudable, but would it be an effective response to the issue of criminal misconduct? Additionally, assuming it was legally enforced, what would the cost be? Everything from administration of the program to salary increases would introduce new costs and would likely affect the pace of innovation (due to the impact on both supply and demand). Again, without justifying the conduct, what was Uber’s motivation to develop its code to defeat detection by regulators?

I can well imagine other potential issues with a regulatory regime that requires a license to code. Not only would commercial innovation suffer, but the effects on the Open Source community could be disastrous if the licensing regime were expensive.

Doing “something” is easy. Doing something effective is a bit harder. I’m all aboard for punishing the guilty (each and every one), but we should move carefully when considering actions that might be more difficult to undo.

Volkswagen and the Cost of Culture

Hand holding a wad of cash

Thanks to Volkswagen, we now have an idea of the cost of failing to maintain an ethical culture – roughly $18 billion US (emphasis in the quoted text below added by me):

Volkswagen’s financial disclosure on Friday, in a preliminary earnings report, came a day after the company agreed on the outlines of a plan to settle some legal claims in the United States, which would include giving owners of about 500,000 affected vehicles the option of selling the cars back to the company or having them repaired.

Volkswagen is still negotiating the size of the fines it will pay to the United States government for violations of clean-air laws, as well as how much additional compensation it will provide to owners. The money set aside by the company on Friday provides an indication of what Volkswagen expects the total global costs of the scandal to be, although the figure could rise further.

Since the scandal broke in September 2015, the news has steadily worsened. Last December, Volkswagen’s chairman admitted that the cheating was not an isolated lapse:

…the decision by employees to cheat on emissions tests was made more than a decade ago, after they realized they could not meet United States clean air standards legally.

Hans-Dieter Pötsch, the chairman of Volkswagen’s supervisory board, said the cheating took place in a climate of lax ethical standards.

“There was a tolerance for breaking the rules,” Mr. Pötsch said here on Thursday during his first lengthy news conference since the company admitted in September that 11 million cars with diesel engines were rigged to fool emissions tests.

The explanation from Volkswagen’s executive leadership at the time:

Mr. Müller and Mr. Pötsch conceded that the deception reflected organizational shortcomings.

For example, the people who developed the software were the same ones who approved it for use in vehicles. At other companies, it is standard practice for one team to develop components and another to check them for quality. Volkswagen said it would correct those procedures.

Mr. Müller also said he wanted to change the company’s culture so that there was better communication among employees and more willingness to discuss problems. His predecessor, Martin Winterkorn, who resigned after the scandal, was criticized for creating a climate of fear that made managers afraid to admit mistakes.

“We don’t need yes men,” Mr. Müller said, “but managers and engineers who make good arguments.”

I would argue that what’s needed more than “good arguments” is a corporate culture where it’s understood that refusing to break the law is not only allowed, but expected. Given that the size of the loss reserve has more than doubled since then, perhaps they’ve realized that now as well.

What is not needed, however, is the traditional response to high-profile issues: layering on additional ad hoc rules and regulations with an eye toward making sure this “never happens again”. For one thing, there’s no indication that anyone involved was unaware that this behavior was wrong. Additional compliance theater is unlikely to improve anything in that respect, and may actually cause new problems in addition to exacerbating the root problem: VW’s culture.

A recent study (reported on in The Atlantic) by Simon Gächter and Jonathan Schulz of the University of Nottingham reports that corrupt cultures breed corruption. In this study, they:

…asked volunteers from 23 countries to play the same simple game. The duo found that participants were more likely to bend the game’s rules for personal gain if they lived in more corrupt societies. “Corruption and fraud are things going on in the social environment all the time, and it’s plausible that it shapes people’s psychology, what they can get away with,” says Gächter. “It’s okay! Everybody does it around here.”

This study also has implications for Volkswagen’s ability to fix the problem:

Causality could eventually flow in the other direction. “If people are dishonest or think it’s okay to violate rules, it would also be harder to fight corruption and install institutions that work,” says Gächter. “In the long run, these things move together. But to show that, you’d need a 20 year project measuring this on an annual basis.”

Volkswagen, however, probably does not have twenty years to fix their problem. In the US alone, VW will have to fix or buy back (at the owner’s option) over 500,000 vehicles. I suspect VW’s reputation is severely impaired with those who opt to have their vehicles repaired, and I would be willing to bet that the majority of those who sell their cars back won’t be returning as customers. This loss for Volkswagen is only the beginning of their financial problems, and it could all have been avoided.

Back in November, Matt Ballantine floated an interesting (and very plausible) theory:

https://twitter.com/jetpack/status/667670950540943360

That may well turn out to be the case, but I also have to agree with what Grady Booch tweeted when the scandal first broke:

There’s plenty of blame to go around, but ultimately I believe only a systemic fix, top to bottom, will have any chance of correcting the problem (not that I’d be willing to give VW very good odds on remaining in business long enough for that to take effect). Their value going forward may be to serve as empirical confirmation of Gächter and Schulz’s work. Their bad example may serve as a wake-up call for others to pay attention to the culture they’ve fostered (and are fostering) before their employees, innocent and guilty alike, pay the price.

“Want Fries with That?”

Hamburger and French Fries

Greger Wikstrand and I have been trading posts about architecture, innovation, and organizations as systems (a list of previous posts can be found at the bottom of the page) for quite a while now. His latest, “Technology permeats innovation”, touches on an important point – the need for IT to add value and not just act as an order taker.

It’s funny how this series of innovation posts keeps taking me back to posts from the early days of this blog. In my last post, “Accidental Innovation?”, I referred to my very first post, “Like it or not, you have an architecture (in fact, you may have several)”. Less than a month after that first post, I published “Adding Value”, which had the exact same theme as Greger’s post: blindly following orders without adding value (in the form of technical expertise) is not serving your customer. In fact, failing to bring up concerns is both unprofessional and unethical. Acceding to a request that you know will harm your customer without pushing back is tantamount to sabotage.

Innovation involves multiple disciplines. In a recent tweet, Brenda Michelson illustrated this important truth in the context of digital technology:

https://twitter.com/jetpack/status/709770731282784256

Both Brenda and Greger make the same point – successful innovation is a team effort. In fact, using Scott Berkun’s definition of the word, it’s redundant to say “successful innovation”:

If you must use the word, here is the best definition: Innovation is significant positive change. It’s a result. It’s an outcome. It’s something you work towards achieving on a project. If you are successful at solving important problems, peers you respect will call your work innovative and you an innovator. Let them choose the word.

In a recent series of posts, Casimir Artmann noted that innovation comes in many forms: improving existing products, developing new products, and finding better ways to work. Often, as shown in his examples of innovation in music, photography, and telephony, innovation comes from a combination of these forms. He sums it up this way:

Regardless if we talk about innovation for existing products, new products or new ways of working, inventions in technology is one of the drivers.

Internet of Things, Cloud, Autonomous devices, Wearables, Big Data etc, are all enablers for innovation in the organisations. The challenge is to find out the benefit our clients customers will have from these technology enablers.

Meeting that challenge requires integrating the expertise of both business and IT. Innovation and value aren’t picked from a menu and served up at a drive-through.

Previous posts in this series:

  1. “We Deliver Decisions (Who Needs Architects?)” – I discussed how the practice of software architecture involves decision-making, combining analysis with the situational awareness needed to deal with emergent factors and avoid cognitive biases.
  2. “Serendipity with Woody Zuill” – Greger pointed me to a short video of him and Woody Zuill discussing serendipity in software development.
  3. “Fixing IT – Too Big to Succeed?” – Woody’s comments in the video re: the stifling effects of bureaucracy in IT inspired me to discuss the need for embedded IT to address those effects and to promote better customer-centricity than what’s normal for project-oriented IT shops.
  4. “Serendipity and successful innovation” – Greger’s post pointed out that structure alone is insufficient to promote innovation; organizations must be prepared to recognize and respond to opportunities, and innovation must be able to scale.
  5. “Inflection Points and the Ingredients of Innovation” – I expanded on Greger’s post, using WWI as an example of a time where innovation yielded uneven results because effective innovation requires technology, understanding of how to employ it, and an organizational structure that allows it to be used well.
  6. “Social innovation and tech go hand-in-hand” – Greger continued with the same theme, the social and technological aspects of innovation.
  7. “Organizations and Innovation – Swim or Die!” – I discussed the ongoing need of organizations to adapt to their changing contexts or risk “death”.
  8. “Innovation – Resistance is Futile” – Continuing on in the same vein, Greger points out that resistance to change is futile (though probably inevitable). He quotes a professor of his that asserted that you can’t change people or groups, thus you have to change the organization.
  9. “Changing Organizations Without Changing People” – I followed up on Greger’s post, agreeing that enterprise architectures must work “with the grain” of human nature and that culture is “walking the walk”, not just “talking the talk”.
  10. “Developing the ‘innovation habit’” – Greger talks about creating an intentional, collaborative innovation program.
  11. “Innovation on Tap” – I responded to Greger’s post by discussing the need for collaboration across an organization as a structural enabler of innovation. Without open lines of communication, decisions can be made without a feel for customer wants and needs.
  12. “Worthless ideas and valuable innovation” – Greger makes the point that ideas, by themselves, have little or no worth. It’s one thing to have an idea, quite another to be able to turn it into a valuable innovation.
  13. “Accidental Innovation?” – I point out that people are key to innovation. “Without the people who provide the intuition, experience and judgement, we are lacking a critical component in the system.”
  14. “Technology permeats innovation” – Greger talks about how tightly coupled innovation and technology are and the need for IT to actively add value to the process.

Engineer, Get Over Yourself

Tacoma Narrows Bridge Collapse

Ian Bogost’s “Programmers: Stop Calling Yourselves Engineers” in The Atlantic claims that “the title ‘engineer’ is cheapened by the tech industry.” He goes on to state:

When it comes to skyscrapers and bridges and power plants and elevators and the like, engineering has been, and will continue to be, managed partly by professional standards, and partly by regulation around the expertise and duties of engineers. But fifty years’ worth of attempts to turn software development into a legitimate engineering practice have failed.

Those engineering disciplines are subject to both professional and legal regulations, it’s true. That being said, bridges, buildings, and power plants (as well as power grids) are not immune to failures. Spectacular failures in regulated disciplines, even when they spur changes, can still recur.

Fifty years might seem like a long time, but compared to the history of structural engineering, it’s nothing. Young disciplines have a history of behavior that, in retrospect, seems insanely reckless (granted, he wasn’t a nuclear engineer, but at the time there weren’t any). Other disciplines, now respectable, have been in the same state previously.

Bogost complains about the lack of respect for certifications and degrees, but fails to make a case for their relevance. He even notes that the Accreditation Board for Engineering and Technology’s “accreditation requirements for computer science are vague”. Perhaps software development is too diverse (not to mention too much in flux) for a one-size-fits-all regulatory regime. Encouraging the move toward rigor, even if the pieces aren’t in place for the same style of regulation as older engineering disciplines, seems a better strategy than sneering about who should be allowed to use a title.

I’m all for increasing the quality of software development, especially for those areas (e.g. autonomous cars) that have life-safety implications. When I’m in the crosswalk, I’d prefer that the developer(s) of the navigation system considered themselves, and conducted themselves as, software engineers rather than craftsmen. By the same token, I’d prefer that the consideration be a function of real rigor and professional attitude rather than ticking boxes.

[h/t to Grady Booch for the link]

First Do No Harm – the Practice of Software Development

Medieval Anatomy Illustration

Analogies are never perfect, but reading Erik Dietrich’s “Do Programmers Practice Computer Science?” brought one to mind. Software development has much in common with the practice of medicine. Software development, like medicine, involves the application of knowledge. Also like medicine, this application is made complex by considerations of context. Yet another commonality is that in both disciplines there are (or, at least, should be) limits regarding experimentation.

Erik’s post used the following comparison of developers to electricians:

Let’s consider three actors in the realm of physics, as a science.

  1. A physicist, who runs electricity through things to see if they explode.
  2. An electrical engineer, who takes the knowledge of what explodes from the physicist and designs circuitry for houses.
  3. An electrician, who builds houses using the circuits designed by the electrical engineer.

I list these out to illustrate that there are layers of abstraction on top of actual science. Is an electrician a scientist, and does the electrician use science? Well, no, not really. His work isn’t advancing the cause of physics, even if he is indirectly using its principles.

Let’s do a quick exercise that might be a bit sobering when we think of “computer science.” We’ll consider another three actors.

  1. Discrete mathematician, looking to win herself a Fields medal for a polynomial time factoring algorithm.
  2. R&D programmer, taking the best factoring algorithms and turning them into RSA libraries.
  3. Line of business programmer, securing company’s Sharepoint against script kiddies uploading porn.

Programming is knowledge work and non-repetitive, so the comparison is unfair in some ways. But, nevertheless, what we do is a lot more like what an electrician does than what a scientist does. We’re not getting paid to run experiments — we’re getting paid to build things.

There is definitely some validity to this. The three roles in each example have many similarities. His observation that development work is “non-repetitive”, however, is key. Electricians work in a more certain context than doctors, who may need to account for body chemistry or metabolism. Likewise, developers may find that environmental factors (e.g. memory usage profile, network load, etc.) produce uncertainty in the course of their work. Whereas the plumbing and electrical systems in a house are mostly separate, biological systems and information systems tend to be more intertwined.

Another similarity between software development and the practice of medicine is the feedback loop. The physicist will never hear back from the electrician, but physicians doing research are not similarly removed from practitioners. Practice and theory in medicine have a chicken and egg relationship where neither is clearly dominant, but each influences the other. Likewise with software development. Ethics and practicality in both cases constrain pure research.

As Erik noted, developers are “…not getting paid to run experiments — we’re getting paid to build things”. That being said, the uncertainties mean that, like physicians, we can’t be positive about the exact outcome without trying a particular course of action (which isn’t really an experiment):

Like doctors, those involved in software development have an ethical obligation to let our “patients” know when we’re learning on the job and what the risks are (not to mention the obligation to try things that are in their best interests and not just something we want to test drive). In addition to considerations of professionalism, more open communication has its benefits. We can solve problems and advance the practice at the same time.

Negotiating Estimates

Congress of Vienna

In my previous post dealing with Ron Jeffries’ (since revised) “Summing up the discussion”, I focused strictly on the customer-focused aspects. I did, however, note some language regarding negotiating estimates that I wanted to touch on:

“Negotiating” estimates is deeply embedded into most cultures. It probably started in the marketplace in the village in ancient Greece, where the carrot guy tried to get three hemitetartemorions for his carrots, and your great-to-the-nth grandmother talked him down to one.
We assume that a contractor’s estimate has fat in it and we assume that we need tough negotiation to squeeze it out. The better the contractor is at estimating, the more this process hurts him, because he has nothing left to squeeze. And in the end, it hurts the buyer as well: the only way the contractor has to survive is to cut quality.

Jeffries is absolutely correct that negotiation is an ancient tradition. Likewise, he is correct that shaving an accurate estimate may well end up costing the customer in reduced quality (especially when giving point estimates, which is an incredibly poor practice in my opinion). What he fails to mention, however, is that negotiating an estimate need not happen. In fact, negotiating an estimate without negotiating the deliverable is pretty much the worst possible thing you could do. You’re risking either your profit margin or any future business with the customer, and quite possibly both. It’s a bad idea for a vendor and incredibly ill-considered for in-house IT (it’s not like you can take the money and run).
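As an aside on the point-estimate problem: an estimate expressed as a range gives the customer an honest picture of the uncertainty and leaves less imagined “fat” to squeeze out. Below is a minimal sketch of one common approach, three-point (PERT-style) estimation; the numbers and the pert_estimate helper are purely illustrative, not something from Jeffries’ post or mine.

```python
def pert_estimate(optimistic: float, likely: float, pessimistic: float) -> tuple[float, float]:
    """Return a weighted mean and a rough standard deviation for one task."""
    mean = (optimistic + 4 * likely + pessimistic) / 6
    spread = (pessimistic - optimistic) / 6
    return mean, spread


# Illustrative numbers only: a task guessed at 5 days best case,
# 8 days most likely, 20 days worst case.
mean, spread = pert_estimate(5, 8, 20)
print(f"~{mean:.1f} days, likely somewhere in {mean - spread:.1f} to {mean + spread:.1f}")
# ~9.5 days, likely somewhere in 7.0 to 12.0
```

The range, not the midpoint, is what should move in a negotiation – and it should move by clarifying scope, not by haggling the number down.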

In my opinion, when someone is willing to change an estimate without a change in what they’re estimating, it’s a bad sign. If you adequately understand the information at hand, have some experience, and have made a good faith effort, there’s no reason to be willing to change the estimate without learning more about what is or is not needed. It’s not really a negotiation if only one party is getting what they want, particularly if the other party is getting abused. For the abused party, striking back (by shaving quality without the customer’s knowledge) is counter-productive. The only negotiation should be what’s in scope and what’s not.

In a balanced relationship, the provider can explain what the customer is getting for their money and the customer realizes they won’t get something for nothing. Communication and collaboration can provide the basis for trust. Trust is essential for both parties to become partners in delivering the best possible product.

Professional Software Development – Can We Mandate What We Can’t Define?

The law is a what?!?

The only true wisdom is in knowing you know nothing.
Socrates

What types of software products have you worked on: desktop applications, traditional web, single-page applications, embedded, mobile, mainframe?

How about organizations: private for-profit, government, non-profit?

How about domains: finance, retail, defense, health care, entertainment, banking, law enforcement, intelligence, real estate, etc. etc. etc.?

Given that the realm of “software development” is currently huge (and probably expanding as you read this), how logical is it that someone (or even a group) could regulate what is acceptable process and practice? I won’t say that it would be impossible to come up with one unified set of regulations that would fit all circumstances, but I’m very comfortable estimating the likelihood as a minute fraction of a percent. If the entire realm were broken down into smaller groupings, the chance might increase, but the resulting glut of regulations would become an administrative nightmare and still wouldn’t address those circumstances that aren’t in the list above but are on the horizon.

Nonetheless, people continue to float the idea of regulation.

Last fall, Bob Martin floated the idea of government regulation as a reaction to the healthcare.gov fiasco. That would be the same government whose contracting regulations contributed to the fiasco in the first place, correct? That would be the same government that has legally mandated Agile for Department of Defense contracts? Legally mandated agility just sounds a bit suspicious. As Jeff Sutherland noted, “Many in Washington are still trying to figure out what exactly that means but it is a start”. A start, for sure, but the start of what?

Ken Schwaber’s blog post “Can Software Developers Meet the Need?” takes a different approach. Schwaber proposes:

A software profession governing body is needed. We need to formalize and regulate the skills, techniques, and practices needed to build different types of software capabilities. On one side, there is the danger of squeezing the creativity out of software development by unknowledgeable bureaucrats. On the other side is the danger of the increasingly vital software our society relies on failing critically.

We can either create such a governance capability, or the governments will legislate it after a particularly disastrous failure.

Call me a cynic, but I’m betting that the amount of bureaucratic squeezing that would result from this would far outweigh any gain in quality.

Most of the organization types listed above are already on the hook for harm caused by their IT operations; just ask Target and Knight Capital (don’t ask the Centers for Medicare & Medicaid Services). Is it likely that a committee, whether private or public, could manage the quality of software across all the various categories listed above better than those organizations can themselves? Could it keep up with change in the industry? Color me doubtful.

Not All Gold Glitters

ooh, shiny

After two back-to-back posts, I thought I was done with YAGNI, simplicity, and economy of design – at least for a while. But then Jef Claes published “But I already wrote it”.

Jef’s post dealt with how a colleague had implemented a new feature in a much richer manner than anticipated. When the analyst confirmed that the implementation was more than what was needed, Jef recommended trimming out the extra, while his colleague argued that since it was done, it should be left as is. After pointing out the risks and costs of the additional complexity, Jef’s colleague came around (which is, in my opinion, the correct way to do YAGNI – a consideration of the costs and benefits, rather than a reflex). Then came the comments.

One commenter took exception to Jef’s statement that “code is just a means to an end; the side product of creating a solution or learning about a problem”. For that commenter, that attitude would “inevitably” lead to writing bad code. “The way you write good code is by loving good code”.

Another suggested that the situation taught Jef’s colleague never to take initiative and had ruined their job satisfaction. “From now on, he should consider himself to be a code monkey whose job is to accept the designer’s vision, regardless of how short-sighted or limited it is, and produce a working program.” This commenter stated that Jef should have waited to see whether additional maintenance costs materialized before deciding.

Needless to say, I disagree with both.

The first commenter above needs to understand that the application belongs to the customer. For functionality, substituting your judgment for the customer’s is unprofessional. If I ask for a garage and you build a mansion while my back’s turned, you don’t get paid. Taking pride in how you deliver value is a virtue provided that you remember that the customer is the one who determines what they value.

The second commenter assumes that Jef’s colleague will be discouraged because his initiative wasn’t accepted. If that’s the case, perhaps another line of work would be appropriate. As noted above, the customer determines what is needed and we should be taking pride in fulfilling those needs. They also assume that the design was short-sighted and limited, but the basis for that is never provided.

The second commenter’s suggestion that the code should have been left as is and only changed if a problem emerged is even more problematic. Taking on risk and potential expense on the customer’s behalf is not responsible behavior. Additionally, decisions are not made in a vacuum – each choice builds on earlier choices to enable or constrain (often both). Making those decisions without a rational basis is equally irresponsible.

On a personal level, I can sympathize that someone has expended effort and is proud of what they’ve accomplished. However, putting our own wants above the needs of our customers does not advance the profession. Delivering requested value, without surprises, does.