Apple vs. the FBI: Winning and Losing

[Image: drawing of an apple with a worm]

The FBI, with the help of a third party, has managed to gain access to Syed Farook’s iPhone. In a court filing Monday, the FBI stated that it no longer required Apple’s help.

Apple, on the other hand, now needs to know what vulnerability was exploited to access the phone. Whether the FBI will provide that information is questionable. From a purely legal standpoint, it seems the government is under no obligation to disclose it to Apple:

Attorneys for Apple are researching legal tactics to compel the government to turn over the specifics, but the company had no update on its progress Tuesday.

The FBI could argue that the most crucial information is part of a nondisclosure agreement, solely in the hands of the outside party that assisted the agency, or cannot be released until the investigation is complete.

Many experts agree that the government faces no obvious legal obligation to provide information to Apple. But authorities, like professional security researchers, have recognized that a world in which computers are crucial in commerce and communications shouldn’t be riddled with technical security flaws.

So, had Apple decided not to fight the FBI’s writ, it would likely have had full control (IP ownership and physical custody) over a handset-specific version of iOS that bypassed only the feature limiting access attempts and was provided only pursuant to a legal writ. Now, the FBI has access (through a third party) to what’s reputed to be the same capability, but Apple does not. It appears that there may be no way to compel the FBI to share that information with Apple.

So the question is: did Apple win or lose in this case? More importantly, did Apple’s customers win or lose?

NPM, Tay, and the Need for Design

Take a couple of seconds and watch the clip in the tweet below:

https://twitter.com/jetpack/status/713320642616156161

While it would be incredibly difficult to predict that exact outcome, it is also incredibly easy to foresee that it’s a possibility. As the saying goes, “forewarned is forearmed”.

Being forewarned and forearmed is an important part of what an architect does. An architect is supposed to focus on the architecturally significant aspects of a system. I like to use Ruth Malan’s definition of architectural significance because of its flexibility.

Decisions (both those that were made and those that were left unmade) that end up taking systems offline and causing very public embarrassment are, in my opinion, architecturally significant.

Last week, two very public, very foreseeable failures took place: first, the chaos caused by a developer removing his modules from NPM; then, Microsoft having to pull the plug on its Tay chatbot after it was “trained” to spew offensive comments in less than 24 hours. In my opinion, both represented design failures resulting from a lack of due consideration of the context in which these systems would operate.

After all, can anyone really claim it was unforeseeable that people on the internet would try to “corrupt” a chatbot? According to Azeem Azhar, as quoted in Business Insider, not really:

“Of course, Twitter users were going to tinker with Tay and push it to extremes. That’s what users do — any product manager knows that.

“This is an extension of the Boaty McBoatface saga, and runs all the way back to the Hank the Angry Drunken Dwarf write in during Time magazine’s Internet vote for Most Beautiful Person. There is nearly a two-decade history of these sort of things being pushed to the limit.”

The current claim, as reported in CIO.com, is that Tay was developed with filtering built-in, but there was a “critical oversight” for a specific kind of attack. According to the article, it’s believed that the attack vector involved asking Tay to “repeat after me”.
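To make the reported oversight concrete, here’s a minimal sketch of how that kind of attack could work. It is purely illustrative (Microsoft has never published Tay’s implementation), and every name in it, from `blockedTerms` to `modelReply`, is hypothetical:

```typescript
// Purely illustrative sketch, not Tay's actual code (which Microsoft has
// never published). It shows how a blocklist filter is rendered useless
// when a "repeat after me" path echoes user input without passing it
// through that filter.

const blockedTerms: string[] = ["badword1", "badword2"]; // hypothetical list

function isAcceptable(text: string): boolean {
  const lowered = text.toLowerCase();
  return !blockedTerms.some((term) => lowered.includes(term));
}

// Stand-in for whatever learned model generates normal replies.
function modelReply(message: string): string {
  return "That's interesting, tell me more!";
}

function reply(message: string): string {
  const echo = message.match(/^repeat after me:\s*(.+)$/i);
  if (echo) {
    // The hypothesized oversight: echoed text skips isAcceptable() entirely.
    return echo[1] ?? "";
  }
  const candidate = modelReply(message);
  return isAcceptable(candidate) ? candidate : "Let's change the subject.";
}

console.log(reply("repeat after me: badword1")); // prints "badword1": filter bypassed
```

Designing for the abuse case would mean filtering every outbound message, no matter which code path produced it.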

Or, as Matt Ballantine put it:

https://twitter.com/jetpack/status/713012721218883585

Likewise, who could imagine issues with a centralized repository of cascading dependencies? Failing to consider what would happen if someone suddenly pulled out one of the bottom blocks led to a huge inconvenience for anyone depending on that module, directly or transitively. There’s plenty of blame to go around: the developer who took his toys and went home, those responsible for NPM’s design, and those who depended on it without understanding its weaknesses.

“The Iron Law of Tools” is “that which does for you will also do to you”. Understanding the trade-offs allows you to plan for risk mitigation in advance. Ignoring them merely ensures that they will have to be dealt with in crisis mode. This is something I covered in a previous post, “Dependency Management is Risk Management”.
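As a concrete illustration: the module at the center of the NPM chaos, left-pad, was a string-padding function only a few lines long. The sketch below is my own version, not the original author’s code; the point is that a dependency this trivial can simply be owned outright, while dependencies worth keeping can have their versions pinned (at the time, `npm shrinkwrap` was NPM’s mechanism for locking down exact versions):

```typescript
// My own sketch of a left-pad-style function, not the original module's code.
// When a dependency is this small, inlining it trades a few lines of
// maintenance for immunity to the module vanishing from the registry.
function padLeft(input: string, length: number, fill: string = " "): string {
  let result = String(input);
  while (result.length < length) {
    result = fill + result;
  }
  return result;
}

console.log(padLeft("5", 3, "0")); // "005"
console.log(padLeft("abc", 2));    // "abc" (already long enough)
```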

Effective design involves not only the internals of a system but its externals as well. The conditions under which the system will be used (its context) are highly significant. That means considering not only the system’s use cases, but also its abuse cases. A post written almost a year ago by Brandon Harris, “Designing for Evil”, conveys this well:

When all is said and done, when you’ve set your ideas to paper, you have to sit down and ask yourself a very specific question:

How could this feature be exploited to harm someone?

Now, replace the word “could” with the word “will.”

How will this feature be exploited to harm someone?

You have to ask that question. You have to be unflinching about the answers, too.

Because if you don’t, someone else will.


When I began working on this post, the portion above was what I had in mind to say. In essence, I planned a longer-form version of what I’d tweeted about the Tay fiasco.

However, before I had finished writing the post, Greger Wikstrand posted “The fail fast fallacy”. Greger and I have been carrying on a conversation about innovation over the last few months. While I had initially intended to approach this as a general issue of architectural practice rather than innovation, the points he makes are just too apropos to leave out.

In the post, Greger points out that the focus seems to have shifted from learning to failure. Learning from experience can be the best way to test an idea. However, it’s not the only way:

Evolution and nature has shown us that there are two, equally valid, approaches to winning the gene game. The first approach is to get as much offspring as possible and “hope” many of them survive (r-selection). The second approach is to have few offspring but raise them and nurture them carefully (K-selection). Biologists tell us that the first strategy works best in a harsh, unpredictable environment where the effort of creating offspring is low. The second strategy works better in an environment where there is less change and offspring are more expensive to produce. Some of the factors that favour r-selection seems to be large uncompeted resources. K-selection is more favourable in resource scarce, low predator areas.

The phrase “…where the effort of creating offspring is low” is critical here. The higher the “cost” of the experiment, the more risk is involved in failure. This makes it advisable to tilt the playing field by supporting and nurturing the “offspring”.
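A toy comparison makes that relationship concrete. All the numbers below are invented purely for illustration; the only point is how quickly the math shifts as the cost of a failed experiment rises:

```typescript
// Invented numbers, purely illustrative: many cheap experiments (r-style)
// versus a few carefully nurtured ones (K-style) on the same fixed budget.
const budget = 100;

const cheap = { costPerTrial: 1, successRate: 0.05 };    // r-selection
const nurtured = { costPerTrial: 25, successRate: 0.6 }; // K-selection

function expectedWins(s: { costPerTrial: number; successRate: number }): number {
  return Math.floor(budget / s.costPerTrial) * s.successRate;
}

console.log(expectedWins(cheap));    // 100 trials x 0.05 = 5 expected successes
console.log(expectedWins(nurtured)); //   4 trials x 0.6  = 2.4 expected successes

// Raise the cost of a "cheap" trial from 1 to 3 and its expected successes
// drop to 1.65, below the nurtured strategy. Failing fast only pays while
// failure stays cheap.
```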

In response to Greger’s post, Casimir Artmann posted two excellent articles that elaborated further on this. In “Fail Fast During Adventures”, he noted that “There is a fine line between fail fast and Darwin Awards in IRL.” His point: preparation beforehand, and a willingness to abort an experiment before failure becomes fatal, can be effective learning strategies. Lessons that you don’t live to apply aren’t worth much.

Casimir followed with “Fail is not an Option”, in which he stated:

I want the project to succeed, but I plan for things going wrong so that the consequences wouldn’t be to huge. Some risk are manageable, as walking alone, but not alone and off-trail. That’s to risky. If you doing outdoor adventures, you are probably more prepared and skilled than a ordinarie project member, and thats a huge benefit.

I guess the best advice, when doing completely new things with IT, is to start really small so that the majority of your business is not impacted if there is a failure. When something goes wrong, be sure that you could go back to safe place. Point of no return is like being on a sailing boot in the middle of the Atlantic where you can’t go back.

That’s excellent advice. “Fail Fast” has the advantage of being able to fit on a bumper sticker, but the longer, more nuanced version is more likely to serve you well.

Storm Clouds: DropBox’s Back to the Future Moment

[Image: Rembrandt’s Christ in the Storm on the Sea of Galilee]

One of the big news items from last week was DropBox’s announcement that it had brought its file storage infrastructure in-house, moving (mostly) away from AWS:

Years ago, we called Dropbox a “Magic Pocket” because it was designed to keep all your files in one convenient place. Dropbox has evolved from that simple beginning to become one of the most powerful and ubiquitous collaboration platforms in the world. And when our scale required building our own dedicated storage infrastructure, we named the project “Magic Pocket.” Two and a half years later, we’re excited to announce that we’re now storing and serving over 90% of our users’ data on our custom-built infrastructure.

Given both the massive scope of the endeavor and the massive repudiation of what’s becoming increasingly common infrastructure practice, Wired.com’s article is appropriately titled: “The Epic Story of Dropbox’s Exodus From the Amazon Cloud Empire”. According to that article:

In essence, they built their own Amazon S3—except they tailored their software to their own particular technical problems. “We haven’t built a like-for-like replacement,” (Dropbox engineering VP Aditya) Agarwal says. “We’ve built something that is customized for us.”

Did DropBox make the right decision?

Only time will tell. In truth, it will probably be easier to tell if it was the wrong decision than if it was the right one. Poor choices tend to be more absolute, good choices less clear-cut.

DropBox also spent more than two years developing and proving its infrastructure, according to its announcement. Given that file storage is DropBox’s core business, and recognizing the scale at which it operates (500 million users and 500 petabytes of data, according to an article on CIO.com), the idea that it should control its own infrastructure makes sense. This is particularly true given Agarwal’s statement that “We’ve built something that is customized for us.” While I can’t say whether the decision is right, I can say that it is a reasonable one.

That’s not the same, however, as saying that everyone should operate their own infrastructure. Context matters. Simon Wardley’s tweet sums it up nicely (where “nicely” is defined as “snarkily”).

Emulating DropBox when you’re not in the storage business, when you don’t have the volume (nor, most likely, the budget), and when you haven’t done the homework to prove it out, makes no sense. It would be like rowing a dinghy out into a stormy ocean because the oil tanker that just left port is doing fine.

Context matters.

“Want Fries with That?”

[Image: hamburger and french fries]

Greger Wikstrand and I have been trading posts about architecture, innovation, and organizations as systems (a list of previous posts can be found at the bottom of the page) for quite a while now. His latest, “Technology permeats innovation”, touches on an important point – the need for IT to add value and not just act as an order taker.

It’s funny how this series of innovation posts keeps taking me back to posts from the early days of this blog. In my last post, “Accidental Innovation?”, I referred to my very first post, “Like it or not, you have an architecture (in fact, you may have several)”. Less than a month after that first post, I published “Adding Value”, which had the exact same theme as Greger’s post: blindly following orders without adding value (in the form of technical expertise) is not serving your customer. In fact, failing to bring up concerns is both unprofessional and unethical. Acceding to a request that you know will harm your customer without pushing back is tantamount to sabotage.

Innovation involves multiple disciplines. In a recent tweet, Brenda Michelson illustrated this important truth in the context of digital technology:

https://twitter.com/jetpack/status/709770731282784256

Both Brenda and Greger make the same point – successful innovation is a team effort. In fact, using Scott Berkun’s definition of the word, it’s redundant to say “successful innovation”:

If you must use the word, here is the best definition: Innovation is significant positive change. It’s a result. It’s an outcome. It’s something you work towards achieving on a project. If you are successful at solving important problems, peers you respect will call your work innovative and you an innovator. Let them choose the word.

In a recent series of posts, Casimir Artmann noted that innovation comes in many forms: improving existing products, developing new products, and finding better ways to work. Often, as shown in his examples of innovation in music, photography, and telephony, innovation comes from a combination of these forms. He sums it up this way:

Regardless if we talk about innovation for existing products, new products or new ways of working, inventions in technology is one of the drivers.

Internet of Things, Cloud, Autonomous devices, Wearables, Big Data etc, are all enablers for innovation in the organisations. The challenge is to find out the benefit our clients customers will have from these technology enablers.

Meeting that challenge requires integrating the expertise of both business and IT. Innovation and value aren’t picked from a menu and served up at a drive-through.

Previous posts in this series:

  1. “We Deliver Decisions (Who Needs Architects?)” – I discussed how the practice of software architecture involves decision-making, combining analysis with the situational awareness needed to deal with emergent factors and avoid cognitive biases.
  2. “Serendipity with Woody Zuill” – Greger pointed me to a short video of him and Woody Zuill discussing serendipity in software development.
  3. “Fixing IT – Too Big to Succeed?” – Woody’s comments in the video re: the stifling effects of bureaucracy in IT inspired me to discuss the need for embedded IT to address those effects and to promote better customer-centricity than what’s normal for project-oriented IT shops.
  4. “Serendipity and successful innovation” – Greger’s post pointed out that structure alone is insufficient to promote innovation: organizations must be prepared to recognize and respond to opportunities, and innovation must be able to scale.
  5. “Inflection Points and the Ingredients of Innovation” – I expanded on Greger’s post, using WWI as an example of a time when innovation yielded uneven results, because effective innovation requires technology, an understanding of how to employ it, and an organizational structure that allows it to be used well.
  6. “Social innovation and tech go hand-in-hand” – Greger continued the same theme, exploring the social and technological aspects of innovation.
  7. “Organizations and Innovation – Swim or Die!” – I discussed the ongoing need of organizations to adapt to their changing contexts or risk “death”.
  8. “Innovation – Resistance is Futile” – Continuing in the same vein, Greger points out that resistance to change is futile (though probably inevitable). He quotes a professor of his who asserted that you can’t change people or groups; you have to change the organization instead.
  9. “Changing Organizations Without Changing People” – I followed up on Greger’s post, agreeing that enterprise architectures must work “with the grain” of human nature and that culture is “walking the walk”, not just “talking the talk”.
  10. “Developing the ‘innovation habit’” – Greger talks about creating an intentional, collaborative innovation program.
  11. “Innovation on Tap” – I responded to Greger’s post by discussing the need for collaboration across an organization as a structural enabler of innovation. Without open lines of communication, decisions can be made without a feel for customer wants and needs.
  12. “Worthless ideas and valuable innovation” – Greger makes the point that ideas, by themselves, have little or no worth. It’s one thing to have an idea, quite another to be able to turn it into a valuable innovation.
  13. “Accidental Innovation?” – I point out that people are key to innovation. “Without the people who provide the intuition, experience and judgement, we are lacking a critical component in the system.”
  14. “Technology permeats innovation” – Greger talks about how tightly coupled innovation and technology are and the need for IT to actively add value to the process.

Form Follows Function on SPaMCast 385

[Image: SPaMCast logo]

This week’s episode of Tom Cagley’s Software Process and Measurement (SPaMCast) podcast, number 385, features Tom’s essay on Agile portfolio metrics, Kim Pries talking about the value of diversity, and a Form Follows Function installment on sense-making and decision-making in the practice of software architecture.

Tom and I discuss my post “Architecture and OODA Loops – Fast is not Enough”. We talk about how making good decisions is the very essence of the practice of software architecture. I relate John Boyd’s Observe-Orient-Decide-Act loop to designing software systems.

You can find all my SPaMCast episodes under the SPaMCast Appearances category on this blog. Enjoy!

Enterprise Architecture and the Business of IT

[Image: turning gears animation]

I’ve been following Tom Graves and his Tetradian blog for quite a while. His view of Enterprise Architecture (EA), namely that it is about the architecture of the enterprise and not just the enterprise’s IT systems, is one I find compelling. With some encouragement on Tom’s part, I’ve begun touching on the topic of EA, although in a limited manner. When it comes to enterprise architecture, particularly according to Tom’s definition, I consider myself more of a student than anything else. I design software systems and systems of systems, not the enterprises that make use of them.

However…

I’m finding myself drawn to the topic more and more these days because it’s more and more relevant to my work. The fractal nature of social systems using software systems is a major theme of the “Organizations as Systems” category on this site. If the parts fit poorly, the operation of the system they comprise will be impeded, and a good way to ensure that the parts fit poorly is to fail to understand the context they will inhabit. So while I may not be designing the social systems into which my systems fit, an understanding of how those systems function is invaluable, as is an understanding of how my social system (IT) interacts with my client’s social system (the rest of the business) to further the aims of the enterprise.

Tom’s latest post, “Engaging stakeholders in health-care IT”, is an excellent example of how not to do things. In it, Tom discusses attending a conference on IT in healthcare where the players had no real knowledge of healthcare. They couldn’t even identify the British Medical Journal or the Journal of the American Medical Association, much less say what issues might be found in the pages of those publications. They didn’t feel that was a problem:

Well, they didn’t quite laugh at me to my face about that, but in effect it was pretty close – a scornful dismissal, at best. In other words, about the literally life-and-death field for which they’d now proclaimed themselves to be the new standard-bearers, they were not only clueless, but consciously clueless, even intentionally clueless – and seemingly proud of it, as well. Ouch… At that point the Tarot character of ‘The Fool’ kinda came to mind – too self-absorbed to notice that he’s walking straight over a cliff

While I recently advocated embracing ignorance, it was in the context of avoiding assumptions. The architect will know less about the business than a subject matter expert, who will most likely know less than the end user on the pointy end of the spear. The idea is not to remain ignorant of the domain, but to avoid relying on our own understanding and to seek better sources of information. It’s unreasonable to think we can design a solution that’s fit for purpose without an understanding of the architecture of the problem, and make no mistake, providing solutions that are fit for purpose is the business of IT.