Friday, December 26, 2008

Cargo Cult Agile Adoption

I've seen the following scenario unfold at two different clients:

1. Initial enthusiasm to go Agile, driven by an executive mandate.

2. A mix of eagerness shown by some delivery people (devs, QA, analysts, PMs, etc.) and resistance shown by others.

3. Growing friction between the two factions, inducing the executive management to create layers of Oversight Committees, PMOs, Steering Groups, Agile Coaches / Coordinators / Evangelists / Czars / Sensei / Black-belts -- and other holders of titles culled from clergy, industry and fantasy.

4. Adoption of the easier Agile habits: story-walls, open work-spaces, iterations (sometimes not time-bound) and stand-ups (sometimes modified to sit-downs).

5. Assiduous proclamations of the merits (equally often, the demerits) of the harder Agile practices: independent stories, independent estimates, TDD, CI, automated test suites, tracking delivered business value (using PM tools appropriately) and continuous customer collaboration.

6. Permeation throughout the organization of a snarky disbelief in the viability and usefulness of Agile practices.

7. Recidivism to the previous state of suboptimal software delivery, while retaining the glittery Agile titles: people, processes and productions glow with the newly acquired veneer of Agility while losing none of their inner fragility. I've seen 40-minute meetings (with fewer than 10 attendees) conducted in the name of a "stand-up".

This scenario -- which I dub Cargo Cult Agile Adoption -- reveals a belief that simple mimicking of some externally visible Agile practices will somehow create the values that underlie those practices.

The ritual of having a card-wall cannot create even an illusion of communication if the will to communicate openly is not there. The ritual of creating a CI server cannot substitute for the values of courage and simplicity -- the CI build will probably either be mostly broken, or developers won't commit any code for days for fear of breaking a brittle build. Rituals and practices are merely the expression of underlying values; they cannot cause those values to exist out of nothing.

Sunday, August 31, 2008

Test Driving the "Money" example

The other day, I demo'ed Kent Beck's "Money" example in a Lunch and Learn session.

Even though participation was very thin (owing to a variety of scheduling challenges), the people who did attend found the session useful. This was heartening.

I test-drove the "Money" example equivalent to the first 11 chapters of Kent Beck's book [1]. There were some variations: I added a test-for-instanceof-based-equality, asserting that a ClassCastException was thrown when Dollars were compared to Euros. Later, when the instanceof operator in the equals() method was replaced with the domain concept of "currency", I test-drove this change by modifying the test-for-instanceof-based-equality to a test-for-currency-based-equality. This new test asserted that no exceptions were thrown when Dollars were compared to Euros.

The other variations dealt with the natural ebb and flow of the demands of an eager (though small) audience: I took detours -- even to dead-ends -- to answer questions and demonstrate principles. I allowed the audience to direct some of the refactorings: we replaced the calls to the Dollar() and Euro() constructors with calls to factory methods on the Money class much earlier than in the book, based on the audience's familiarity with the principle of separating the creation of objects from their use. And, of course, I made the example more contemporary: I replaced Francs with Euros. This latter change had the effect of inverting the exchange rate from 1 Dollar = 2 Francs (in the book) to 2 Dollars = 1 Euro, which is a fairly close approximation of current reality.
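
For readers without the book at hand, the factory-method refactoring looks roughly like the sketch below: callers stop saying new Dollar(...) and new Euro(...) and ask Money instead. Only the Money.dollar()/Money.euro() shape follows the book (with euro standing in for franc); the fields and currency codes are assumptions made for this sketch.

    public class Money {
        protected final int amount;
        protected final String currency;

        protected Money(int amount, String currency) {
            this.amount = amount;
            this.currency = currency;
        }

        // The factory methods hide which concrete class (if any) gets
        // instantiated, separating the creation of objects from their use.
        public static Money dollar(int amount) {
            return new Money(amount, "USD");
        }

        public static Money euro(int amount) {
            return new Money(amount, "EUR");
        }
    }

Once the tests say Money.dollar(5) instead of new Dollar(5), the currency-specific subclasses can later be collapsed into Money without touching a single test -- which is where the book eventually heads.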

The feedback from the small audience was generous: it was mostly 4's and 5's (on a 5-point scale, 5 being highest). I would have liked to do it differently (and hopefully better) in the following ways:

1. Test-driving all the way to the end of the Money example. This corresponds to chapter 16 (chapter 17 is the retrospective). I believe I'd have needed about 20 more minutes to cover those five remaining chapters.
2. Demonstrating code coverage more often. I did not switch to the EclEmma coverage view often enough.
3. Being more deliberate and thoughtful (rather than speedy and mechanical) in test-driving some of the changes.

I do realize that even the items on this short wish-list have built-in conflicts. But then, crafting software is all about finding serene compromises amongst conflicts.

[1] Test-Driven Development: By Example, by Kent Beck

No Integration Without Representation

This article is not about the European Union.

Much has been written about the dangers of integrating software components late in a project. "Late" could mean "not at the first opportunity", "not after every working change to every module" or "closer to the product's deployment date than the project's commencement date". Any definition suffices for the purpose of this article.

One factor that exacerbates late integration is that not all the software systems that need to be integrated with the system under development (SUD) are known in advance. This problem is (in software terms) ancient, and apparently bit the first XP project -- "end to end is farther than you think". [1]

It is always going to be difficult to enumerate, a priori, the complete list of software systems with which a given SUD is going to integrate. And such a list would be an evolving artifact, anyway. So how do we continually determine with which systems our SUD is expected to integrate?

One way is to make this apparent by representation. Most iterative teams regularly present their evolving product to the customers and stakeholders -- usually at the end of each iteration. This is a natural juncture at which integration points with other systems can be demonstrated.

How do we know if we're missing some key integration point(s)? Expand the membership of the "iteration showcase" meeting to include representatives of external systems that might have an integration point. It may seem chaotic to invite too many people to an iteration showcase, but in practice this works out well. Quite often, such "chickens" either quickly transmogrify into "pigs" or leave the roost.

The foil to this wishy-washy "come if you're interested, don't if you aren't" stance is the statement "you probably are interested if you use an application that consumes data from, or provides data to, this SUD". I like to condense this to the pithy "no integration without representation".

Explicitly proclaiming that the SUD will integrate with all (and only) the applications whose stakeholders are represented has a self-correcting effect on guiding the integration points. Of course, encouraging and enabling the attendance of representatives who might have an integration point, and welcoming them when they show up, are all necessary for the correct usage of this pith.

[1] See the "Creation Story" in Extreme Programming Explained by Kent Beck.

Saturday, June 28, 2008

Another reason for TDD

It seems that one can never have enough good reasons for espousing Test-Driven Development (TDD). If the existing arguments suffer from a dearth of cogency, then I hope the following helps.

One way to think about a robust test suite is as something akin to a safety net against creating unsafe software. The notion I have in mind coincides with what Donald Norman calls a "forcing function", and is sometimes referred to as poka-yoke.

Consider a kitchen appliance such as a food processor. To run the food processor, you must assemble the various parts in a precise way. Consider just the lid for the moment. Not only must you lock the lid in place; its projecting cam must also unlock the hidden safety switch.

If the cam is broken, the food processor will not run, even if the lid fits snugly on the bowl. While it can be distressing if the cam breaks while the lid is being washed in the dishwasher, the cam provides a forcing function, preventing unsafe operation of the food processor.

TDD can be looked at as a practice of providing a comprehensive forcing function for software. If the tests are broken, the software will not run (in most cases, it will not even build). It can be vexing when the build breaks because of failing tests, but the tests prevent the production of unsafe software.

So the next time someone complains about tests failing builds, politely ask them whether they'd like to use a blender that runs whether or not the lid is on.
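
As a purely hypothetical illustration of the parallel -- the class names and the guard are invented for this sketch, not taken from any real project -- here is what the "safety cam" looks like when it is a JUnit 4 test:

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class BlenderSafetyTest {

        // Minimal stand-in for the appliance; the guard clause in pressStart()
        // is the software equivalent of the cam.
        static class Blender {
            private boolean lidLocked = false;
            private boolean running = false;

            void lockLid() { lidLocked = true; }

            void pressStart() {
                if (lidLocked) {
                    running = true;
                }
            }

            boolean isRunning() { return running; }
        }

        // If someone "fixes" pressStart() to spin without the lid, this test
        // fails and the build stops -- the forcing function at work.
        @Test
        public void doesNotRunWithoutTheLidLocked() {
            Blender blender = new Blender();
            blender.pressStart();
            assertFalse("must not spin with the lid off", blender.isRunning());
        }

        @Test
        public void runsWhenTheLidIsLocked() {
            Blender blender = new Blender();
            blender.lockLid();
            blender.pressStart();
            assertTrue(blender.isRunning());
        }
    }

Delete or weaken the first test and, much like prying the cam off, the protection is gone even though everything still appears to fit together.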

Upgrades and Shovegrades

Quite often, you feel that you don't really need the new version of a certain piece of software, but you don't have a choice. You simply cannot keep working with the old version -- the decision is not yours to make.

This is not a Luddite argument. Technological progress is eventually inevitable -- even change (as opposed to progress) is a fact of life. However, when bad change is thrust upon unwilling and unsuspecting users, it is at least proper to call it by its correct name: a shovegrade.

Perhaps the neo-classical example of a shovegrade outside the software world is the so-called New Coke. What made it feel like a shovegrade was not just that it was a "change", nor only that it was "unsolicited", nor that it was "compulsory", nor that it was "less appealing" -- but that it was all of those at once.

This gives us a fair working definition of a shovegrade: an unsolicited, compulsory change that results in a product less appealing than before.

In software, many upgrades are in fact shovegrades. The reason this is prevalent in the software world is that it is relatively easy to retire one version and move to the next. License keys (especially those managed through a central license server), time-bombs, deactivation of online support accounts, version management through push technology -- all these make it possible to shove new versions of software instantly down the collective throats of all consumers.

I recently experienced an example of a shovegrade on a large project. The architecture team (to which I did not belong) created a new API to handle the concept of Time and shoved it down the throats of all the other teams (including the one to which I did belong). The architecture team had not anticipated the real implementation problems this new Time API caused. Since the old implementation of Time had been removed (not deprecated), and the new API failed to deliver key "-ilities" (reliability, simplicity and others), this felt like -- and was, by my definition above -- a shovegrade.

The underlying failure -- the architecture team's inability to anticipate the problems caused by the new API -- was due to their lack of involvement in the design and implementation of the domain solution. In one sense, this is a failure to apply the Architect Also Implements organizational pattern.

That is a topic best handled elsewhere.

Frontispiece

This blog lists my musings on software.