Bluemini.com

More Agile Development

Experience is the best teacher

posted: 13 Oct 2010

The current team has identically configured laptops, usually works in pairs, and follows a guideline to leave no code checked out locally at the end of the day. At the start of the day, any team member can pair with any other team member using any laptop. They begin by checking out the project.

Yesterday, a team member's laptop went south. No problem; he can use any other laptop, right? Well, not quite. He had several days' worth of work on the now-broken laptop.

Oops.

Any wagers on whether he checks in his work every day (at least until his memory fades)?

Agile Business Conference 2010: Trip Report

posted: 11 Oct 2010

The eighth annual Agile Business Conference took place October 5-6, 2010, at the Inmarsat Conference Centre in London. My impressions of the conference are somewhat mixed. On the one hand, there were quite a few smart people there, and a lot of great discussions during coffee breaks, lunch, and after hours. Some of the scheduled sessions were pretty good, too. On the other hand, my expectations were not well aligned with the content and level of the conference.

This conference has always dealt with agile software development from a business perspective, and not with business agility, which is what its name suggests. I participated in the 2008 edition of the conference, so it wasn't a surprise to me, but I wonder whether this is what business people actually expect when they read the phrase agile business. These two phrases — agile software development and business agility — have long been confused.

In any case, there were relatively few business people in attendance. Most participants were consultants or representatives of companies that sell training in agile methods and/or software products related to agile project management. They were interested in meeting prospective customers, but (with some exceptions) they had to be satisfied with meeting one another. This wasn't necessarily a bad thing, of course. Software companies explained their wares to consultants, so they would understand when the products might help their clients. Consultants explained their philosophies to training firms, so that they might be engaged as trainers now and again. Contacts were made, business cards were exchanged.

Every session was at an introductory level. There seemed to be a certain amount of "selling" agile versus waterfall, although in four short months we will celebrate the 10th anniversary of the Snowbird meeting. At times it felt as if we had stepped through a temporal portal to a time when agile methods were still in the early adoption stage of innovation diffusion. Although some 18 months ago I didn't see it, I think now that agile has crossed the chasm and is well into the late majority adoption stage.

Circa 2010, conferences are not where people receive a basic introduction to agile concepts and methods. There are books, websites, and training classes for that. People go to conferences to learn about next steps in process improvement, applying agile principles in mixed environments, or fine-tuning their agile practice. Circa 2010, there is little value in comparing agile methods with "waterfall" — if anyone even uses a waterfall approach anymore. As the conference progressed, more and more attendees opted to remain in the exhibition area rather than to join the scheduled sessions, so that they could enjoy in-depth discussions with one another.

I facilitated two sessions myself. The first was entitled, How can I tell whether I need what they're selling? It was aimed at managers who engage consultants and/or choose software products to help them adopt methods and tools that improve the effectiveness of their organizations. In view of the small size of the conference and the number of concurrent tracks, the session was well attended, with about 35 participants. (I had the first time slot after the opening keynote, so people were still chipper and enthusiastic.) Of the 35, only three were decision-making managers. The rest were consultants.

After taking a quick poll to see who was who, I remarked that I was about to lose some friends among the consultants, because the goal of the session was to equip the three prospective clients to defend themselves against people like us who are trying to sell them our services. But the consultants seemed to like the content, too, and I received a lot of positive comments.

I described the content of the session in an earlier blog post. In a nutshell, I wanted to make five points:

  1. Before buying anything, understand your problems.
  2. Dig below the buzzwords when assessing solutions.
  3. Keep your options open when engaging consultants.
  4. Beware of false economies.
  5. Try the things your helpers suggest.


My other session was a hands-on workshop introducing the analysis tool, Diagram of Effects (DOE). I had a slightly smaller group than for the first session, but they were very engaged and interested. The session had two time slots, and I didn't lose anyone after the coffee break. It turned out to be one of very few hands-on sessions at the conference; most presentations were slide shows with the presenter talking, talking, and talking some more.

The previous day, one of the speakers had suggested setting velocity targets as a "good" practice. I was appalled. I was still thinking about that on the second day, when I was thinking about an example to use to introduce the DOE workshop. I decided to use a situation from past experience when IT management had tried to "gain control" of their teams' "productivity" by setting velocity targets. This, along with management's dictating the relative size of User Stories, contributed to reinforcing loops that all but guaranteed failure.

In developing the DOE for that situation, I gave myself the opportunity to explain the fallacy in setting velocity targets, although that wasn't the topic of my session. Treated as an empirical observation of a team's delivery capacity, velocity provides a leading indicator we can use for forward planning when employing an iterative process model. Given a known velocity, we can predict approximately how many iterations are needed to complete a given amount of scope, or how much scope can be completed by a given date. It's a tool for managing the Iron Triangle. When we set targets for a team's velocity, we turn it into a useless number the team will game in order to avoid punishment. It ceases to be useful for forward planning.
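
To make the forward-planning arithmetic concrete, here is a minimal sketch in Java (my illustration; the numbers are hypothetical and the round-up rule is the obvious one):

// Velocity treated as an empirical input to forward planning.
public class IterationForecast {

    // Round up: a partially filled final iteration still takes a full iteration.
    public static int iterationsNeeded(int remainingScopePoints, int observedVelocity) {
        return (remainingScopePoints + observedVelocity - 1) / observedVelocity;
    }

    public static int scopeCompletableIn(int iterations, int observedVelocity) {
        return iterations * observedVelocity;
    }

    public static void main(String[] args) {
        System.out.println(iterationsNeeded(120, 18));  // 120 points at velocity 18: 7 iterations
        System.out.println(scopeCompletableIn(5, 18));  // 5 iterations at velocity 18: 90 points
    }
}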

There were enough participants to form three working groups. Each group discussed a few candidate situations suggested by its own members and selected one as the subject of their DOE. There were some great discussions and all three groups came up with interesting results they hadn't anticipated. They got so involved with analyzing their problems that it was hard to peel them away from their tables to present their results to the rest of the group when the time was up!

One participant observed that the DOE appeared to be more about the journey than the destination, since it tended to elicit multiple perspectives on a situation and to expose relationships that were not immediately obvious, yet it did not result in specific, concrete corrective actions. I thought that was a very good observation.

The sample DOE I used to introduce the workshop reminded me, as well, of something Josef Bacher had described in his session: A common cycle (which looks very much like a reinforcing loop) starts with a small failure, which leads to management increasing "control," which leads to an increase in formality and ceremony (he called it "documentation"), which leads to increased stress, which leads to decreased productivity, which leads to more failure, which leads to...

Obviously, I could not attend every session. Of those I attended, I enjoyed Bacher's "The Naked Mind" the most. He described how emotion plays a role in resistance to change. His presentation style is very engaging and holds one's attention. The talk was filled with psychological insights, surprising statistics, and counterproductive behaviors that were all too familiar to those in the audience who attempt organizational change. I've long thought that most resistance to change boils down to a particular kind of fear: The fear of loss. Bacher made that point, too, among many others.

Another very good and pragmatic session was Roman Pichler's Mastering Common Product Owner Challenges. He raised a number of Product Owner (PO) anti-patterns, including the Proxy PO, the Underpowered PO, the Overworked PO, the Remote PO, the PO Committee, the Blindfolded PO (meaning the PO lacks vision), and Feature Shock (where the backlog is an endless wish list for Santa Claus). A participant added another: The Split PO Role. The key take-away was that the PO role is an important one that calls for full-time attention, accountability, vision, knowledge, and authority.

The conference had some very good aspects and some not-so-good. On balance, I feel my time was well spent.

Looking forward to a week of learning

posted: 05 Oct 2010

I'm looking forward to some great learning opportunities this week.

First, there's the Agile Business conference, a two-day event focusing on business agility. I'll be facilitating two sessions at the conference.

The first is aimed at managers who select consultants and software products to help them solve organizational or delivery problems. The title is "How can I tell whether I need what they're selling?" The session was inspired by my observation that many managers choose a solution before they understand their problem. They end up with expensive software they don't need and/or consultants who specialize in problems other than the ones at hand. I think there are five broad steps people can take to improve their chances of getting relevant help with their problems, when using consulting services or selecting software products:

  1. Before buying anything, understand your problems. This might seem rather obvious, but in fact many managers jump the gun and buy software or choose a consultant before they have identified the problem(s) to be solved. It's a matter of random chance whether the software or consulting services will address the real problem(s). There are plenty of techniques and tools around to help analyze the root causes of problems. Use them. If you aren't sure how to use them, then let this be the first step in your improvement program. Despite the ongoing pain of your current problems, it will be worthwhile to learn how to understand the real problems before throwing solutions at the situation.
  2. Dig below the buzzwords when assessing solutions. Every consultancy and every software vendor describes services and products in terms of whatever buzzwords happen to be popular at the moment. The services and products may not change from year to year, but the marketing material is kept up to date with the latest terminology. Once you understand the problem(s) to be solved, the second step is to parse the sales talk so that you can distinguish between offerings that actually address the problem(s) and those that will not. "Agile" is the popular term just now, so the presentation focuses mostly on that word, but there are others. Participants will play Buzzword Bingo while we're on this topic.
  3. Keep your options open when engaging consultants. The typical pattern for working with consultancies is that we try to create a roadmap to solve the full set of problems that have been identified. We then agree on a long-term program to implement the plan. I suggest a different engagement model intended to maximize business agility to cope with lessons learned and variances from plan that occur along the way. Rather than a single, long-term commitment, consultants are engaged only for the duration set for the first interim goal. At that point, the business assesses the situation as it now stands and decides whether to continue along the same path or to adjust the plan. The same consultancy might be engaged for the next interim goal, if it is sensible for the business. A second suggestion is to dispense with the conventional time-and-materials billing model for consulting services, and look for alternatives that keep everyone focused on solving the problem rather than maximizing "utilization."
  4. Beware of false economies. Because consultants are expensive, there is a temptation to try to minimize immediate costs and maximize the deliverables they produce. In principle, those are good ideas. In practice, we have to be aware of potential pitfalls. One example is outsourcing software development or application testing. Hourly rates may be low, but companies that have gone this route have learned it is quite expensive in the long run. Another temptation is to aim for more than one target with the same initiative. Two common examples: (a) Combine an application re-write initiative with a product enhancement initiative; that is, add new features to the product while re-writing it to run on a different platform; and (b) Combine a product delivery initiative with a pilot project for mentoring, knowledge transfer, or organizational change.
  5. Try the things your helpers suggest. This is another one that seems obvious, but in fact many (most?) organizations resist making the changes their helpers recommend. To obtain the maximum benefit from the consultants who are trying to help you, you must be open to the idea of change. All too often, clients reply to recommendations by saying things like, "That will never work here!" or "That's not the way we do things!" Well, of course it's not the way you do things. Isn't that the point?
The other session is a workshop on using the Diagram of Effects to understand the forces that tend to hold an organization or a process in an equilibrium state. It's called, "No matter what we do, nothing ever changes." My premise is that this happens when we apply a linear cause-and-effect analysis to situations that are actually more complicated than that. We need a tool that helps expose causal loops so that we can identify multiple changes that have to be done in concert to achieve the goal.


In the evenings of the two days of the conference, interesting user group meetings will take place at Skills Matter. Fortunately, the location is within walking distance of the conference venue. I'm planning to attend Acceptance Testing in the Land of the Startup on Tuesday evening, and What is software craftsmanship? on Wednesday. I think the latter is the initial meeting of a new software craftsmanship group in London.

Some free advice: Don't lie on your résumé

posted: 25 Sep 2010

I suppose exaggeration has a long tradition in résumé writing and interviewing, but I still find myself surprised by candidates who apparently think they can bluff their way through an interview just by parroting the buzzwords they read in the job announcement. Sometimes it's kind of obvious when a candidate has a good grasp of the job requirements. Sometimes...not so much.

I favor letting candidates for programming or testing jobs show what they can do. Typically, they would rather work than talk anyway. Well, the good ones would. Besides, it's gotten to the point that we can't tell anything at all from a candidate's résumé, and if a candidate shines in an interview it may only be because he/she has had a lot of practice interviewing.

Candidates need to find out whether the team they would join is a good fit for them, just as the team needs to get to know new people a bit before making a commitment. It's only fair for everyone. So, if we're going to audition technical candidates and we need to ensure they are a good fit for the team, it follows that the audition should consist of hands-on work with some of the team members. The closer the audition can be to the real work context, the better.

Companies like (for instance) Lean Dog Software and Emergn audition candidates for hands-on technical jobs, and the result is evident in the quality of their services. When I can, I advise clients to use auditioning in addition to interviewing as a way to screen candidates. It usually works quite well, although sometimes they have to reject a few who might have slipped through an interview-only recruitment process.

Early in my current project we were looking for a couple of decent Java programmers to round out the team. We asked for candidates who had the usual Java skills, plus a bit of Oracle, and if possible some exposure to development methods such as test-driven development and pair programming. We weren't asking for gurus; just for competent people who were interested in a collaborative working style.

Several candidates made it through the initial phone screen and came to the office to meet the team and have an interview. They claimed to have between 9 and 18 years of experience with Java. Think for a moment about those numbers. While there is a certain amount of complexity in the practice of software development, it is definitely not rocket science. Nine years of experience is plenty of time for a motivated individual to reach a solid Practitioner level, and in some cases a Journeyman level. By the same token, anyone with 18 years of experience who has not achieved the Practitioner level is in the wrong line of work. So, I don't think our expectations were unreasonable.

All these candidates claimed strong Oracle experience. All claimed to be Java experts. All claimed to have significant experience in TDD and pairing, spanning multiple projects and teams over a period of several years. During their interviews, all the candidates were able to describe both technical and process issues accurately, if dryly. Let me reiterate that we did not insist on previous experience with TDD and pairing; we understood that these skills are the exception rather than the rule, circa 2010. The desire to work in a collaborative style and a willingness to try unfamiliar techniques would have been perfectly acceptable traits for the new team members. That assumes they had basic competence in SQL and Java, of course. Even in those areas our expectations were not extreme.

I was a bit suspicious when the candidates were unable to tell any personal war stories from their experience; they merely recited textbook definitions of things like "inheritance" and "unit testing." Although each of them listed TDD and pairing on their résumés, none was able to describe any cases when they personally applied TDD or pairing. They weren't able to share any funny or memorable tales from the trenches. Can you really work for 18 years and have no personal war stories? Nothing interesting ever happened? Nothing amusing? Nothing ironic? Nothing surprising? Nothing edifying? Nothing discouraging? Nothing stupid? Nothing ever happened? Nothing at all? <gesture type="chin-scratching">Hmm. Spidey sense tingling</gesture>.

On to the audition phase. Each candidate paired with two team members, one at a time, while the other team member and I observed. We explained that we wanted them to act just as if they were already on the team and they were working on a User Story with a partner. We wanted to see how they interacted with their partners and their teammates, and how they approached test-driving some code. Some of the following description uses the plural; in fact each candidate was interviewed and auditioned individually. It's just that they didn't behave any differently, so the plural is a valid way to describe what they did.

I want to be perfectly clear about this: We never asked a candidate to demonstrate any skill he had not claimed to possess. Had a candidate said that he had never tried TDD before, then the tenor of the audition phase would have been different. For instance, we might have shown him the TDD cycle in the process of getting a sense of his general Java skills. But these candidates claimed that they had been using TDD for years already, and that they were very good at it. If that were true, then it should have been both easy and enjoyable for them to show us.

The project involves Java and Oracle work. We started each candidate with a simple SQL exercise. Given Employee and Department tables, they were asked to write a query to list all employees by department, including only those employees who were assigned to a department. Then, they were to write a query showing all employees by department including those who were not assigned to a department. So, first an inner join and then an outer join. That's all. They were presented with a sqlplus command line where describe commands had already been entered for the two tables, so they could see the table structure from the start.
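
For reference, answers along these lines would have been fine. A sketch (the real schema differed; dept_id and the other identifiers are stand-ins):

-- Inner join: employees by department, only those assigned to a department.
SELECT d.dept_name, e.emp_name
  FROM employee e
  JOIN department d ON d.dept_id = e.dept_id
 ORDER BY d.dept_name, e.emp_name;

-- Outer join: all employees by department, including the unassigned.
SELECT d.dept_name, e.emp_name
  FROM employee e
  LEFT OUTER JOIN department d ON d.dept_id = e.dept_id
 ORDER BY d.dept_name, e.emp_name;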

No candidate interacted with his pairing partner at all. None uttered a single word unless his partner prodded him with leading questions. No candidate bothered to run a 'select *' to see what data was in the two tables, so they would have some idea of what correct output might look like. No candidate remembered the correct syntax off the top of his head; not a problem in itself, except that no candidate bothered to ask his partner, look for a manual, or search for the answer online.

After seeing the first candidate do nothing at all for a solid three minutes, we decided to open a browser to Google and leave it there, on the screen alongside the sqlplus window, as part of the set-up for the audition. (Psst: Hint, hint.) All the candidates just sat there. At one point I said, "You know, a job is not a university exam. On the job you can Google for the syntax." That guy Googled and completed the exercise. We were thrilled that he was able to use Google, but less than thrilled at the fact he still never interacted with his pairing partner.

After a few minutes we moved on to the Java exercise. For this, we set up a simple problem to build a 'stack' class. Candidates were presented with Eclipse, already open to the project, and with a unit test class already open in the editor. The unit test class looked like this:

import org.junit.Test;
import static org.junit.Assert.assertNull;

public class TestStackOriginal {

    @Test
    public void whenStackIsEmpty_popReturnsNull() {
        OurStack stack = new OurStack(); // Note: This is NOT java.util.Stack
        assertNull("Should have returned null!", stack.pop());
    }

    @Test
    public void whenStackHasOneItem_popReturnsThatItem() {
    }

    @Test
    public void whenStackHasTwoElements_popReturnsTheLastElementAdded() {
        throw new RuntimeException("Write me!");
    }

    @Test
    public void whenStackIsEmpty_pushAddsOneItemToTheStack() {
        throw new RuntimeException("Write me!");
    }

    @Test
    public void whenStackHasOneItem_pushAddsASecondItem() {
        throw new RuntimeException("Write me!");
    }

    @Test
    public void whenStackHasTwoItems_peekReturnsTheTopItemWithoutPoppingIt() {
        throw new RuntimeException("Write me!");
    }

}

The problem is simple enough that candidates ought not become confused by the "requirements." The test cases pretty clearly suggest what to do first. The exercise presents several opportunities for candidates to show their knowledge and demonstrate how well they interact with pair partners. Personally, I had looked forward to this. I expected advanced practitioners would have something to say about the pop method returning a null reference and, possibly, about the fact that one of the test cases will pass even with nothing added to it.
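
For what it's worth, the implementation those tests drive out is tiny. Here is a minimal sketch of one possible shape (mine, not an official answer; a list-backed stack of plain Objects):

import java.util.ArrayList;
import java.util.List;

// One minimal OurStack that satisfies the tests shown above.
public class OurStack {
    private final List<Object> items = new ArrayList<Object>();

    public void push(Object item) {
        items.add(item);
    }

    // Returns null on an empty stack, as the first test case specifies.
    public Object pop() {
        if (items.isEmpty()) {
            return null;
        }
        return items.remove(items.size() - 1);
    }

    public Object peek() {
        return items.isEmpty() ? null : items.get(items.size() - 1);
    }
}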

Yes, it would have been nice to hear some discussion of these things, and maybe more. But what we were most interested in was the way in which the candidates worked with their partner, and that they seemed to understand the TDD cycle. We didn't pull these things out of our...well, out of thin air. All candidates claimed to have significant experience with TDD and pairing, as well as with Java. Therefore, it was reasonable to expect them to know how to function in a pair, how to drive code from unit tests, and how to write Java. Since they presented themselves as "senior" Java developers, we expected them to have some knowledge and opinions about OO design, too.

No candidate interacted with his pairing partner at all. The first wrote unit tests for java.util.Stack, without asking anyone if that was the point. (That's the reason we added the comment in the test case and suggested the class name OurStack.) He claimed to have been doing TDD for the past eight years. One would expect that TDD was his default mode of work. He gave no indication that this was true, or even that he had the slightest awareness that the exercise had something to do with TDD.

Another candidate immediately changed the implied method signature of the new stack class so that it threw an exception instead of returning a null reference. He did not consult his pairing partner about it. It did not occur to him that the team may have decided to return a null reference before he joined the project. He just went ahead and changed it — in a utility class, no less — without asking about it. Yeah, I want that guy on my team. Always a surprise in store. Who doesn't like surprises?

No candidate questioned the null reference. While throwing an exception might be overkill for an empty stack condition (even though that's what java.util.Stack does), returning a null reference isn't a good practice, either. The exercise was set up that way to provide an opening for candidates to show that they were aware of this, and to suggest an alternative such as the Null Object Pattern. No candidate provided any evidence that he was aware of any of this, despite claimed experience in the range of 9 to 18 years and self-assigned titles of Senior This and Senior That.
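
For the record, the kind of alternative we were hoping someone might raise looks roughly like this (a sketch only; StackItem and NullItem are names I am inventing here, not part of the exercise):

// Null Object Pattern sketch: pop() returns a shared, do-nothing item
// instead of a null reference, so callers never need a null check.
interface StackItem {
    String value();
}

final class NullItem implements StackItem {
    static final StackItem INSTANCE = new NullItem();

    public String value() {
        return "";  // neutral, safe behavior instead of a NullPointerException
    }
}

// In a stack of StackItem, pop() would then read:
//     public StackItem pop() {
//         return items.isEmpty() ? NullItem.INSTANCE : items.remove(items.size() - 1);
//     }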

One of the few candidates who actually tried to write some code implemented the pop method as follows (after his partner suggested that he actually write something instead of just staring off into space):

public String pop() {
    ArrayList stack = new ArrayList();
    if (stack.size() == 0) {
        return null;
    }
    return null;
}

He was then unable to explain how the code worked. His partner asked him about the effect of the size check. Blank stare. His partner asked him what would happen if the stack contained an entry before pop was called. Blank stare. Nine years of Java experience, by the way. Engineer. Senior Something. Sharp-looking suit. Classy necktie. Yeah.

That was the best result achieved by any of the candidates.

You can't fake this stuff. Don't try. It's a waste of everyone's time, and all you will gain is a reputation as a liar.

Lean and Kanban Europe 2010 - Trip Report

posted: 25 Sep 2010

Just back from Lean & Kanban Europe 2010. The first order of business has to be to thank Maarten Volders, the main organizer of the conference, for a job well done. It was only the second event he has organized, and the result was as good as conferences organized by more experienced people. The quality of the conference food was exceptional compared with any other conference I've attended (and food is important!). For next time, a commemorative T-shirt would be nice.

Alan Shalloway's insightful opening keynote set the stage for a great conference full of practical learning opportunities and rich interaction among the participants. It was a good summary of how and why current leading-edge thinking about process improvement came to be.

I've been wanting to get more directly involved with the Lean and Kanban community for some time now, and this was the first explicitly Lean-focused conference I've been able to attend. There is considerable overlap between "agile" and "lean" in software development. Many people who are already well known in the agile community have become active with Lean and Kanban in the past couple of years. It seems to be a logical next step in the evolution of software delivery practice.

A mindset of continuous improvement is fundamental to Lean thinking. That mindset was very evident in the nature of the presentations given by the Big Fish in the Small Pond of Lean and Kanban. They were not so much "experts" laying down the law for "followers" as they were peers offering experience reports bolstered by thoughtful analysis. There was a lot of straight talk from the trenches, deep interest in understanding what worked or didn't work and exactly why, and a sincere desire to learn rather than to profess.

A great example of this was David Anderson's talk, "Using Classes of Service with Kanban Systems for Improved Customer Satisfaction." Although the talk was not listed on the "experience report" track as such, it was entirely based on practical experience. He described a situation from a real-world project that led to the idea of classes of service, which I suppose you could say are different types of work requests. A key point is that the types or classes are not arbitrary. A basic concept of Lean development is that delay is a cause of many problems. Classes of service can be defined on the basis of the cost of delay curve, a method of understanding the business impact of delayed delivery.

The cost of delay curve for various types of work requests leads directly to four different classes of service: Expedite, Fixed Delivery Date, Standard Class, and Intangible Class. I will refer you to published descriptions of these rather than reiterating the material here. David's teams have set different work-in-process (WIP) limits for each class of service, with Expedite limited to 1. That limit prevents everything from being defined as Expedite, which would throw the whole process out of whack. He also mentioned that peer pressure keeps the number of Expedite requests low, since client managers have to justify their requests to the other stakeholders.
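
As a crude sketch of the mechanism (my own illustration; every limit except Expedite's limit of 1 is a number I made up):

import java.util.EnumMap;
import java.util.Map;

// Each class of service carries its own work-in-process limit.
enum ClassOfService {
    EXPEDITE(1),             // a limit of 1 keeps "everything is urgent" in check
    FIXED_DELIVERY_DATE(3),
    STANDARD(8),
    INTANGIBLE(2);

    final int wipLimit;

    ClassOfService(int wipLimit) {
        this.wipLimit = wipLimit;
    }
}

class KanbanBoard {
    private final Map<ClassOfService, Integer> inProcess =
            new EnumMap<ClassOfService, Integer>(ClassOfService.class);

    // A work item may be pulled only if its class of service is under its limit.
    boolean tryPull(ClassOfService cos) {
        int current = inProcess.containsKey(cos) ? inProcess.get(cos) : 0;
        if (current >= cos.wipLimit) {
            return false;  // at the limit; the item waits in the queue
        }
        inProcess.put(cos, current + 1);
        return true;
    }

    void finish(ClassOfService cos) {
        inProcess.put(cos, inProcess.get(cos) - 1);
    }
}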

The classes of service concept is described in this blog post by Jeff Anderson, in this set of slides from David's presentation at Agile 2009 (pdf), and in David's book, Kanban.

Mary Poppendieck gave another sort of experience report as a way to explain the concept of pull in her talk, "What is this thing called 'pull'?" She told the story of a time when she was a line manager at a plant that manufactured cassette tape cartridges. The day came when Japanese competitors were able to sell cassettes at half of what it cost her own company to manufacture them. In those days, there were no books or conferences about the Toyota Production System, and the word "lean" had not yet been coined.

She described, step by step, how the company came around to realizing that the only people who could devise a production system that could compete with the Japanese were the people who did the actual work on the shop floor. No workable system could have been designed by management or by outsiders. What they ultimately came up with was a pull-based kanban system similar to those used at Toyota. Something to take away from this is the fact that all this "lean" stuff is not ivory tower academic theory; it comes directly from practical experience in competitive business situations where failure equates to direct financial losses.

John Seddon's closing keynote on Thursday was refreshing, entertaining, and informative. His delivery is low-key, engaging, humorous, and British. He spoke without slides, and told the story of a consulting engagement in which he helped a service company improve its performance. It seems the company was in the home repair business. They took calls from residents who reported needed repairs, determined the urgency of the repair, and dispatched workers to carry out the repairs. Customer service was dismal.

The short version is that he advised the company first to improve its process, and only then to engage IT people to automate it. In his experience (and probably in most of ours, too), when IT is brought in at the start of a process improvement initiative, they tend to (a) automate the current process, and (b) make the solution too complicated. The result is the same old process, without any improvement, a huge bill for software development services, and an automated solution that doesn't serve. By improving their process first, the company had a greatly simplified and highly appropriate process in place before they asked for a bid for automation. A programmer offered to build the solution in three weeks for £2,000. The company offered to pay £3,000 for a solution delivered in two weeks. They got one. It worked. They're still using it.

I wasn't sure how well-received my own session might be. It represented some out-of-the-box thinking that very few of the people I've worked with in the past five years would consider feasible or even sane. It was difficult to assemble the ideas into a coherent enough form for a presentation, and I'd had the opportunity to dry-run it only with one colleague prior to the conference. She agrees with many of the key ideas, so I didn't get any feedback of the sort that could protect me from self-annihilation, such as "Are you crazy? You can't say that! That's nonsense!" I had no choice but to expose myself to that sort of feedback with the camera rolling and in front of a room full of people who know more about Lean than I do. Oh, well. Worst case: I might learn something.

I was pleasantly surprised by the positive response. It seems others also have been questioning the conventional separation of IT and "business." I was also pleased to note that about half the participants had read Implementing Beyond Budgeting and understood the impact of the annual budgeting cycle on an organization's ability to embrace Lean thinking. Several people in the room and at the conference have already been developing ways to apply Real Options to software delivery problems.

A question that came up, and that often comes up in conversations about the idea of integrating IT with business units, was how to manage economies of scale, governance, and shared IT assets if we fold IT functions into business units. My view is that the only portion of IT services that should be folded into the business is application development and support; that probably accounts for around 20% of the work of most corporate IT departments. Central IT functions should remain centralized. So, we're not talking about ripping the IT department to shreds.

Frankly, this seems rather obvious to me, and yet many people I talk to about the idea of integration seem to assume it's all or nothing, and folding IT into the business means the end of the central IT department. That isn't quite what I mean by integration, and I'm glad to have had an opportunity to clarify that with a sizeable number of people.

I suppose I was surprised by the positive response because in my recent experience I haven't met many people who are aware of these issues or who seem to perceive organizational structure as an inhibiting factor in their process improvement efforts. Very few are genuinely interested in change at all, and those tend to think about improvement strictly within the scope of the IT department or individual software development projects and teams.

It was very refreshing to hear positive comments about the session afterwards. I had almost given up on the idea that organizations can be changed at all. I don't know whether it's because the conference concentrated people together who share a common interest in general process improvement, or because North American companies are too stodgy and traditional-minded to consider change of that scope. I suspect it may be a little of both. Either way, I was very encouraged and re-energized by the conference. Really glad I participated!

There's someone else I think should be acknowledged. He wasn't on the program, and I don't think he often presents at conferences: Paul Dolman-Darrall, who works for Emergn. He participated in my session about the future of agile in Orlando, and again in my session in Antwerp as well as a couple of the other sessions I attended. He always brings fresh thinking to the table, and he has a knack for perceiving connections between ideas that other people overlook. His questions and comments always help refine, redirect, or simply correct the assertions made by speakers. He's an avid reader and always suggests books and articles for us to follow up on. He's also hands-on in the field doing lean and agile coaching and delivery, so his thoughts are not purely theoretical. This is just the sort of person I like to have in my sessions. It helps make the sessions into powerful vehicles for learning.

Unfortunately I had to miss the final day of the conference. Several compelling sessions took place that day, featuring a number of speakers who are thought leaders in the Lean software community.

Some pics...

  • David Anderson's session on classes of service
  • Attendees enjoy the fine weather between sessions
  • Old cranes at the conference venue (it used to be a dock)
  • View along the Scheldt from the venue
  • Shopping center near my hotel
  • Street near my hotel
  • Antwerp Central Station - interior
  • Antwerp Central Station - exterior
  • Paced by another aircraft, approaching the Maritimes on the way home

Appearing at Lean and Kanban 2010 this week

posted: 20 Sep 2010

Attending the Lean & Kanban 2010 conference this week, where I'll be hosting a discussion based on the following premise: The existence of an IT portfolio separate from the overarching enterprise portfolio limits the organization's ability to implement and benefit from a lean model. It's on the lean side of the hall, and not the kanban side.

To set some context: The session deals with internal IT departments in companies that do something for a living other than software development or IT services. It deals with the sort of company I think of as a tertiary technology company; that is, a user of information technology, not a creator of information technology. So, if you're tempted to ask, "How does this apply to us? We're a seven-person start-up building a cool new peer media sharing app," don't ask. This isn't about you.

The basic idea is that structure begets function. If your organization is structured in a way that makes lean adoption difficult, then lean adoption will be...well, difficult. The "hook" about the IT portfolio is a conversation-starter meant to underscore the idea that having two separate plans for two separate organizations within an enterprise leads to various manifestations of the Three M's of lean (muda, mura, and muri: waste, unevenness, and overburden).

I'm hoping that the discussion will progress beyond just the IT portfolio, and that we will be able to explore the ways in which keeping the IT organization separate from the rest of the enterprise inhibits emerging ideas like Real Options and Beyond Budgeting, as well as other ideas that are compatible with lean thinking.

I also want to suggest that treating the internal IT department as a cost center with a fixed annual budget leads to wasteful behaviors, including small-scale time-based estimation, tracking actual time against estimated time per task, gaming the budget numbers, and defining performance incentives for IT personnel that are different from those of the rest of the enterprise, leading to different and independent priorities.

Many organizations use a quadrant-style model for mapping business objectives along two axes, such as "importance" vs. "urgency," or "market differentiating" vs. "mission critical." I make the case that when enterprise planning and IT planning are separate, the axes along which planners map their objectives may be different. For example, IT planners might use a quadrant-style planning model with the axes, "value to the business" vs. "risk." Those axes sound good, but all too often "value to the business" merely represents the IT department's best guess, and "risk" merely represents the anticipated difficulty of implementation. IMHO avoiding work that is hard to do, but that supports the enterprise strategic plan, is just a blame-avoidance strategy.

Although planners speak of "alignment," as long as they are working from two different plans the IT organization will never be aligned with the enterprise strategy. When the IT department is treated as a cost center with an annual budget allocation, the tendency is for IT planning to be aimed at minimizing the blame that may be assigned to the IT department for any "failed" initiatives. Perfecting this sort of planning, as recommended by many proponents of formal IT Portfolio Management (ITPM) methods, amounts to "doing the wrong thing better."

The IT portfolio approach also impedes the organization's ability to establish a pull system at the level of enterprise capabilities, because it tends to lead to a program of discrete projects. Each project is, in effect, a batch of work. Some batches are many months long and comprise a very large amount of work. The IT portfolio itself establishes a push system for the discrete projects. High "risk" (that is, "difficult") projects may be declined in order to avoid blame, regardless of their importance to the enterprise strategic plan. The three M's are to be found in abundance in this sort of arrangement.

A common sort of disconnection between enterprise planning and IT planning has to do with needs that are obvious to the IT staff but that do not fall out from enterprise strategic planning. An example is a "technology refresh" initiative, by which I mean a project to replace aging technical infrastructure, a hardware platform, or a COTS package or legacy application. The business risk of leaving the old technology in place grows year by year, but often it is not obvious to business planners. Typically, IT staff try to sell the idea up the management hierarchy, with mixed results.

The same disconnection can occur from the other direction. When enterprise strategic planners identify a business capability that requires an investment in IT infrastructure above and beyond the usual annual budget allocation, they may simply assume the IT department has the capacity to complete the work. Unless the CIO is actually treated as a peer of the CEO, CFO, and COO, there will be no voice in the enterprise strategic planning process that can identify business capability goals that have significant implications for IT.

Another concept I'd like to explore in a lean light is "shadow IT." Given that business people have absolutely no interest in spending money to create their own mini-IT departments, what prompts them to do so? My observation is that they do it because they are driven to do it by IT departments that provide poor service. I think this is another effect of organizational structure and of separate IT planning, budgeting, and tracking.

Shadow IT isn't the answer to the problem, because it amounts to duplication, a form of waste. It can also lead to legal and financial problems for the enterprise, since shadow IT solutions may not comply with regulatory requirements or information security standards. But what if business units normally hired and managed their own application development teams, while the central IT department took care of matters that rightfully belong under central control? There's a lot to say about this and not much room here, but we'll be discussing it this week.

If you're in the neighborhood, drop in and participate. There are a lot of good sessions on the program and several well-known speakers who are thought-leaders in the application of lean thinking to software development and IT management.

TDD doesn't work. Or maybe it does.

posted: 17 Sep 2010

There seems to be a growing backlash among software developers against what they perceive to be "agile." It's understandable — in the past decade or so, that notorious buzzword has traveled the world, leaving in its wake results ranging from the astonishingly good to the maddeningly bad. Few people have bothered to analyze the reasons for the results they have seen, or even to try and understand how well they applied agile principles. It's easier to blame a buzzword or a consultant than it is to ask questions that may have answers we don't want to hear.

There's a pattern I've seen repeated over the years. When a bright new idea appears on the scene, people jump on the bandwagon enthusiastically with high hopes. Then, when the shine wears off, the same people are eager to discard the idea and look for something else. I'm beginning to think a lot of people are actually looking for something that will magically solve all their problems without demanding any effort or understanding on their part. They (seem to) want to be told what to do, step by step, by a process that will guarantee success every time, automatically; a process that can't be "done wrong," no matter what; a process that takes the blame for bad outcomes but gives the people the credit for good outcomes. "Agile" software development is at that stage today. The shine is off.

What worries me about this is not that I agree or disagree with individual professional choices about how to build software. What worries me is the final part of the repeating pattern of enthusiastic adoption followed by sneering rejection — at the point when people are ready to discard an idea, they discard everything they perceive to be associated with the idea. In effect, they throw out the baby with the bathwater.

A number of sound software engineering practices have become popularized through the agile movement. The practices aren't part of the definition of "agile;" the Agile Manifesto doesn't mention any specific software development practices. Nevertheless, many people associate certain practices with "agile" because they first learned about the practices in the context of an agile adoption or agile pilot project. Possibly for the same reason, proponents of certain practices tend to label them as "agile practices," too. That probably doesn't help people parse the useful ideas from the general background noise.

The agile movement raised awareness of several good practices, including continuous integration, automated testing, test-driven development, team collocation, and pair programming. You can probably think of more examples. In their eagerness to reject "agile," a growing chorus of voices has begun to chant the mantra that these practices are not valuable.

Throughout the first 25 years of my career, I wrote software in the same way as detractors of TDD write it today: I would quickly type in some code, and then spend the next couple of days poking at it manually, tracking it through a debugger, displaying variables on the console, and anything else I could think of to make it work (sort of). Testing (if any) was manual testing done after the fact, and the smallest scope of a test case was usually an end-to-end functional test.

So, 5 to 10 percent of my time was spent in creative activities like analysis, design, and coding. The rest was spent in debugging and in responding to bug reports. It was a frustrating, uncreative, and boring way to spend my time. That's why I was looking at franchise brochures and considering dropping out of the IT field altogether. Selling venetian blinds or running a storefront for a package shipping company would have been a more satisfying professional life.

When I learned TDD in 2002, it revitalized my enthusiasm for software development. I could still quickly type in some code (a unit test followed by some production code) and rapidly build up a chunk of functionality. The difference, however, was that I no longer had to spend the next couple of days debugging the code the hard way. I've never looked back.

And the fact I've never looked back might be the very reason I'm overlooking the wisdom of the crowd of anti-TDD activists. Could they be right? Should I stop using TDD in my work? In the interest of keeping an open mind, and cognizant of the wisdom of crowds (50 million flies can't be wrong, after all), I decided to conduct a little experiment to re-validate the value of TDD, one way or the other.

Yesterday, my pairing partner and I picked up a story card to work on. It involved adding some functionality to existing code. I suggested we forego TDD and play the story the old-fashioned way. My partner was skeptical and worried that we were recklessly speeding toward the nearest ditch. We did it anyway. If the anti-TDD crowd is right, then we would achieve at least the same results in at most the same amount of time as we could by using TDD.

Working from the back end toward the UI, we made a small database change, modified a couple of SQL statements here and there in the code, added some logic in a few Java classes, and made a small change to an XHTML document. It all looked pretty good and we felt we hadn't overlooked anything. After 90 minutes of work, we were feeling confident and happy.

Then we started hacking through the debugging and manual testing process that is always necessary when one does not test-drive code. It took us the rest of yesterday and all of today to get to "done." I think it was a fair experiment, because TDD was the only variable we changed. We still did pair programming and continuous integration. The Product Owner and the database specialist were in the team room, available for questions and assistance. Having worked on the project for a while now, we knew that we could have completed a similar task in about two to four hours, working on the same code base, in the same team room, on the same project, with the same people, had we used TDD as usual.

So, 50 million flies can be wrong.

TDD isn't a value-neutral choice. It really is better than other approaches. Those who promote it are not snake-oil salesmen or religious zealots. They just care about quality software and professionalism. Reject "agile" if you wish. Just beware of throwing out the baby with the bathwater.

Disclaimer: No code was harmed in the making of this experiment. We went back and remediated the missing tests.

Child's play

posted: 16 Sep 2010

At the family dinner table the other day, my son asked me, "What do you actually do in your job, dad?"

Me: Basically, most companies are run like this: <pantomime action="shooting myself in the foot"> "Bang! Ow! That hurt! Bang! Ow! That hurt! Bang! Ow! That hurt!" </pantomime> So, they call me in and ask what they should do. I say, "Stop doing that," and they go <pantomime action="holding a gun and not shooting myself in the foot"> "Hey, that's a lot better! Thanks!"</pantomime>

Son: Can you be more specific?

Me: Okay, here's one example. To carry out any given business initiative, people with a range of different skills have to play their respective parts. A business initiative might require marketing people, attorneys, financial analysts, software specialists, and others.

Son: Yeah, that makes sense.

Me: So, most companies organize all the people who have a given skill into the same group, and keep the groups separate from each other. Each business initiative gets chopped up into little tasks, each of which is done by some specialist in one of the separate groups. The groups try to put the initiative together by making formal requests for "services" from one another, using a cumbersome, bureaucratic process, but no one has a clear overarching view of the goal. From the point of view of any one employee, their work looks like an endless series of disconnected, isolated tasks that have no context.

Son: Why don't they just put people with all the skills necessary for the business thing in the same room so they can talk to each other directly?

Me: Well, that's what I recommend. And they go <pantomime action="holding a gun and not shooting myself in the foot"> "Hey, that's a lot better! Thanks!"</pantomime>

Son: It's kind of obvious, isn't it?

Me: Sure. As you've just demonstrated, it's child's play.

Son: Then what do they need you for? Why don't they just do it themselves?

Me: Because most companies are not run by children.

We're still writing Big Balls of Mud

posted: 07 Sep 2010

Various memory aids have been concocted to help us remember guidelines for writing good code, like SOLID, GRASP, and KISS. In general, these guidelines tend to encourage high cohesion and low coupling of software modules. When modules are tightly coupled, the code becomes hard to understand, hard to modify, hard to reuse, and fragile (a change in one module causes problems in other modules).

One of the classic examples of tight coupling is cyclic dependency: Each of two modules depends on the other. For example, two C++ classes whose header files include each other are cyclically dependent — each depends on the other. Depending on the characteristics of the programming language, tight coupling may lead to additional challenges, as well. For example, Java applications are organized into logical namespaces called packages. If programmers do not pay attention to coupling in their design, they may inadvertently create cyclic dependencies between two Java packages, even if no two individual Java classes have cyclic dependencies. When two packages have cyclic dependencies, it is not possible to release either package independently of the other; software deployment and release processes become complicated. The software works, but it is difficult to live with; in business terms, its total cost of ownership is higher than necessary.
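
To make the package-level case concrete, here is a tiny hypothetical Java example (package and class names are mine):

// Two source files, shown together. No pair of classes is mutually
// dependent, yet the two packages depend on each other, so neither
// can be built or released independently of the other.

// File 1: com/example/billing/Invoice.java
package com.example.billing;

import com.example.customers.Customer;  // billing depends on customers

public class Invoice {
    private final Customer customer;

    public Invoice(Customer customer) {
        this.customer = customer;
    }
}

// File 2: com/example/customers/CustomerStatement.java
package com.example.customers;

import com.example.billing.Invoice;     // customers depends on billing: a cycle

import java.util.ArrayList;
import java.util.List;

public class CustomerStatement {
    private final List<Invoice> invoices = new ArrayList<Invoice>();

    public void add(Invoice invoice) {
        invoices.add(invoice);
    }
}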

Most of the software design guidelines people talk about are meant to help us achieve high cohesion and low coupling in one way or another. Sometimes, we can detect possible problems in the code by keeping an eye (or "nose") out for so-called code smells, like the ones listed in this article on Coding Horror. Static code analysis tools such as Sonar can detect structural problems in code. Some development environments may offer protection from common coupling problems; for example, Microsoft .NET development tools can prevent a successful build of an application that has cyclic dependencies.

Another very common form of tight coupling is that one module in a system knows too much about the internal implementation details of other modules. The idea of minimizing this sort of coupling has come to be known as the Law of Demeter: "Only talk to your friends." The name comes from the Demeter Project, whose goal is to advance the science and art of adaptive and aspect-oriented programming. The Law of Demeter was popularized by the influential (in software circles) 1999 book, The Pragmatic Programmer: From Journeyman to Master, by Andy Hunt and Dave Thomas.

Unfortunately, it wasn't popularized enough. Despite the wealth of information available on the subject, it seems that the vast majority of people who write software for a living are either (a) completely unaware of any guidelines for sound software design, or (b) largely misunderstand the guidelines. I recently saw some Java code that looked more-or-less like this (this is a sanitized example, of course):

public Thing getThing(Integer id) {
    return new Beta().getGamma().getDelta().getEpsilon().getOmega().getThing(id);
}

This code exhibits three of the basic code smells listed in the aforementioned article on Coding Horror:

  • Message Chains
  • Inappropriate Intimacy
  • Indecent Exposure


The Message Chain is fairly obvious: The call to retrieve an instance of Thing goes six layers deep through the application. A change to any one of the referenced classes could force a change in the client class, contrary to the well-known design guideline known as Single Responsibility Principle, which states that a class should have only one reason to change. This is yet another form of tight coupling.

The message chain also exhibits Inappropriate Intimacy in that the client class has to know something about the internal structure of all the other classes referenced in the message chain. The client class should only have to know that Beta exposes a public API to retrieve a Thing instance; it should not know exactly how Beta goes about obtaining the instance.

Beta, Gamma, Delta, and Epsilon all exhibit Indecent Exposure by allowing clients to call through them to reach a third object. None of these classes can be changed independently of the others. They might as well all be written as a single class, a Big Ball of Mud.
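
For contrast, a version of the same lookup that respects the Law of Demeter pushes the knowledge down one level at a time. A sketch, keeping the sanitized names (findThing is a method name I am inventing):

// The client asks only its immediate collaborator for what it needs.
public Thing getThing(Integer id) {
    return new Beta().findThing(id);
}

// Beta knows only about Gamma; it does not expose Gamma's internals.
public class Beta {
    private final Gamma gamma = new Gamma();

    public Thing findThing(Integer id) {
        return gamma.findThing(id);  // Gamma, in turn, asks only Delta, and so on
    }
}

Of course, a long chain of pure pass-through methods is a smell in its own right (Middle Man); the deeper fix is usually to ask whether all of those intermediate layers deserve to exist at all.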

Despite the emergence of development methods that encourage and facilitate well-factored code, and the growth of the Software Craftsmanship movement, the Big Ball of Mud remains the most popular design for software, including greenfield development that has the full benefit of hindsight regarding the bad design approaches of the past.

Agile 2010 part 2: Perspectives about current state and future direction of agile

posted: 14 Aug 2010

Needless to say, every session and every conversation throughout the week added to my understanding of how people perceive agile these days, and where they think we should be going. There were quite a few informal discussions of the topic every day. For me, the following contributed strongly to this area of learning:

The Oath is simply a statement that we will consider any idea regardless of its source, rather than limiting ourselves to some predefined and branded package of ideas. It came about as a reaction to all the pointless bickering in the community over the past few months, and to the tendency of agile proponents to judge everything according to whether it fits a predefined model of "agile," regardless of whether it adds value in a given situation.


The ICA is a community response to the circular debate about certification, and about who gets to decide how to certify people, and so forth. Companies can join the ICA, but it is not controlled by any one corporation or organization. It is a work in progress, but already has a pretty well-designed framework for a certification program.

The two sessions listed were both mine, so I can safely summarize their intent. I had hoped the round table discussion would help us come around to a consistent view of where "agile" stands as of today and in which direction we, as a community, want to take it next. The best practice/process smell session was meant to help us identify cases when we have become stuck using a suboptimal process because we've had to work around a tactical issue, and then never got around to solving the root cause. Neither session turned out as I had expected.

Let me review the round table session first. If we take the goals of agreement on a common vision and selection of a clear direction as the acceptance criteria for the session, then it failed. Fortunately, the unexpected (to me, anyway) outcome of the discussion provided valuable learnings (to me, anyway).

The discussion was very open-ended and was largely participant-driven, and yet I did have a few ideas I wanted to float to the group. I tried to relate the present state of agile to the canonical innovation adoption curve and to map out a continuum from "heavyweight" to "lightweight" processes. I was prepared to introduce a few other angles on the topic, too, but these two along with ideas the participants brought with them were more than enough to fill the three-hour time slot. Several people stayed afterwards to continue the discussion on their own.

I wrote "2000" and "2010" on two sticky index cards, and applied them to a large drawing of the innovation adoption curve. I placed the 2000 card at the edge of the Chasm, and the 2010 card near the end of the Early Adoption phase. This represented my concept of the penetration of "agile" at those points in time. After a moment of silence (one of the few, I'm happy to say), one of the participants walked up to the wall and repositioned the 2010 card at the top of the curve, mid-way through the Early Adoption phase. Heads nodded in agreement.

I talked through an analogy I had intended to demonstrate, but I never purchased the food coloring. Picture a large container of clear water. That's the IT industry circa 2000. Now picture a small vial of dark blue fluid (it was to be water with blue food coloring). The agile movement introduced the dark blue water into the IT industry — pour the contents of the small vial into the larger container. Ten years later, the IT industry is pale blue. It will never be dark blue, as the proponents of agile envision success, but it is definitely more blue than it was before. This is a sort of success. The question is, where do we go from here? We aren't at the same state we were in 2000, so the same solutions probably won't work. How do things differ today?

Some participants related to the analogy, but most disagreed with the premise. They felt it was possible to make the water darker, and that we should not accept poor implementations of agile as "successes," not even partial ones. One participant took the idea in a different direction. Laurent Bossavit noted that if we began with milk and added a drop of something, a living culture would begin to grow in the milk and the result would be cheese. One does not debate cheese with a Frenchman. Beyond that, I found his analogy powerful. The colored water is the result of planting seeds that never grow; the cheese is the result of beginning a process that takes hold and moves forward on its own. (I also like the play on the word "culture," but that's neither here nor there.)

Laurent's take on the growth process provided a segue to another notion I wanted to explore with the group. My understanding is that the people who wrote the Agile Manifesto originally called their meeting the lightweight process summit. They were interested in removing waste from IT processes, to transform them from "heavyweight" to "lightweight." Apparently, some members of that group feared that people would misunderstand "lightweight" to mean "not robust." They decided to look for another word. They found one.

I wanted to get back to the fundamental idea of making our processes lighter. The agile community has benefited from an infusion of ideas from Lean Thinking in the past few years. Key lean ideas pertaining to the idea of lighter processes include:

  • Perfection — always keep perfection in mind as a target, even though we understand perfection is not actually achievable;
  • Continuous improvement — use the Five Focusing Steps from the Theory of Constraints as a framework for improving our processes, moving them closer to perfection; and
  • Focus first on value, second on continuous flow, and third on removing waste from our processes.
To start this part of the discussion, I posted a large drawing of a horizontal line labeled "heavyweight" on the left and "lightweight" on the right. I had prepared several index cards with the names of various approaches, models, or practices in the areas of organizational structure, process framework, and development practices. I held up each card, the group reached enough of a consensus about word meanings to prevent the discussion from stalling, and we collectively decided where on the spectrum the item belonged.


These were, for example, things like "Linear SDLC process," "Time-boxed iterative process," and "Flow-based process," in the category of process frameworks, along with things like "Time-based task estimation," "Relative sizing of work items," and "Single-piece pull w/o task estimation" in the category of practices. There were other categories, too. We laid these things out along the line and associated items that were prerequisite to others.

The idea was to map out a continuum of agile practice from heavy to light, with the assumption that our target is to make things as light as possible. At first, the group bunched most of the items in the middle of the spectrum. They protested that "we can't do away with practice ABC because of constraint XYZ." True. They protested that a linear process can be lightweight and an "agile" process can be heavyweight, depending on how one does things. True. One participant pointed out that single-piece pull may simply be inappropriate in the context of software development. Possibly true, especially in the absence of a long list of prerequisites that are barely even on the horizon of general management and IT practice today. They were focusing on the "as possible" part instead of on the "as light" part of the statement.

These are present-day-reality truths; they are not inherent attributes of the models and practices on the chart. I tried to guide the discussion toward distinguishing between the inherent "lightness" of each item on the one hand, and the practical reality of implementing that item in participants' current circumstances on the other. In that light, the group gradually agreed to spread the cards out a little more, so that we could see a sort of progression from heavy to light. My goal in this was to get us a little closer to an idea of "perfection" (in the lean sense) that we could use as a reference for deciding whether an opportunity for improvement exists in our work environment. Without that sort of reference, how can we know whether improvement is possible or what sort of change would be helpful?

This seemed to open the sluices for fresh ideas. Everyone in the room had been thinking about this issue for some time already, and they all had great ideas. We brought up some of the better-known emerging ideas such as Capabilities-Based Planning, Real Options, Beyond Budgeting, and restructuring organizations such that application development and support functions reside directly in the business units they serve. We also questioned some of the Lean concepts that are being talked about in agile circles these days. In particular, the whole concept of the value stream was questioned; another notion, the value network, was proposed by one participant. The same participant questioned whether Throughput Accounting was a good fit for software development. (Sorry, I've forgotten his name; if you know who it was, please post a comment or send me an email. He works for Emergn, if that helps jog the memory.)

My second session was entitled, "Today's best practice is tomorrow's process smell." I made some foolish assumptions in preparing this session. The first was that an "expert" level session would draw participants who were already applying agile principles and practices very well, and who had reached an impasse or a challenge of some sort in moving ahead with continuous improvement. That assumption led to other assumptions about how much context the participants would need in order to contribute to the discussion. I prepared too few physical tokens of the content, such as charts, cards, handouts, or slides, and gave too little introduction to set the context for the discussion.

I had hoped participants would bring examples of work-arounds or "painkillers" they had implemented in their own organizations, and that had become institutionalized, thus preventing them from removing the root cause problems that had led to the work-arounds in the first place. I had provided a few examples in the session proposal as well as on my blog to give people a sense of what I had in mind. It wasn't sufficient to set expectations for the session. To distinguish between a "good idea" and a "work-around," a person has to have a concept about heavy-to-light process improvement similar to what I introduced in my first session.

I learned that the majority of people who believe they are already practicing "agile" do not have a deep understanding of agile values and principles, and simply cannot judge how effective their current practices are. They can see that whatever they are doing today in the name of "agile" is working much better than whatever they were doing before, but they don't really understand why. They are simply following a defined process that was taught to them by a consultant or that they read about in a book. They cannot tell whether anything they are doing on their projects is a work-around as opposed to a good practice; their assessment of any given practice is based not on whether it adds value or incurs overhead, but merely on whether it matches the defined process.

Rather than painkillers or work-arounds, participants described issues with agile implementation that are common in the early months of an organization's initial experiments with agile methods, when only a few people in the organization are interested in change, and the change agents have no past experience with the new methods. Most of their problems are organizational impediments, and not problems with agile methods as such. In describing their problems, participants did not explain how they had crafted a work-around to a specific problem, and how the work-around had become institutionalized. Instead, they simply stated a problem and waited for me to hand them a solution; and the problems were of massive scope. This sort of thing was far afield of the intent of the session.

I'm afraid these participants did not come away from the session with much useful information. I can say that I learned from them, though. I gained a clearer idea of how agile is being implemented in the industry, and how well-understood the concepts really are. Going back to the blue water analogy, I would say the water is very, very pale blue. Agile penetration is wide, but not deep.

Another informative angle for me was that a few participants believed (and still believe, I guess) that what they are currently doing in their organizations represents not just a "good" agile environment, but an exemplary one that requires no improvement at all. Usually, their descriptions of the current state of their organizations are horrifying.

In one case, a participant who holds a Ph.D. in agile methods described a nightmarish 1980s-style dysfunctional organization that uses a pseudo-RUP process in a heavily-siloed organizational structure, decorated with every sort of high-ceremony, high-overhead gate, check, and control you can imagine. This in itself is not a problem, provided the current state is recognized as a problem and the change agents in the organization are working toward continuous improvement. In this case, if I take his description literally, the organization settled into this mode of work some six years ago and has not changed anything since. The most puzzling bit was that he seemed to be proud of it all. I must say I admire his perseverance, at least. In his place, I think I would have long since traded my laptop for a push-cart, and turned to selling popcorn on the street.

Since these were my sessions, I'll review the feedback I received, and how I intend to use the feedback. Feedback often falls into one of two patterns: Reviews are bunched in the middle, or reviews are bunched on each end (U-shaped reviews). Both these sessions garnered U-shaped reviews. For the round table, all the formal review forms that were submitted were strongly positive. However, about half the people who started the session decided not to return after the break. I take that as a negative review (they voted with their feet), although it does not offer me any information to improve the session in future. For the process smells session, the formal reviews included both very positive feedback and very negative feedback that included concrete recommendations for improvement, and nothing in the middle. Thanks to everyone who took the time to provide the latter. I think the process smells topic deserves more attention, and I will revise this presentation to include better contextual information.

Agile 2010 part 1: Overview

posted: 14 Aug 2010

If you were following the Agile 2010 conference on Twitter last week, then you probably know more about it than I do. There was a live Twitter feed running on several large-screen monitors, but I didn't read it, and I didn't go online very often during the week. I wanted to be fully present and immersed in the experience, so I probably missed a lot. FWIW, here are my impressions.

With approximately 1,400 participants, this year's conference was slightly smaller than the last two or three, but still quite large. I was happy to participate in several excellent sessions, but disappointed that I had to miss a few, since there were two or more compelling choices in every time slot. At the same time, there were a lot of very smart people whom I was happy to meet and listen to between sessions, at mealtimes, in the Open Jam areas, and at conference- or vendor-sponsored social mixers. Most of the "big names" whose proposed sessions were not accepted for the program attended the conference anyway, and were very active in other people's sessions and in informal settings throughout the week. So, is large size a good thing or a bad thing for a conference? Yes.

I had no particular plan in mind at the start of the conference. In hindsight, it looks as if my conference experience tended to fall into a handful of categories:

  • Learning about other people's perspectives about the current state and future direction of agile adoption and the agile community;
  • Learning how agile adoption can affect people who work in roles or disciplines other than my own;
  • Learning about effective coaching and how to grow as a coach; and
  • Learning about techniques and tools for developing code, refactoring, and debugging legacy code.
I'm going to break this up into separate blog posts because of the length.


Agile 2010 part 4: Effective coaching and how to grow as a coach

posted: 14 Aug 2010

I spent quite a bit of my time during the week in this area. I attended four sessions that directly or indirectly addressed the topic, and participated in two informal group discussions about how to improve the state of the art of coaching in the agile space. The coaching-related sessions I attended were very different from one another, and all were presented by people who do this sort of coaching for a living: David Draper, Arto Eskelinen, Sami Honkonen, Rachel Davies, Gino Marckx, and Michael Sahota. It's worth noting that there were many more sessions on the topic of coaching; it was a strong focus of the conference program.

Retrospective for an agile coach (David Draper)

David shared three past experiences in coaching agile teams. He set out a timeline for each, consisting of a horizontal line representing time and a series of boxes highlighting key events in the project. The timeline was general and did not represent absolute increments of time. Each box contained a small smiling or frowning face icon, and some boxes contained one of each.

David summarized the course of events in each case and described what had happened at each of the key points in the timeline. Then, working in small groups, we talked about what had happened and how a coach might have approached each situation differently.

The value of the session was that each of us gained new insights and ideas from our peers that can help us handle similar situations in our own work.

At one point, David described an incident in which the team presented a project roadmap to the business stakeholders that demonstrated the team would have to increase its velocity exponentially (and here I do not use the word in its loose sense) in order to meet the prescribed deadline. The stakeholders did not understand the implication, and told the team to proceed with the plan. David wrapped up the story with a quotable quote: "When you tell people what they desperately want to hear, they don't necessarily detect the fact you're saying it tongue-in-cheek."

That's a very valuable lesson to take from the story. All too often, we who understand how agile works assume it's perfectly obvious one cannot play games with velocity and force a given scope to fit into a given schedule. It's not so obvious to others, especially when an on-time delivery of the full scope is in their vested interest. We have to lead them by the nose toward understanding.

Effective questions for an agile coach (Arto Eskelinen and Sami Honkonen)

This session was very different in content and in presentation style from David's. In a very well-structured series of short explanations and short workshop activities, Arto and Sami introduced us to a coaching model called GROW: Goal, Reality, Options, and What To Do. They reinforced a concept we should all know, but that we can forget in the heat of the moment: Our role as coaches is to create awareness and a sense of responsibility, and not merely to hand people instant solutions to their problems.

A key point was that the way in which we formulate questions has a great deal to do with whether we will receive meaningful and actionable responses. They pointed out that a good coaching question has the following characteristics:

  • It leads to exploration
  • It aims for a descriptive answer
  • It avoids judgment
  • It avoids unproductive states of mind
Generally, these will be What? When? How many? How much? questions. Who? and Why? questions can easily lead to defensiveness and blame-shifting.


Arto and Sami suggest that unproductive states of mind can be created by asking questions that provoke:

  • Laying blame
  • Justification
  • Denial
  • Guilt
  • Shame
  • Obligation
It's hard not to notice the similarity between this list and the levels of personal responsibility defined in Christopher Avery's Responsibility Process (short of the final level, Responsibility):
  • Denial
  • Lay Blame
  • Justify
  • Shame
  • Obligation
So, if we can keep in mind basic principles of personal responsibility, we won't go far wrong in formulating constructive questions with coachees.


In discussing the session with Arto and Sami before we began, I offered the following negative example: "Did you write that code on purpose, or did you just let your cat walk across the keyboard?" They agreed that this was indeed a negative example. It in no way leads to exploration; it does not guide the person toward a descriptive answer; it is highly judgmental; it creates a defensive state of mind.

I must admit it was not a random example. Not so long ago, I was sitting with a programmer on a team I was coaching and reviewing some of her code. I asked, "Have you heard of the Single Responsibility Principle?" She cheerfully recited a textbook definition of SRP. The code itself did not bespeak a profound understanding of SRP, but at least I had dodged the sarcasm bullet. It was sheer dumb luck that she took the question so literally. In any case, the question did not lead to any improvement of awareness or responsibility.

In the real case, I found that in subsequent interactions with that team member I was able to help her see potential improvements in her code by asking more specific questions about particular sections of the code, and by focusing on questions about how the structure of the code might affect future modification and support of the application. It would have saved us both some time had I participated in this session first.

The coachee might not realize there is a problem at all. In that case, we need questions with the characteristics listed above to help him/her reach an understanding that there is something to be done. In the personal example, the team member did not perceive any problems with her code. A few exploratory questions could have set the stage for a constructive working session.

In the Goal phase, we are trying to find out what problem the coachee wants to solve, and how he/she will recognize success when it happens. We want to help identify a goal that is:

  • high enough
  • positive
  • meaningful
  • specific
In applying this advice to the personal case, I find myself at a loss to define a good goal prior to working through at least the Reality and Options portions of the GROW model. A preliminary goal statement might be:
  • Carry out a refactoring that takes the class closer to a clean design.
There's no assumption that we can completely refactor the class in one fell swoop, so at least it isn't unrealistic. It sounds positive. On the other hand, it isn't really meaningful, and definitely not specific. We don't have enough information yet to know which of the several responsibilities the class supports would be the best choice for the initial refactoring exercise.


At this point, I'm thinking the GROW model doesn't necessarily have to be applied sequentially, although that's the way the workshop exercises introduced it. Looking back at the real case, I thought that maybe the Reality phase would enable us to craft a better goal statement.

According to the GROW model, good Reality questions are designed to explore the coachee's view on the current state. We want to find questions that:

  • test assumptions
  • explore different angles
  • expose feelings
Relating this to my personal example, it was clear that the team member needed guidance to discover the problems in the structure of her code. I did get around to this, despite my poor opening question. Feelings exposed: Pride of workmanship. Assumptions tested: What does SRP actually look like in real life? Different angles explored: What sorts of changes would improve this code? What sorts of changes are possible? New feelings exposed: Desire for greater pride of workmanship. No feeling of blame, judgment, or obligation. So far, so good; but we still don't have a good goal statement.


The Options phase of the GROW model looks for alternate paths to the desired state. To that end, questions should:

  • state existing ideas
  • challenge limitations
  • include discussion of "stupid" ideas
  • bring out at least 3 alternatives
I have to pare down the details of the real case considerably in the interest of space. Suffice it to say the class in question was to be part of a webapp generated by a relatively rigid framework. The framework would call methods in this class to execute custom code as part of its built-in request/response cycle. The obvious and crude implementation (and the one recommended in the product tutorial) was to jam all the custom code directly into the callback methods. The problem in real life is that this approach results in a bloated class that performs several distinct functions.
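
To illustrate the alternative (a rough sketch only; the framework, class, and method names below are all invented stand-ins, not the client's actual code), the callback can stay thin and delegate each distinct function to a focused collaborator:

// Minimal stand-ins for the framework's request type and the domain object:
class Request {
    final String body;
    Request(String body) { this.body = body; }
}

class Order {
    final String body;
    private Order(String body) { this.body = body; }
    static Order from(Request request) { return new Order(request.body); }
}

// Each secondary responsibility extracted into its own small class:
class OrderValidator {
    void validate(Order order) { /* validation rules would live here */ }
}

class OrderPersister {
    void save(Order order) { /* persistence logic would live here */ }
}

// The class the framework calls back into: thin, readable, and each
// responsibility can now be refactored or tested on its own.
public class OrderPageController {
    private final OrderValidator validator = new OrderValidator();
    private final OrderPersister persister = new OrderPersister();

    public void onRequest(Request request) {   // the framework's callback
        Order order = Order.from(request);
        validator.validate(order);
        persister.save(order);
    }
}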


Relating the Options phase of GROW to the personal case, I could have used well-crafted questions to help expose the team member's existing assumptions about the structure of the class and the possibility of refactoring the code, to challenge the apparent limitations imposed by the rigid framework, explore a few "stupid" approaches in hopes this might spark some out-of-the-box thinking, and bring out 3 or more alternative approaches to improved design.

At this point, we could have returned to the Goal step armed with information obtained from the Reality and Options steps. We would have enough information to craft a goal for a first pass at refactoring the class that was high enough (it wouldn't be easy in any case), positive (it would extract the largest chunk of secondary functionality), meaningful (it would clearly improve the readability of the code, if nothing else), and specific (it would address one particular secondary responsibility).

That takes us to the What To Do step, the W in GROW. Arto and Sami described this as "agreement to walk the path." In the case of my personal example, we could define a specific refactoring that had a clear definition of done, an observable outcome, and a target date. The session was about coaching in general, and not all coaching has anything to do with technical issues. Agreement to walk a path of improvement in mindset, process, or principles may mean that the coachee agrees to begin walking the path, or to resume walking a path from which he/she has strayed. In the more general case, the What? questions should examine:

  • What to do?
  • When to do it?
  • Observable outcome?
  • Obstacles removed?
  • Uncertainty cleared?
You might think that the GROW model applies best in general coaching situations, and may have less to offer in a technical coaching situation. In fact, technical professionals have very sensitive emotions, especially with respect to their own code. There are a lot of programmers out there who perceive themselves as very advanced, very "senior" professionals who already have a long track record of success. Yet, most of them have no inkling of Software Craftsmanship. We have to be able to explore relevant information and bring out unpleasant facts without touching the wrong buttons. Otherwise, we won't get anywhere.

This was a truly excellent session. If you have a chance to see it on another occasion, don't miss it. For more information, download http://sami.honkonen.fi/checklists.pdf.

Agile Coaching Dojo (Rachel Davies)

Rachel was intrigued by the coding dojo concept and wanted to see whether it could be used to improve coaching skills. This session was a run-through of a dojo structure she created for coaches.

Working in groups of six or seven, we took the roles of seeker, coach, observer, and facilitator. The seeker was the person seeking help: a coach who has a challenging coaching situation and seeks advice from his/her peers. Each group had one of these. The coach is just what the name implies. Each group had 2 or 3 coaches, who interacted with the seeker one at a time for five minutes each. The observer did not speak, but observed the nature of the interaction between each coach and the seeker, and provided feedback after all the coaches had taken a turn. The observer's feedback was not coaching advice, but was limited to comments on the interaction itself. The facilitator kept time and kept everyone within the boundaries of their roles.

Everyone chose the role they wanted to play. The first step was for the seekers to choose a coaching problem from their own experience. They discussed it in roundtable fashion with their group. Then, each seeker left their original group and joined a different group, where no one had heard about the problem. Then the kata began. Each coach interviewed the seeker for five minutes, one at a time. Other group members did not speak during the interviews. Observers observed, and made notes if they wished. After the interviews, the observers discussed their observations of the style of interaction with the rest of the group. At the conference, there was insufficient time for multiple rounds, but in an actual dojo everyone would have a chance to play each of the roles at least once.

It seemed a worthwhile exercise to me. The seeker at our table said he came away with some concrete ideas about how to coach the people in his organization through the problem he was having. It reminded me somewhat of a session at Agile 2007 along the same lines. In Coaching 101: A Role-Playing Session, Jeff Nielsen and Dave McMunn led us through contrived coaching situations in groups of three: Coach, coachee, and observer. The session leaders and some of their colleagues acted as facilitators, to answer questions about the process and to keep time. In that session, the facilitators created contrived situations; coaches could just as well use real scenarios. The three-person format may allow for more-focused interaction and faster role changes than the seven-person format.

Another possible advantage of the three-person format is that there may not be enough coaches in any given local area to form even a single group of seven for regular practice. For example, David Draper was in our group, and he mentioned that a coaching dojo would be a great thing for his company, but they have only five agile coaches in the London area. Using the three-person format, his company could organize an internal coaching dojo with the five coaches plus another employee in a role that calls for coaching skills, such as a project manager or Scrum Master. The six could have a very productive 90-minute dojo session with ample opportunity to act in all three roles, and to explore several coaching scenarios from their various clients. Coaches who work in the same general area, but who work for different companies or independently, could also organize their own dojos.

Regardless of the format, repeated practice of the basics is as important for honing coaching skills as it is for honing programming skills. I hope Rachel will continue to refine the idea and move it forward.

Quadrants of Effectiveness Game (Gino Marckx and Michael Sahota)

This is a board game for five participants designed to illustrate strategies for maximizing the effectiveness of a team. It's described on Gino's blog and in this set of presentation slides. After the session, I asked Gino if the game materials were available for purchase, because I want to use the game with my own clients. He said the materials would be available sometime soon.

I won't go into details about game play. The basic idea is that each group of players makes moves based on cards, as in trading card games. Players move markers around a board. Each player has two markers — one that tracks the urgency of the current move, and one that keeps track of the number of points the player has accumulated.

The point of the game is to demonstrate the effect of different strategies for maximizing value. Certain aspects of the scoring are not explained at the outset, so that players will craft individual strategies. At some point, the facilitator explains that the real goal is for each team to maximize its points. The five players at each table constitute a single team, not five individual competitors.

At our table, people played competitively at first. Once the team scoring aspect was explained, the team's dynamic changed to a collaborative style. Using the collaborative strategy, we finished the session with the second-highest score. The highest-scoring team had also changed to a collaborative strategy early in the game.

The results also brought out another aspect of team dynamics: The speed with which decisions are made. We did not count the number of rounds played at each table, but we speculated that the speed of game play may have accounted for the difference in scores between the top two teams. Our team was very slow to make each move. The team members discussed options and possible moves for some time before reaching a decision. We wondered whether this caused us to play fewer rounds in the allotted time, and therefore to accumulate fewer points than the top-scoring team.

The relevance to real team dynamics is that we cannot take too long to make decisions in our projects, because events will overtake us. We have to be able to make decisions quickly based on limited information. That is not an explicit part of the Quadrants of Effectiveness Game; it is simply an observation based on the outcome we obtained in the session.

In general, games are a great way to bring out important points we want our clients to understand so that they can make well-informed decisions during the agile transformation. This particular game is especially powerful and simple enough to learn quickly. I relate this session to the theme of improving coaching skills because I like to use games in my coaching of non-technical people and non-technical practices.

Informal discussions of coaching in the agile space

On Tuesday morning, I stumbled into a great conversation. In search of breakfast in the hotel restaurant, I encountered a group of people waiting for a table. I knew some of them, so I asked if I could join them for breakfast. They warned me they were going to have a breakfast meeting, so I could sit with them as long as I kept my head down and my mouth shut. I had planned to keep my head down anyway so that I could shovel food into my mouth, so I agreed.

This was no random assortment of breakfast-seekers. The instigator was Rachel Weston, Director of Services at Rally Software Development, and the group comprised Lyssa Adkins, author of Coaching Agile Teams: A Companion for ScrumMasters, Agile Coaches, and Project Managers in Transition; Rachel Davies, author of Agile Coaching; Alan Atlas; David Chilcott; Julie Chickering; and Mark Kilby — all agile coaches.

What brought these big brains together was the growing need for coaches in the agile community, and the lack of support for coaches who want to improve their skills or exchange ideas with peers. These individuals, who are leaders within the agile community in the area of coaching coaches, are not necessarily leaders in the broader field of coaching. They were full of ideas, and Rachel Davies agreed to set up a forum on the Agile Alliance website to serve as a collection point for links, contacts, and other information relevant to coaching the coaches.

Unfortunately, I have been unable to access that forum. I had a series of problems getting logged into the site at all, and ultimately gave up. I'm very interested in being part of the coaching-the-coaches initiative, but not interested in websites that prevent people from getting in. I expect eventually there will be another entry point for internet access to these resources. Rachel made it clear that she was willing to get things started by creating the forum, but that she would not be able to carry the load of volunteer work to keep the thing going. I think we will need a user-friendly site that doesn't ask for passwords and annual membership fees.

The group agreed to host a discussion in the Open Space area later in the week. I attended that, and was pleased to see at least twenty professional agile coaches in attendance. The group generated even more good ideas, which were noted on flip chart sheets. I think this initiative will result in a higher level of coaching skill throughout the community.

Agile 2010 part 3: How agile adoption can affect people in other roles

posted: 14 Aug 2010

I attended a couple of sessions aimed at people who usually work in roles different from my own. I wanted to gain a better understanding of how they perceive agile methods and how they see themselves participating in agile projects.

Dance of the Tester (Janet Gregory)

I came to the agile community from the standpoint of a programmer. I had been working in the role of Enterprise Architect at the time our company embarked on an agile journey. I dropped the title to join an XP team in the role of Team Member, and learned a lot about how different types of work are performed in an agile workflow. Although I perform the work of other roles, including tester, analyst, project manager, Scrum Master, technical lead, and team coach, it was informative to learn how an agile implementation looks and feels from a perspective other than that of a programmer.

Janet Gregory's intent was not to provide me with insight into the perspective of a software tester. Dance of the Tester is an introductory overview of the role of tester on an agile project. Although it is an introductory session, I learned a lot from it because it looks at agile work from a perspective other than my own. I filled the back of an evaluation form with specific praise for the session, and could have filled another sheet.

She emphasized certain aspects of agile thinking, including whole team, fast feedback, verbal communication, automation, retrospectives, and (interestingly) discipline. In describing the difference in mindset between a traditional tester and an agile tester, she used a phrase that struck home and that I intend to borrow: constructive skepticism. The tester's goal is to contribute to the whole team's effort to deliver high-quality software, and not merely to look for bugs in the code. A mindset of constructive skepticism keeps the tester's role positive and collaborative. It's a powerful concept.

To help explain how different types of testing occur on an agile project, who typically performs them, and why they are done, Janet used the canonical agile testing quadrants diagram to frame this part of her presentation.

Janet described a comprehensive roadmap for testing activities in an agile context. It spanned project planning, release planning, iteration pre-planning and planning, testers' direct contributions during iterations, system testing, and support for deployment and releasing. It was clear throughout that the information was based on direct, practical experience and any discussion of theory was pinned to field experience.

If you want more from Janet Gregory, check out her website at http://janetgregory.ca/ and her book (with Lisa Crispin), Agile Testing: A Practical Guide for Testers and Agile Teams.

Beyond Staggered Sprints: Agile user experience design teams in the real world (Jeff Gothelf)

Another session that provided great insights into non-programmer perspectives was Jeff Gothelf's "Beyond staggered sprints: Agile user experience design teams in the real world." I was especially interested in this because I'm currently working on a project that is heavily driven by the user experience design, at a client that is implementing agile methods for the first time. Jeff described the experience of the UX design team at his company after the software development group decided to implement agile methods — without consulting his team or anyone else in the company. It was a classic bottom-up agile adoption (and still ongoing).

There was good attendance at the session. I didn't count, but I think there must have been at least 40 people in the room. I didn't recognize anyone at all. That was my first indication that I was in the right place to learn about an unfamiliar perspective.

Jeff described each of four "attempts" to align the UX design team with the agile processes the software group wanted to use. At the outset, the UX team operated as an internal service that did work for several projects concurrently on behalf of four lines of business. This structure posed no problems in the context of the waterfall process they had been using at the company. When the software group suddenly and unexpectedly (from the point of view of the UX team) introduced "agile" methods, the UX team had to adapt without having any previous experience or knowledge of agile, and without the benefit of any training or coaching. Jeff scoured the internet to find that there was a dearth of information focusing on UX designers. He found some value in Jeff Patton's work, and engaged Alistair Cockburn to have a look at the process, but on the whole the UX team was on its own.

First attempt. The software group adopted a two-week iteration length. So, the UX team's first attempt to align with this model was to squeeze the waterfall process into two weeks, complete with all the handoffs, reviews, and quality gates. I trust I need not tell you how well that worked. They also mimicked the software team's use of whiteboards. They put 14 rolling whiteboards in the UX design work space. Jeff used internal surveys to gauge people's response to the changes. One respondent to a survey during this phase of their transition wrote, "The whiteboards do not help organize the UX team's work at all. Instead, they block out natural light from the windows and create a harsh and uncreative visual environment."

The UX team also stopped doing functional specs in advance. They did not understand how to compensate for this in an agile process, so they began to use the cards on the whiteboards to capture the functional specs. The boards had multiple purposes and were cluttered with all kinds of information. The information that was no longer being provided by functional specs also bled over into the wireframes. They came to be heavily annotated with all sorts of details that normally are not included in wireframes. This was a direct consequence of their being told that functional specs aren't "agile," while no alternative was offered.

The UX design team produced a large chart to explain how they felt about the whole thing. Their perspective was aptly summarized by a box on the chart labeled, "Agile = project never complete." As far as they could tell, agile work amounted to an endless series of small tasks. There was no endpoint, no celebrations, no larger context. Some of the other boxes on the chart (and there were many) read as follows:

  • Negative environment that fosters failure and generates low morale
  • Best experience never gets built
  • Deadlines before information
  • Conjured sense of urgency
  • No insight into larger goals
  • No ideation time - skills not being used fully
  • Minimum viable product (?)
  • "Going for the bronze!"
My main take-away from this is that all those who will be affected by a change of process have to be engaged during the planning stages for implementing the new process. All these problems occurred because the UX design team was blind-sided by the sudden shift to agile methods. Under the heading of Misery Loves Company, I can say that people in other roles also experience these feelings during the first few months of agile adoption.


Second attempt. The UX design team crafted a Style Guide to provide people with standard information about design elements, so that every project would not have to start from Square One. It contains color palettes, font standards, HTML forms standards, and many other details. They also started to build a library of reusable UI assets: widgets already customized according to the Style Guide. These innovations drastically reduced the amount of time needed to generate UI elements for projects. It also opened the door for people other than UX designers to pick up design tasks, depending on how the workload looked at any given time.

And the downside...

  • "Everybody's a designer" — zombies?
  • Cycles valued over creativity
That's a much shorter list of downsides than we saw from the first attempt, but these are qualitatively serious downsides. Why would a designer continue to work in such an environment at all?


Looking at it from a programmer's perspective, I can say that we like to abstract the mundane, repetitive stuff out of our way so that we can focus on the more-interesting problem solving aspects of our work. We use reference architectures, design patterns, third-party libraries, frameworks, templates, UX Style Guides, and anything else that helps us avoid creating the same code over and over again. What is left over is the real brainwork — the reason we entered the field in the first place.

When we do the same thing with UX design work, what is left over after everything has been abstracted away? UX designers want to create the first design of a New Thing. They want to create an experience, not just another cookie-cutter website. When all they have to do is make selections from a predefined library of assets, where's the creativity? What's the point?

Third attempt. The UX design team was feeling disconnected from the whole process. They put everything in-line; that is, the UX designers are now part of specific cross-functional Scrum teams, each dedicated to one line of business. They carry out usability testing every other week; they bring in real (external) customers for this. The UX team incorporates feedback from these sessions into their current and future projects. Things got better.

Fourth attempt (ongoing). Jeff described this as bringing everyone together and then pulling everyone apart. Before starting the routine iterations of a new project, everyone involved — not only the UX design team — participates in a design session. They do collaborative sketching in which they generate 6 ideas in 5 minutes. They then take 5 minutes to boil it down to 3 or 4 ideas. Another session results in a single idea. They keep all the sketches for future reference.

Jeff showed us examples of sketches that were created during these sessions. He asked the group to identify the primary role of the person who drew each sketch. It was easy to see which were created by programmers, managers, and so forth as opposed to "real" designers. They are finding value in this deep level of collaboration, even if most people aren't trained artists.

This is where the company stands as of the day of the presentation. It has taken them over a year to get to a place where the UX design team does not feel excluded and discouraged. Jeff said that the programmers are now asking the UX team to size their tasks in terms of story points. They will try that very soon. Personally, I'm not too enthusiastic about that. The nature of the work is very different. There is no reason to assume both should use the same techniques to manage their tasks. But time will tell.

Personally, I wanted to ask Jeff what the original business drivers had been to try agile methods in the first place. Had it been a response to business problems that had been measured and assessed, or had it just been something the technical staff was keen to try? I didn't get a chance to ask him directly. What raises the question in my mind is the fact the UX design team had no idea such a profound change was about to happen in the company.

Agile 2010 part 6: Bloody Stupid Johnson

posted: 14 Aug 2010

One of the highlights of the conference was the session presented by Arlo Belshee and James Shore, "Bloody Stupid Johnson Teaches Agile." The presenters acted out a play they had written that poked fun at every agile stereotype you've probably heard of. By popular demand they repeated it later in the week. It was recorded and I hope to see it appear online soon.

Using a variety of hats and props, Arlo and James played the roles of several stereotyped characters from the agile world that were easy to recognize: The Consultant, The Salesman, The Bishop, The Champion, The Wizard, The Jester. Their goal: To craft the Perfect Agile Process (PAP).

In a hilarious and beautifully performed play, they called out many commonly-heard criticisms (both valid and otherwise) about agile itself and about those who promote it. I can't even begin to list the jabs at agile thought-leaders and well-known speakers and writers in the community, including the presenters themselves.

We in the audience were advised to call out "PAP! PAP! PAP! PAP!" whenever we heard pap from the stage. To help us understand all about agile, we were given Instructions (cans of Bud Light). The Instructions were warm, but much appreciated.

There were some very useful audience participation activities, too. In one, the group was divided into two teams. The Waterfall team received a set of instructions, and the Agile team received a different set. The Waterfall team was assigned to tear paper into as many small pieces as they possibly could in the time allotted. Meanwhile, the Agile team was assigned to fold as many paper airplanes as possible in the same amount of time. Then, the two teams threw their results at one another. The team whose products flew better was the winner. Agile was clearly more effective than Waterfall.

In another exercise, the two teams are lined up in single-file along opposing walls. The Waterfall team has to organize into functional silos and take a set of instructions through a series of handoffs with quality gates. The Agile team has to repeat the phrase "Yay Agile!" to one another down the line. By the end of the time period, the Agile team has delivered its message to the facilitators, while the Waterfall team is still in the Design Phase. Agile wins again! And it's scientific-like, too.

According to the session proposal, the presenters' goals were to highlight mistakes we often make in implementing agile methods:

  • Giving in instead of removing impediments
  • Focusing on planning practices to the exclusion of delivery practices
  • "One True Way"ism
  • Ignoring high-bandwidth communication
  • Leaving out the customer
  • Not integrating testers
  • Process navel gazing
  • Directive leadership
I would say they achieved those goals.


After the play, they iterated through the characters and polled the audience, asking how many of us had seen the character in our work, and then how many of us had been the character in our work. Then they led a discussion of which characteristics of each character we would want to keep, and which we would want to eliminate. For example, The Bishop keeps us on track by reminding us of fundamentals, but can be dogmatic and lose sight of practical matters. The Wizard answers every question by saying, "It depends." When asked On what? he replies, "It depends on the context." And that's all anyone can get out of him. On the plus side: He recognizes that we can't just drop a cookie-cutter agile process on top of every situation. On the minus side: He can't bring himself to take any action; he's stuck in analysis paralysis. There is a similar dichotomy in each stereotyped character.

One of the best things about the session was that The Salesman teamed up with Bloody Stupid Johnson to create an official certification program. We received printed certificates attesting to our status as Agile Software Specialists. I'm going to frame mine.

Agile 2010 part 5: Writing code, refactoring, and debugging legacy code

posted: 14 Aug 2010

For fun and relaxation, I attended a few sessions devoted to hands-on programming...or at least sessions that sounded as if they would involve hands-on programming.

Pairing games as intentional practice (Moss Collum and Laura Dean)

I expected the session to be about games designed to bring out good and bad practices in pairing, and I expected to be able to play the games to hone my pairing skills. It started out well, with the facilitators asking participants for their good and bad experiences with pairing. There were some high performers in the room, including Corey Haines and J.B. Rainsberger, and many others who had interesting experiences to share. I expected we would then simulate some of these problems and apply some new techniques to solve them.

No such luck. The entire presentation consisted of the two facilitators talking. There was no hands-on exercise and no "games" as such. The games were simply different approaches or styles of pairing. Even so, some of them look pretty useful as "shovels" we can use to dig ourselves out of particular types of problems, when a pair may be stalled for one reason or another.

They discussed four such "games." Ping Pong is simply the basic ping pong pairing style, in which partner A writes a failing unit test, partner B writes the code to make it pass and writes the next failing unit test, then partner A makes that test pass and writes the third test, and so on.
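
A single ping-pong exchange might look something like this (an invented example in JUnit 4 style, just to show the rhythm):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class CalculatorTest {

    // Round 1: partner A writes a failing test...
    @Test
    public void addsTwoPositiveNumbers() {
        assertEquals(5, Calculator.add(2, 3));
    }

    // Round 2: ...partner B has made it pass (see Calculator below) and
    // immediately writes the next failing test before handing back the keyboard.
    @Test
    public void addsANegativeNumber() {
        assertEquals(-1, Calculator.add(2, -3));
    }
}

class Calculator {
    static int add(int a, int b) { return a + b; }
}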

Farsighted Navigator uses the basic driver-navigator style with a twist: The navigator can't speak about anything of larger scope than a single method, and the driver can't write anything other than what the navigator talks about. It may be useful when a pair is unsure what to do next and keeps changing direction, or remains at too high a level of abstraction to get any actual code done.

Socrates is another variation on the driver-navigator style. In this case, the driver is not allowed to respond verbally to anything the navigator says, but can only "answer" by writing code. It's a bit harder to see the value of this, unless perhaps the pair has fallen into a pattern of talking too much about what they should write, instead of just writing something and letting the code tell them what the design should be.

In the No Talking game, neither partner speaks, and they try to maintain a good flow of development while passing the keyboard back and forth quickly. They might use a chess timer to create a sense of urgency about keeping the micro-coding sessions very short. I'm somewhat at a loss to understand what problem this solves. I can see that it might shake things up a bit if team members are getting bored with the routine of work.

I think Moss and Laura have a lot of useful information and experiences to share, and I hope they will consider restructuring this session to make it a bit more engaging. One possibility might be to blend the "games" content with the presentation approach we used in "Effective Pairing: The Good, the Bad, and the Ugly" at Agile 2008 (with Brett Schuchert, Lasse Koskela, and Ryan Hoegg), Agile 2009 (with Brett Schuchert, Lasse Koskela, and George Dinwiddie), and Devoxx 2009. We acted out specific anti-patterns in pairing, and then asked the audience to identify the problems with the interaction and suggest corrective action. The "games" Moss and Laura came up with could be used as specific suggestions to overcome the anti-patterns.

The worst of legacy code: Forensic development (Jason Kerney and Llewellyn Falco)

This session wasn't about "agile," but it was still very interesting and practical. Each of the presenters has had the privilege of working in a production support role and dealing with extremely difficult legacy code. When you have to fix a bug in production, you don't have the luxury of time to study and understand the application fully. You need techniques that help you isolate the source of the bug quickly. In this session, the presenters introduced us to two specific techniques: The Peel and The Slice.

The presenters themselves can explain what the session was all about better than I can. They prepared a video and posted it to YouTube, here: http://www.youtube.com/watch?v=yAKL6rlEF_s. The video goes through basically the same presentation as they made in person to introduce the two techniques during the session.

The debugging exercise was based on a snippet of legacy code that one of the presenters actually had to support in a past job (I don't recall which one). The code is available online here: http://pastie.org/843507. The bug report stated that when three loans are associated with "Tom," two records are written to the database for "Tom" instead of one.

The code being in Russian is amusing, but not really relevant to the exercise, except that it illustrates something interesting about finding production bugs: You don't need to know what the application is supposed to do. You just need to find the bit of code that isn't behaving properly, according to the bug report. With that in mind, comments, method names, and class names that don't accurately reflect the behavior of the application can mislead you. If you can't read these things, then they can't mislead you. There's nothing you can do other than run the code and make it reproduce the reported behavior.

The basic idea is to get the code to run so that you can isolate the cause of the error. A chunk of code out of context won't run, of course, because there's no database, no framework, no container, no nothin'. In a time-critical debugging situation, you don't have time to set up a test environment with all the bells and whistles. As the video demonstrates, you can peel away unnecessary code to get at the code you need to run, and/or slice out the code you need to run and omit deeply-buried "pits" that prevent you from running it.

To help, you might use a coverage tool to tell you when you have successfully set up the code to run completely, and a mocking library to fake out external dependencies so you can focus on the problematic code. Although mocking libraries are often associated with agile-style development, this is not really a development exercise and has nothing to do with "agile" as such.

The presenters used EasyMock to help them fake out dependencies. When they did so, a "Boo!" was heard from the back of the room, followed by laughter from the group. EasyMock seems to have a bad rep these days. As I watched the presenters step through the exercise, I realized that an advantage of EasyMock in this situation is that it is "strict" by default, but not nearly as cumbersome to set up as JMock2. While most of us would prefer JMock2 when we need a strict mocking framework for new development, the relatively simple EasyMock is a good fit for a debugging situation. We want the strict behavior by default to expose hidden method calls when we are exploring the code (by executing it).

The process consists of getting the code to run somehow, usually by creating a class with a main method and bringing in just enough of the production code to get it running. We expect to get a number of stack traces along the way; that is how we learn about hidden method calls we need to mock. To keep the stack traces relatively free of clutter, it's just as well to run with a main method and dispense with test runners. It's remarkable that you can get into a pretty good flow using the Slice and the Peel, and isolate the true cause of the bug in far less time than it would take to try to learn the code base "properly." The code you write for exploration purposes is throw-away code.
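
As a rough sketch of what such a throw-away harness can look like (the LoanRepository interface and the commented-out LoanProcessor call are hypothetical names of my own, not from the actual exercise):

    import static org.easymock.EasyMock.createMock;
    import static org.easymock.EasyMock.expect;
    import static org.easymock.EasyMock.replay;
    import static org.easymock.EasyMock.verify;

    // Hypothetical stand-in for the database-facing dependency in the sliced code.
    interface LoanRepository {
        int countLoansFor(String borrower);
    }

    public class DebugHarness {
        public static void main(String[] args) {
            // EasyMock's default mock fails fast on any call we did not expect,
            // so hidden method calls surface as stack traces when we run.
            LoanRepository repository = createMock(LoanRepository.class);
            expect(repository.countLoansFor("Tom")).andReturn(3);
            replay(repository);

            // new LoanProcessor(repository).run();   // the sliced-out code under study
            System.out.println(repository.countLoansFor("Tom"));

            verify(repository);   // complains if an expected call never happened
        }
    }

Each stack trace points at the next dependency to peel away or mock out; when the harness finally runs to completion, you have the misbehaving code isolated.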

Check out the video and the code snippet. If you ever have to debug a production problem, you might find the techniques useful. The examples were done in Java, but the Slice and Peel are techniques and are not language-dependent.

Large-scale refactoring using the Mikado Method (Ola Ellnestam and Daniel Brolund)

In this session, we learned about another useful technique for working with legacy code, and it, too, is not specific to "agile." The Mikado Method is named for a pick-up-sticks game called Mikado, in which one of the sticks is substantially more valuable than the others. The valuable stick rarely winds up on top of the pile when the sticks are dropped, so you have to work carefully, step by step, to reach it. Similarly, when you have to refactor a legacy code base to add a feature, the bits you need to cull out of the Big Ball of Mud are rarely obvious or easy to access.

Mikado takes the approach of beginning with the end in mind. You begin by creating a dependency graph starting with the end goal. Then you identify the immediate prerequisites to reaching the goal, and continue identifying dependencies in that way until you reach a reasonable starting point. What is a reasonable starting point? That will be a judgment call and context-dependent. In any case, once you've got the dependency graph, you start working your way back from the leaves toward the goal, step by step.
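
The bookkeeping for the dependency graph is simple enough to sketch. Here is a minimal illustration, with invented goal names, of recording prerequisites and then deriving the work order from the leaves:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // A minimal sketch of the Mikado bookkeeping; assumes the graph is acyclic.
    public class MikadoGraph {
        private final Map<String, List<String>> prerequisites = new HashMap<>();

        void addPrerequisite(String goal, String prerequisite) {
            prerequisites.computeIfAbsent(goal, g -> new ArrayList<>())
                         .add(prerequisite);
        }

        // Post-order walk: leaves come first, the end goal comes last.
        List<String> workOrder(String goal) {
            List<String> order = new ArrayList<>();
            for (String p : prerequisites.getOrDefault(goal, List.of())) {
                order.addAll(workOrder(p));
            }
            order.add(goal);
            return order;
        }

        public static void main(String[] args) {
            MikadoGraph graph = new MikadoGraph();
            graph.addPrerequisite("Add the new feature", "Extract a billing interface");
            graph.addPrerequisite("Extract a billing interface", "Break the static dependency");
            System.out.println(graph.workOrder("Add the new feature"));
            // [Break the static dependency, Extract a billing interface, Add the new feature]
        }
    }

In practice the graph usually lives on paper or a whiteboard rather than in code; the point is only that each leaf is a small, safe change you can make right now.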

Despite the session title, this isn't really a technique for large-scale refactoring. Indeed, you perform only the minimum amount of refactoring necessary to achieve the goal. The problem space for the Mikado Method is the addition of a new feature to an existing code base, especially when the existing code base is not already well-factored.

Here are some online resources about the Mikado Method, mostly written by Ola and Daniel:

 

Doing agile right using the right tools (Hadi Hariri)

This was not a programmed session, but a vendor demonstration on behalf of JetBrains. Hadi demonstrated BDD in .NET using MSpec and ReSharper. MSpec is short for "Machine Specifications."

Oddly, I was the only attendee until about halfway through, when we were joined by a second. Not really sure how that happened, except that the demo took place during a lunch break. Lunch breaks were 90 minutes, so there was plenty of time for people to do other things. The Open Jam areas were always busy, anyway.

MSpec is an interesting tool and a useful addition to the .NET toolkit. As Microsoft winds down the IronRuby project, MSpec may become more important since it is unclear how we will be able to use Cucumber for BDD in the .NET environment. The tool looks good, and Hadi's skills as a presenter and demonstrator are excellent. Time well spent. Here are some online resources:

 

Nano-incremental development: Elephant carpaccio (Andrew Shafer and Alistair Cockburn)

This was a hands-on coding session in which we practiced breaking requirements down into the thinnest possible slices and delivering them in very short iterations of 9 minutes. We were divided into teams of two or three.

This is an exercise Alistair devised to help programmers understand that it is actually possible to build solutions in very small increments, and then to help them understand how to do so. Alistair describes the exercise in detail on his website, and he encourages people who have gone through it under his guidance to use it in their own organizations.

Given an assignment that was pretty small to begin with, we had to build and demonstrate very small increments of the solution. One member of each team functioned as the customer, and could clarify the requirements in any way he/she saw fit so that the team would understand the relative value of each nano-feature.

Our team consisted of three people. We managed to get through the complete assignment on time, although we might have done a bit more refactoring of the "production" code if we had had the time.

In the debrief, Alistair asked for a show of hands to see how many teams had completed all the requirements. About half the teams raised their hands. He then asked how many teams had used a rigorous test-driven development process, including incremental refactoring of both the production code and the tests, to build the solution. Several teams raised their hands. The intersection of the two groups comprised one member: Our team.

Alistair apparently had expected that no one would complete the full assignment using rigorous TDD, since the assignment was so small that the overhead of the TDD cycle could easily overwhelm a team's ability to deliver anything in just 9 minutes. He also suggested that because of time pressure, agile methods tend to encourage sloppy code. He ignored the fact our team's hands were raised, maybe because it was statistically insignificant, maybe because he just didn't notice, or maybe because we didn't have much time left for discussion.

In any case, I disagree with the idea that agile methods (or any other methods) particularly encourage or discourage sloppy code. I think it's a question of self-discipline and attention to craftsmanship. Our team of three consisted of people who had never met, who represented three different age cohorts, came from different countries, represented different genders (only two of those, of course), spoke different languages, and had different levels of formal education and different professional backgrounds. We collaborated at all times and discussed alternatives at each step of the exercise. This diverse team was able to apply strict TDD, maintain code craftsmanship, and complete the entire assignment within the short time frame allowed. Even with the very short time-boxes used in the exercise, there was time to do all these things properly.

Today's best practice is tomorrow's process smell

posted: 06 Aug 2010

On Tuesday next week, I'll be facilitating a collaborative session to explore the phenomenon of addiction to pain-killers, as it occurs in software development organizations.

A pain-killer relieves pain for a short while, but does not treat the underlying condition that caused the pain in the first place. When the pain returns, the course of least resistance is to take more of the pain-killer. The patient becomes conditioned to the pain-killer and requires progressively stronger doses to relieve the pain. The patient runs the risk of developing a dependency on the pain-killer, while the underlying condition remains untreated.

Software development organizations experience "pain," too: The pain of delivery delays, missed targets, high defect rates, Death Marches, misunderstood requirements, and many other issues. In this context, a "pain-killer" is a work-around to a problem that is causing delays or other difficulties in delivery. It is a tactical solution, usually addressing the observable symptoms of a deeper problem.

Since a work-around exists, people don't bother to look for the original problem. They become careless, comfortable in the knowledge that a formal, well-defined work-around will take care of any symptoms that may arise. This is equivalent to the body becoming conditioned to a pain-killer, and needing ever-stronger doses to relieve the pain.

As the work-around is repeated and refined, the organization can become dependent on it. The organization may create formal job descriptions for people who specialize in the work-around. It becomes more and more difficult to dislodge the work-around from the organization's culture and procedures. When people hold retrospectives or plan Kaizen events, they focus on improving their implementation of the work-around rather than on identifying and eliminating the original problem.

People cease to think of the work-around as a work-around; they cease to perceive the problem as a problem. In effect, the organization institutionalizes the underlying problem rather than solving it. With employee turnover, new hires are indoctrinated to believe the work-around is a necessary and important part of the organization's culture and procedures. As older employees leave, their memory of the origin of the work-around is lost to the organization. "That's the way it has to be done."

Eventually, the specialists responsible for the work-around may form a professional organization of their own and start to hold conferences, publish trade journals, and present awards to one another. Other companies see all this and rush to implement the same underlying problems in their own organizations, so that they can begin to practice the successful work-around, too. "We'd better learn to do this, too, because that's the way it has to be done."

To bring this little rant back around to the overall theme of the Agile 2010 conference, remember that the Snowbird meeting in 2001 was originally called the "lightweight process summit." Its participants were interested in identifying the key characteristics of software development and delivery processes that tended to yield good results, and to eliminate as much of the waste from such processes as was practical.

In those days, the mainstream processes for software development and delivery were "heavyweight" — characterized by cost overruns, schedule overruns, long lead times, functional silos, indirect means of communication, hand-offs, large batch sizes, massive work-in-process inventories, and a debilitating lack of alignment with the consumers of the software. The authors of the Agile Manifesto were looking for ways to make those processes "light" by trimming the fat.

The idea of keeping things light and focusing on the delivery of value is fundamental to agile thinking. It wasn't invented with agile thinking, but it is fundamental to the agile approach to IT work. When organizations re-introduce fat into their processes as a way to alleviate short-term pain, they take a step onto the slippery slope back to the Old Ways. That's why the topic is relevant to this conference.

In the previously-referenced article, you can read about two obvious examples of this: The extended User Story, which becomes more and more like an old-style Requirements Specification Document, and the Mini-Waterfall-Within-Each-Iteration anti-pattern. A colleague on my current project mentioned another example that he has experienced on more than one team: A daily team meeting (not a stand-up) in which team members warn one another about which parts of the code base each developer touched during the day. The red-flag question in that one is this: Why don't team members already know the status of the code base, without the need for a special meeting about it?

A few additional examples are noted in the session proposal, the text of which is reproduced below. It is my hope that participants will bring many more examples to the table, as well.

Session proposal

The premise: A “best practice” at one level of maturity becomes a “process smell” that guides us to the next level of maturity. There is a tendency to “lock in” a set of assumptions and practices and to assume this represents “agile best practice.” Rather than locking in, we should be guided by the principle, “question everything.” This applies especially to our own assumptions about what constitutes agile best practice. Otherwise, our thinking will ossify and we will cease to improve the overall agile toolkit and our own ability to add value for customers.

The session begins with a brief introductory talk to describe the premise and seed the discussion. After that, the remaining time is given over to participants to self-organize and collectively lead an examination of various “good” or “best” agile practices that are typically assumed to be de rigueur for agile environments. Depending on the number of participants, we may split up into small groups and then have each group present its findings at the end of the time slot.

The speaker wonders: Do we really “question everything,” or have we settled on certain assumptions about how things must be? Do we question (for instance):

  • Organizational structure
  • End-to-end delivery process
  • Budgeting
  • Recruiting and hiring
  • Incentives and rewards
  • Practices and techniques
  • Traditional roles
  • Habitual language
  • Conventional career paths

Examples of widely-used good practices we may wish to examine in a new light:

Practice: Sashimi technique for organizing user stories into sizes that fit into an iteration.
Value: Addresses the problem of fitting variable-size user stories into fixed-length iterations.
Smell: Seems as if the goal of this work is merely to fit stories into time-boxes. Customers don’t buy time-boxes. Therefore, this is muda - non-value-add activity.
Questions leading to the next level: Is there a way to deliver value without this activity? Why do we need to fit the work into fixed-length iterations? Would it be a smoother work flow if we just worked the stories in priority order, regardless of size? What would that approach imply about upstream preparation of the work queue, backlog, or portfolio? What must we do to enable the organization to handle the lighter-weight approach?
Categories: End-to-end delivery process, practices and techniques

Practice: HR checklist to help HR screen candidates and to give teams some warning and preparation when a new team member is to be added.
Value: Addresses the disruption caused when new arrivals are assigned to a team by management without warning and without the team’s consent.
Smell: Agile principles of “stable team” and “self-organizing team” do not appear to be honored. Seems “reactive” rather than “proactive” - assumes management can and should dump new people onto teams helter-skelter. Ignores effect of changing team composition on team dynamics; fails to acknowledge that velocity or throughput will be reduced (temporarily) with each occurrence.
Questions leading to the next level: Why isn’t the team itself recruiting and auditioning new members? What is the appropriate role for HR?
Categories: Organizational structure, recruiting and hiring, traditional roles, habitual language

Practice: Planning Poker, story sizing, assorted improved estimation techniques
Value: Better than traditional time-based task estimation for short-term (iteration) planning and for release planning.
Smell: Estimation is not customer-defined value. Time spent doing estimation is time lost to value-add work. High-level estimation at project or program scope is valuable; low-level estimation of MMFs, User Stories, or technical tasks is largely useless.
Questions leading to the next level: Is there a way to do short-term planning and keep the work flowing without bothering to do small-scale story- or task-level estimation? What are the dependencies of taking this approach to short-term planning? How can we take care of those dependencies?
Categories: End-to-end delivery process

Practice: Customer collaboration
Value: Addresses organizational issues such as lack of feedback, poor communication, low trust, features out of line with needs, long lead times, rework, and misunderstandings.
Smell: The word “collaboration” implies cooperation between two or more distinct entities that are working toward a common tactical goal, but that are otherwise separate. The implicit assumption is that the entities must or should be separate, with bridges for communication and cooperation under the heading of “collaboration.”
Questions leading to the next level: Is there a way to achieve seamless integration rather than merely “alignment” and “collaboration” among all interested parties? Can we re-think old assumptions about organizational structure, functional silos, professional career paths, and roles in a way that would eliminate the need for explicit “collaboration” and make collaborative work natural?
Categories: Organizational structure, end-to-end delivery process, traditional roles, conventional career paths

Practice: Agile portfolio management
Value: Alleviates problems inherent in traditional portfolio management, such as the pursuit of unnecessary projects; duplication of effort; misalignment of effort with strategic business plan; budget approval on a different schedule than project and program planning, resulting in “scurrying” toward the end of the annual budget cycle to “use up” funds so that we can qualify for the same or a greater budget next year, without reference to the business plan.
Smell: The existence of an “IT portfolio” indicates a disconnect between the central IT department and the “business side of the house.”
Questions leading to the next level: Why do we need a portfolio at all? Why do we need discrete projects at all? What prevents us from treating business ideas as Real Options? How can we manage budgeting on a just-in-time basis?
Categories: Organizational structure, end-to-end delivery process, budgeting, habitual language

Learning outcomes

  • Refresh the healthy (and fundamentally agile) habit of questioning everything, including our own assumptions and our habitual methods and techniques.
  • Examine several examples of widely-applied good agile practices to understand the specific problems they are designed to solve, and consider whether locking in those practices might prevent us from taking the next logical steps on the path of continuous improvement.
  • Return to the real world with a renewed ability and willingness to examine our most comfortable assumptions with a fresh eye.

 

That's the end of the session proposal. On a side note, I will be facilitating a session on the topic, "The IT Portfolio as a Form of Waste," at the Lean & Kanban 2010 conference in Antwerp this September. It takes the idea that the very existence of an IT portfolio is a work-around for some other source of pain as its starting point for looking at the whole problem of corporate IT services through the lens of Lean thinking.

I would like to post a link to that conference's website, but unfortunately it will not display for me. They have (too) cleverly included code to detect which browser I'm using, and it refuses to let me in unless and until I "upgrade" to the latest version of one of the browsers they prefer. Their detection code is faulty, because I'm already using the latest version of one of those browsers. I suspect their code only works for Microsoft Windows and, possibly, Mac OS X; I'm running Linux. Unfortunately, again, the site (too) cleverly prevents me even from submitting an email to inquire about the problem.

It's interesting that the official website of a conference dedicated to improving the value of software to its users has such a glaring, ridiculous, and unacceptable bug. It just goes to show that we still have a lot of work to do to bring IT work to true professional status, and that the place to start is inside the organizations founded and staffed by the (ostensible) thought-leaders in the field. They ought to practice what they preach, even if no one else does.

What's next for the agile community?

posted: 06 Aug 2010

I'll be facilitating a discussion at Agile 2010 next Monday concerning the current state and future direction of the "agile" community/movement. The session is titled, "Whither Agile? A Participant-Driven Round Table."

After some ten years of activity in the IT industry, things are very different than they were when the Agile Manifesto was written. In my opinion, the time has come for the agile community to reconsider and update its goals and purpose in light of these changes.

Consider the innovation adoption curve presented by Everett Rogers in his 1962 book, Diffusion of Innovations, as refined by Geoffrey Moore in Crossing the Chasm (1991 and 1999). The image is from Wikipedia.

The ideas expressed in the Agile Manifesto quickly took hold, and many individuals and companies took them forward in different ways. Circa 2002, the agile approach stood at the brink of the "chasm," where it might or might not become a part of mainstream IT thinking and practice. By 2008, the good news for proponents of agile methods was that the ideas had, indeed, crossed the chasm.

And that was the bad news, too.

With agile ideas and methods now beyond the peak of the early majority adoption phase, the situation is very different than it was when a small group of enthusiasts shared a common vision as they promoted a well-defined conceptual framework for improving the state of the art in software development and delivery. Today, "agile" has been absorbed into the fabric of IT thinking and practice. It is no longer an innovation — everybody does it (whatever "it" is).

As "agile" enters the late majority adoption phase, the ideas and methods have been diffused into the general methods of IT organizations. There is no single, narrow definition of "agile." The definition has expanded along with the variations in practice that have emerged over the years. A very broad range of ideas, methods, and techniques may be deemed consistent with the definition of "agile."

A consulting firm that targets the late majority adopters with a branded package of agile practices will emphasize predictability, repeatability, and governance: values that hardly seem innovative. At the same time, a leading-edge innovator who pushes the envelope of project management and software development practices, constantly looking for ways to improve delivery effectiveness, may move away from canonical agile practices for completely different reasons, and in completely different ways. Neither of these follows conventional agile methods anymore, and each has taken the ideas in a different direction; yet both can credibly claim a place in the agile community.

Once an innovation passes the peak of the early majority adoption phase, it is no longer an "innovation" in the true sense of the word. It may not yet have been adopted by everyone, but by that stage it is hardly a cutting-edge idea. The fact laggards haven't caught up with the mainstream doesn't make old ideas new again. One of the characteristics of the agile community today is that many of its members still see their mission as one of introducing innovation; and they are still introducing the same innovation as they were ten years ago. It is as if they have remained stationary while the world has changed around them.

Consider this representation of the Satir change model. I snagged it from Dale Emery's site because it doesn't have a copyright notice. (Don't tell Dale.)

In 2000, ideas, methods, and practices like Scrum and Extreme Programming represented a foreign element that promised to address many of the endemic problems in the IT industry, even before the buzzword "agile" was coined. Agile as defined in the 2001 meeting at the Snowbird lodge targeted those problems well; it targeted the old status quo in IT as it existed at the time. To many who were frustrated with the bureaucracy that had grown up around them in the IT field, agile was a breath of fresh air. (I myself was about to leave the IT profession altogether when I discovered agile, and found it alleviated those problems effectively enough to keep me interested in IT work.)

As agile passed beyond the peak of the early majority adoption phase of the innovation model, it became the new status quo in the Satir model. As agile progressed down the far side of the early majority adoption phase toward the late majority adoption phase, it replaced the old status quo. The status quo in IT today is not what it was ten years ago. Should it surprise anyone that ten-year-old solutions don't seem very innovative?

The aforementioned stereotypical consulting firm (of which there are many real-world examples) understands that as agile moves into the late majority adoption phase, it is time to stop innovating and start milking the cash cow. This is not "wrong" or "anti-agile." It's just good business.

At the same time, the innovators among us are dissatisfied and frustrated with repeating the same first-generation agile material. Most of the people they encounter in industry are already familiar with the basics, and are already practicing some subset of agile methods. The innovators are keen to move forward; to put into practice the notion of continuous improvement. They want to identify the next foreign element, and continue to work at the level of innovators and early adopters.

So, the question comes down to this: How do we wish to define ourselves going forward? Is there still a need for an "agile community?" If so, on which aspects of this expanding and evolving set of ideas and practices does the community wish to focus? Do we wish to embrace the full range, from innovators to cash cow milkers and everything in between? Which emerging ideas and trends are moving into the early majority adoption phase as of 2010, and do we want to consider any of those in redefining or clarifying our mission? Does the word "agile" still serve our needs? Is there more to change than just a word?

Here's the full text of the proposal I submitted to the conference.

Proposed session

When a breakthrough idea crosses the chasm it passes through a make-or-break stage of maturation. Without leadership, even the best ideas can evaporate into nothingness or ossify into dogmatism. Agile stands at just such a turning point now. The message has become fragmented, diluted, co-opted for profit, mischaracterized, abused, and attached as a label to just about anything. What can the community do to see that the proven value of agile principles and practices is not lost in the general chaos or surrendered to commercial interests that have a branded, packaged solution to sell?

“Agile” has been a very useful word. It has served as a point of focus for thought leadership and as a rallying-cry for those who were interested in improving the state of the art in IT and who had been pursuing disparate paths toward improvement. The effects on industry have been profound and far-reaching.

Today, the word “agile” is all but meaningless. The fundamental concepts have been absorbed into the everyday working assumptions of nearly all IT organizations. Like any other transforming idea that is widely adopted, “agile” itself has been transformed by the many people who have applied it to different situations. With each new adaptation, the word loses some of its distinct meaning and becomes integrated into the general fabric of IT work. It becomes less necessary, less powerful, less useful because there is less and less difference between agile thinking and the status quo.

That is the definition of success for any transformative movement in any field of endeavor.

One indication this has been true for some time already is the fact that Forrester has finally gotten around to noticing it. See http://tinyurl.com/y8zvksv.

As I see it, the agile community today has the following characteristics:

  • Tacit and near-universal avoidance of questioning anything that might be controversial, such as changing organizational structure, eliminating obsolete specialized roles, or growing beyond the need for the sacred cows of first-generation agile
  • Addiction to “pain-killers” - that is, work-arounds to organizational problems that begin as temporary expedients and eventually come to be treated as “necessary” aspects of an agile work flow; we try to get better at performing the work-around instead of coming back to the original root cause that led to it in the first place
  • Allowing commercial interests to take control of the definition of “agile” to match whatever they happen to be selling
  • Confusing ends and means - Instead of remembering the practical concerns that led us to change the way IT work is done, we ask “How agile are you?” and set goals such as “Be Agile,” as if agile itself were the end rather than a means to an end
  • Balkanization into camps that promote one or another branded packaging of agile ideas and practices, and that snipe at other camps
  • Coming at problems in the spirit of one who has a hammer and sees nails everywhere

 

I have been asked to be more specific about what I would like the outcome of this discussion to be. I don’t know. I don’t have the answers. That is the whole reason to suggest a discussion. I don’t even know whether there will be any interest in this discussion in the agile community; I’m afraid I might be questioning things that are too close to the heart. But I’m more afraid of shying away from such questions.

In my view, there are three broad areas in which IT work was completely dysfunctional throughout the 1980s and 1990s, and problems in these areas gave rise to various attempts at improvement, including agile:

  • Process issues - The mechanics of taking ideas from concept to cash.
  • Human issues - Job satisfaction, commitment, motivation, enjoyment, and work-life balance.
  • Technique or (as we would say it today) software craftsmanship - doing work that teaches us something and in which we can take pride.

 

We have lost sight of these areas of focus (with some notable exceptions) and have become distracted by the attempt to perfect specific activities for their own sake. This is not a proposal for 90 minutes of congratulating ourselves. I think we need to figure out which way we want to take “agile” thinking from this point forward…if anywhere.

Learning outcomes

  • Identify trends and issues currently threatening to derail or discredit agile adoption
  • Identify behaviors within the agile community that may exacerbate these issues
  • Identify conflicts within the community, discover common ground, and brainstorm approaches to resolving them
  • Begin a broader dialog regarding desirable future directions in growing and directing the agile movement

That was the session proposal. If you want a voice in defining what comes next, join us in the discussion next week. I hope we will at least get started with redefining the community.

Looking back on the first Certified Scrum Developer course

posted: 17 May 2010

The question of the value of certification continues to be bandied about the software development community. I'm on record as generally against certification. Yet, I happily participated in the first official Certified Scrum Developer course, presented on the Lean Dog boat (Cleveland, Ohio) last week by Ron Jeffries and Chet Hendrickson, and I'm very pleased to hold the new certification alongside 10 outstanding software development professionals.

How can this be? How can a person be both for and against certification? It depends on what you think "certification" means. As I see it, the CSD course represents a starting point for software professionals who want to make a personal commitment to technical craftsmanship, customer satisfaction, delivery excellence, and high standards of ethics. The value of the magic C word isn't that it signifies mastery or perfection, but rather that it encourages managers to send their staff members to the class. Many managers won't bother sending their people to classes that result in no certification of any kind. Because of that word, many software developers will be exposed to these ideas who might otherwise never have the opportunity. Once exposed, it becomes their individual choice to do something with the information, or not.

The magic C word will have another effect, as well: Certified developers will be held to a higher professional standard than others. The perceived value of the certification will depend on how they do their work. I hope the fact they hold the credential will act as an incentive for them to continue on their personal paths of professional development. Otherwise, the credential itself will lose its value.

As long as "certification" is understood in this light, I think it is a positive thing. When people treat "certification" as a substitute for skills, then it can become a negative thing. There is always a risk that recruiters and hiring managers will come to depend on a given credential to the extent they neglect to verify what they see on candidates' résumés and what they hear in interviews. Those people will dismiss candidates who do not hold the credential. At the same time, others in the industry will assume no certification can ever possibly be meaningful. They will summarily dismiss candidates who do hold the credential, assuming the worst about the candidates' motivations for desiring certification.

Damned if you do and damned if you don't? Maybe. I figure if a person can't think any better than that — to dismiss those who either do or do not hold a certification — then I probably don't want to work with them anyway, since they are likely to exhibit similarly poor thinking on many other levels, as well. I'm more interested in working with people who are genuinely seeking to improve the quality of their work and the effectiveness of their organizations. The names of particular process frameworks or certification programs are of secondary interest, if even that. The bottom line is that the value of this certification will depend on what we do with it, now that we've graduated from the class.


photo by Gery Petrof

The first instance of the CSD class was a great learning experience for everyone, even if it did not go entirely as expected. There were 11 participants. All had at least some practical experience in applying agile methods, and all were experienced software developers. Several were "known," if you take the word with a grain of salt, since the agile community is a small pond: Jon Kern, George Dinwiddie, Jeff 'Cheezy' Morgan, Paul Nelson, Adam Sroka, and me. The others were equally skilled at software development generally, although less experienced with agile in particular. All were familiar with pair programming and TDD, and nearly all were already practicing these techniques in their everyday work.

Given this relatively high-powered group (after all, Jon is one of the authors of the Agile Manifesto, just over half the participants make their living as agile coaches, and the rest are agile practice leaders in their respective organizations), our esteemed instructors were rather surprised at the results. On the first day, one of the two teams committed to five stories and delivered only one; the other committed to three stories and delivered none.

After team retrospectives, whole-class retrospectives, lunchtime and beertime discussions, lectures, examples, war stories, much wailing and wringing of hands, and sleeping on it overnight, on the second day the first team made no demonstrable progress beyond the first day's results, although they insisted they had cleaned up their code base; the second team delivered just one story, but succeeded in setting up a sweet automated acceptance test environment.

Playing the role of customers, the instructors were unimpressed with refactored code and testing frameworks (even sweet ones), and disappointed that they could detect almost no progress toward delivery of their product. After all, according to the Agile Manifesto, working software is the primary measure of progress. Keeping the code base well factored and using effective testing frameworks help us deliver working software, but those items are not measures of progress in themselves.

The work flowed much better on the third day. Both teams used pomodori to time-box half-hour goals. This helped keep the work moving, set a pulse for switching pairs, and kept the teams focused on moving cards across the wall rather than on the finer points of technique. By the end of the "iteration," both teams had delivered the stories to which they had committed on the first day. In other words, after taking three times as long as first anticipated, the teams delivered around 5 out of 29 stories. A very disappointing result from the perspective of the customers.

Donning their instructor hats, Chet and Ron mentioned that in past presentations of their Agile Developer Skills course (on which the CSD course is based), they had never seen a worse result, an outcome that truly surprised them, given the composition of this class. It surprised the rest of us, too.

The demo and code review revealed that both teams had incurred technical debt in their rush to complete stories. The pendulum had swung to the extreme in both directions in the course of three days. At first, excessive focus on agile technique had allowed stories to remain incomplete. On day three, excessive focus on pushing stories to the "done" column had allowed code quality to suffer. This, from 11 people who already understood the importance of customer collaboration, rapid feedback, incremental delivery, simple design, test-driven development, frequent check-in, continuous integration, and pair programming...11 people who do all these things for a living, and teach others to do them.

The key take-away: Agile development is hard. Agile methods don't automatically cause software to materialize and deploy itself. Success requires personal discipline on the part of everyone involved. When a team skips or forgets any one of the basic principles of agile development, from customer interaction to simple design, the overall result is likely to be unsatisfactory. Even experienced practitioners can go off course, and need to stay conscious of the principles as they work.

Another take-away is simply this: When people talk about agile development as a journey and not a destination, they aren't just making zen-style, quasi-poetic noise. No one reaches the stage where he/she can afford to stop learning, stop practicing, or stop thinking with a beginner's mind. If we learned anything from this three-day experience, it must be that.

George and I had a retrospective of our own by phone this week. We talked through a lot of details of how our team had gone off track and what we might have done differently. Most of that is probably not interesting for a blog post, but it may be useful to mention a few points that came up in our conversation.

First, a common theme in both teams' problems was insufficient interaction with the Product Owner. We took the stack of story cards and just dived in, as if we were running a race. We didn't take time to organize our work or decide how we should operate as a team.

Second, the technical set-up of developer workstations was problematic, as it always seems to be in hands-on classes and workshops. With just three days to work with, any loss of time for workstation set-up has a significant impact. We kicked around a few ideas to mitigate the problem in future, such as providing VMs or bootable CDs or thumb drives for participants. Due to the material covered in the course, though, there will still be a need to set up a network so participants can share a version control system and CI server. This may be an issue we just have to deal with on a case-by-case basis.

Third, the instructors did not provide much guidance about expectations; they may have assumed that with a group of advanced participants, the teams would do the right things more-or-less automatically. Of course, this may have been an intentional aspect of the pedagogical approach. Even so, we could have asked for guidance at any time, and we tended not to do so. Did we think we were too "advanced" to ask questions? Not consciously, but we surely should have known better.

Fourth, it seemed as if the participants bent over backward to avoid being opinionated. George and I speculated that this may be because we are all rather opinionated people, and we didn't want to seem overbearing or pushy. George mentioned he sometimes slipped into "coach" mode when pairing with one of the (relatively) junior team members, and he tried to guide his partner gently toward a realization rather than just saying it straight out, as if working with a peer. An interesting result of all that is the teams had difficulty coming to a consensus about how to approach the solution. Everyone backed off and tried to enable other team members to take the lead in design and in the particulars of TDD, even down to the level of deciding when it was appropriate to move code from an exploratory test class into a "production" class. In discussions about those decisions, it felt as if we were trying to make the other guy "win" the argument instead of defending our own view. The result was a lot of circular discussion and delays in reaching consensus.

George had an insight I found particularly relevant: Everyone's goals for the class were not the same. The basic goal of the course is to teach software engineering practices that are usually associated with agile development — the Extreme Programming practices. The instructors were interested in running this new version of the course to see how it would flow, to obtain feedback from the participants, and to get ideas for improvement. Most of the participants already practice XP and were taking the course as a step toward becoming authorized instructors themselves. So, there were three different goals moving in and out of focus and receiving different people's attention at different times.

Cheezy and I were working with the same team the week after the class, and we found some time during the week to discuss our impressions. He largely agreed with George's and my general impressions. Cheezy had slipped into coaching mode in much the same way as George had done. His team had experienced some problems getting set up to work, just as ours had done. Cheezy had been on the "other" team, and he had some interesting observations about the team's internal dynamics that had not been apparent from our side of the room...that is, our side of the deck. (It was the poop deck, thanks to Iggy. Avast!)

Specifically, the other team had the opposite experience of our team with respect to observation #4 above — they did not bend over backward to validate each other's opinions and design approaches. To the contrary, some team members were quite insistent that the team follow their preferred approach. This tended to result in long debates culminating in different understandings about the team's design direction. Compounding this problem, there was little exchange of ideas or internal review of interim results. When our team observed Cheezy's team using pomodori, we thought it was a good idea and adopted the practice ourselves. However, it turns out that the other team was not switching pair partners with each pomodoro. Pairs stayed together and continued with the design direction each had chosen.

To their surprise, with less than one hour remaining in the final iteration of the class they found the code the separate pairs had developed would not integrate easily. They delivered about five user stories by the end of the class. With the exception of the file I/O framework they had developed earlier, all that code was built in the final 40 minutes of the class. Although they had established a very good testing framework that included both unit and acceptance tests, in the end they resorted to a last-minute sprint (not the Scrum kind) to deliver results.

Ultimately, all the problems experienced by both teams boil down to a single factor: Communication. The class provided a powerful object lesson in the importance of frequent feedback and close collaboration. The lesson will not soon be forgotten by any of the participants.

Notwithstanding all the problems, on the whole the experience was very positive and I think it's safe to say everyone learned a lot and had fun. I would definitely recommend the course to any developer who is serious about honing his/her professional skills. I suspect that each instance of the course will be a unique experience. It's not going to become a cookie-cutter course that's always the same. It might become a course that people will want to take multiple times, maybe once every year or two, as a way to continue improving their technical skills.

These are views of the area surrounding the Lean Dog boat. On the left (or top, depending on how the page is rendered) is a view of the Rock and Roll Hall of Fame. Part of Cleveland Browns stadium is visible in the background. A US Coast Guard base is in the foreground. On the right (or bottom) is a view of downtown Cleveland, with the WW II submarine USS Cod in the foreground. The USS Cod is open to the public in the spring and summer months.

On the left, class participants enjoy lunch on the upper deck of the Lean Dog boat. On the right, Cheezy takes his Scrum Alliance interview seriously.

Remember to include testing in short-term planning

posted: 29 Apr 2010

I've been thinking about a brief exchange of messages on Twitter yesterday, and the discussion bears clarification. Twitter is great, but it isn't the ideal medium for every discussion!

Yesterday was an iteration planning day for the team I'm presently coaching. I tweeted a couple of times during the day when I thought the team had done something noteworthy.

When the team was sizing stories to be done in the upcoming iteration, I noticed they were not taking any input from the tester. Since there's no way to take a story all the way to "done" without having automated acceptance tests passing, manual testing complete, and Product Owner acceptance obtained, it occurred to me that they might overcommit if they based story sizes solely on developers' input. We discussed it briefly and the team decided to proceed using only developers' input. I tweeted that a team might overcommit if they don't consider the tester's input for story sizing.

A few minutes later, the team got into a discussion about the reasons why their card wall looked the way it did. There were quite a few cards in Ready for Acceptance, none in In Acceptance Testing, and none in Ready for Release. That meant they had a velocity of zero for the iteration just finished.

The card wall indicated there was a hard stop in acceptance testing, but the discussion revealed that wasn't the problem. The problem had to do with the nature of the work they're doing. It's a batch process that performs a limited sort of extract-transform-load (ETL) functionality. The only acceptance test that will be meaningful to the PO is the full end-to-end test of the whole batch process. In fact, the team has test-driven all the pieces and parts of this process individually in isolation, and everything is well covered with automated tests.

The problem had not manifested previously because until this iteration the team had been developing an intranet webapp, and it had been easy to produce demonstrable solution increments per iteration. In the long term, though, the team expects that most of the work it will be requested to do will be batch processes.

In discussing how the team might plan and track this sort of work effectively, they came to the conclusion that the time-boxed iterative model itself might not be well-suited to their situation. The focus on releasing something every iteration and the emphasis on stable velocity were creating artificial red flags about project status. They decided that after the current project (which will wrap up soon), they want to try an iterationless process. They plan to continue holding retrospectives regularly and to continue delivering incremental results. What they described (without knowing the terminology or being aware of Kanban software development) was a decoupling of their development cadence and their release cadence. They came to it quite naturally.

The team is in a good position to make this sort of shift, and they will be able to revert to iterations if they so choose. IMHO the reason is that their card wall borrows a lot from Kanban. It consists of columns representing value-add activities, each of which has a buffer column in front of it for WIP. We let the team run unfettered at first, and when the natural bottleneck made itself apparent we added a WIP limit to that column. In effect, that activity is the "drum." We didn't describe it to them in those terms because management had asked for "scrum" by name, and not for "kanban."

They are using a combined push-pull kanban process with the wall as the primary visual management tool. Team members pull work from the buffer ahead of the value-add activity they want to perform, and push the result into the next buffer. For instance, when a pair of developers is ready to take a new card, they pull it from Ready for Development into In Development, and they have a conversation with the analyst and tester to be sure everyone has the same understanding of what is to be done. When they've got the automated acceptance tests working on their own, they ask the tester to check it out, and if all seems okay at that point they push the card to Ready for Acceptance. And so on, across the wall. They've been doing this in the context of a two-week iteration time-box. You can see it's a small step from there to an iterationless process, and an easy recovery if that doesn't work out.
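
The mechanics of such a wall are simple enough to model in a few lines. Here is a minimal sketch; the column names and the WIP limit of 2 are illustrative assumptions, not the team's actual configuration:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class CardWall {
        private final Map<String, Deque<String>> columns = new LinkedHashMap<>();
        private final Map<String, Integer> wipLimits = new HashMap<>();

        void addColumn(String name, Integer wipLimit) {
            columns.put(name, new ArrayDeque<>());
            if (wipLimit != null) wipLimits.put(name, wipLimit);
        }

        void addCard(String column, String card) {
            columns.get(column).add(card);
        }

        // Pull a card from the buffer in front of a value-add activity;
        // the move is refused if it would exceed that activity's WIP limit.
        boolean pull(String from, String to) {
            Deque<String> source = columns.get(from);
            Deque<String> target = columns.get(to);
            Integer limit = wipLimits.get(to);
            if (source.isEmpty() || (limit != null && target.size() >= limit)) {
                return false;
            }
            target.add(source.remove());
            return true;
        }

        public static void main(String[] args) {
            CardWall wall = new CardWall();
            wall.addColumn("Ready for Development", null);
            wall.addColumn("In Development", 2);   // WIP limit on the bottleneck
            wall.addCard("Ready for Development", "Story A");
            wall.addCard("Ready for Development", "Story B");
            wall.addCard("Ready for Development", "Story C");
            wall.pull("Ready for Development", "In Development");   // true
            wall.pull("Ready for Development", "In Development");   // true
            System.out.println(wall.pull("Ready for Development", "In Development"));
            // prints false: the third pull is refused by the WIP limit
        }
    }

The refusal of that third pull is the WIP limit acting as the "drum": work piles up in front of the bottleneck where everyone can see it, instead of silently becoming work-in-process.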

I was very pleased that they were thinking about value, flow, and waste without needing any specific guidance, and that they were actively practicing continuous improvement in an apparently egoless way. I would have felt that way no matter what conclusion the team had reached about using time-boxing or flow. To me, their independent application of agile values means our coaching engagement has been a success. So, I happily tweeted that the team had decided to move to an iterationless process.

That prompted a reply from a colleague who was confused because I had mentioned the same team might overcommit due to not taking the tester's input in the story sizing exercise. How can they "overcommit" if they aren't using iterations? Twitter didn't seem to be a good medium to provide all this background information. In any case, it was a question of timing. They're continuing to use iterations for the remainder of the current project. That's how expectations have been set in the organization for the project. Since those tweets yesterday, the team met with their manager to discuss the change, and he's fine with it as long as the team manages their work properly and delivers good results.

There's still a message in the first part of this story for teams that are using time-boxed iterations. IMHO it's better to take everyone's input for story sizing. There can be stories that are easy to code but challenging for automated acceptance testing and/or for live demonstration. The team as a whole won't be able to deliver those stories as quickly as the developers alone might assume.

I don't think there is a single, rigid process model that works well in all cases. I think we need to understand how and why various practices work so that we can apply them appropriately in context. The nature of the work, the characteristics of the surrounding organization, the level of engagement of project stakeholders, the professional skill level of team members, the maturity of the team in using lightweight processes, and the degree of specialization among team members all affect the ways in which agile and lean methods can be adapted to a given situation.

One team might comprise generalizing specialists and not have anyone specifically designated as a "tester." Another team might have separate roles for developer and tester. Another team may comprise only developers, and the test function is carried out by a separate group. In any case, all the various forms of pre-release testing have to be accounted for in sizing or estimation or whatever style of short-term planning the team uses.

If/when the team moves away from time-boxing towards a continuous-flow model, they will still use a just-in-time planning method such as Rolling Wave to defer decisions until the last responsible moment, and they will still have to consider all the elements necessary to deliver a working product. That necessity doesn't disappear when we change our process.

Keep it simple. Please.

posted: 26 Apr 2010

Software developers love to solve interesting, challenging, complicated problems. The more serious among them consider software development a profession, and not merely a job. They are passionate and dedicated practitioners who strive to improve their skills every day. They sign the Manifesto for Software Craftsmanship. They go out of their way to share information and learn from each other at user group meetings, conferences, meetups, coding dojos, code retreats, online code sharing venues like github, lunch-and-learn sessions at their workplaces, anyplace that offers free wi-fi, hotel lobbies and airports when working on the road, and anywhere else they can pull out a laptop and write code...especially when they can pair-program with another developer.

Ah, but there's a paradox. The majority of paying jobs for software developers don't actually require them to solve interesting, challenging, complicated problems. Ever. Developers spend their time pulling data out of a database, formatting it to look nice on a display, maybe doing a bit of basic arithmetic on it, validating input data entered by humans, maybe applying a couple of simple business rules, and sticking it all back into the database. And then they're asked to do it again. And again.

And so on.

Forever.

(Or until they die. Whichever comes first. And there are times when death can't come soon enough.)

Under those conditions, what do the developers do with all that pent-up passion, dedication, and lust for problem-solving? They fall prey to what Neal Ford has dubbed the Rubik's Cubicle Anti-Pattern. If the real work doesn't offer interesting, challenging, complicated problems, then, by George, the developers will make things interesting, challenging, and (above all) complicated.

There are legitimately interesting, challenging, and complicated problems out there that require sound logic and strong software engineering skills. The majority of jobs, however, never come close to any of those problems. The majority of jobs have developers working on CRUD (create, read, update, delete) applications in a business environment. Nowadays, most of these CRUD apps are based on Web technologies; they are intranet apps. Most of them have a relational database on the back end and a Web browser on the front end, and the stuff in between is managed by some sort of web app framework. So, it's basically the same simple app over and over again, based on well-known reference architectures, standardized frameworks, and basic tools like SQL and HTML that, in turn, are usually hidden behind ORMs and code generators. Non-functional requirements such as security, audit trail, and performance are mainly handled by the frameworks and middleware facilities within which the web app lives. There's nothing to be done about these issues at the application code level. There really isn't much thinking to be done, and no creativity whatsoever is required.

That may be the reason most business applications are so insanely complicated. Developers spend their time building tools and frameworks for themselves to play with. They pretend the tools are necessary in order to build the CRUD apps their internal customers have requested. If they didn't have this sort of outlet for their creative energies, developers might start to bleed from the eyes and convulse in their ergonomic chairs, as their creative juices began to corrode their bodies from within.

If I had a dollar for every unique framework I've had to learn and work with whose original purpose was to support a single CRUD app for a single business solution, I could retire in style. It just isn't necessary to re-invent the wheel over and over again. Yet, that's exactly what has happened in thousands of companies all around the world.

A recent project provides just one example among many. The goal of the application is to replace questionable data values with corrected values. A batch process queries rows from a relational database that meet certain user-defined criteria. User-defined arithmetic operations are performed on the queried data. The results are stored back into the database. There's an intranet app to allow users to input the selection criteria and arithmetic operations to apply to the data. That's it. No big deal, right?

Wrong.

The complexity of the solution design and the production environment into which it is deployed is downright astonishing. It's as if the Zucker brothers set out to script a comedy film to mock the idea, "Build the simplest thing that could possibly work." The operative principle here is, "Build the most mind-blowingly wacky-ass pile of crap you can possibly dream up." The team is using (so far) 4 programming languages, 11 testing tools, and 4 IDEs. Current projections are that the effort will take at least 12 weeks to finish; when you look at the meat of the thing, it ought to take about two. The database schema is painful. The home-grown framework to drive the batch processing is excruciating. The technical "standards" are sufficiently out of sync with the industry that none of the conventions built into the web app framework apply, and everything has to be manually coded; might as well not even bother with a framework. None of this complexity is driven by the business requirements.
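For a sense of scale, the heart of the requested behavior fits in a couple of dozen lines of code. The sketch below is mine, not the project's; the table, column, and connection details are hypothetical stand-ins, and a real implementation would validate the user-defined expressions against a whitelist rather than splicing raw text into SQL. But it shows roughly how big the actual business problem is:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class DataCorrectionBatch {
        public static void main(String[] args) throws SQLException {
            // The user-defined selection criteria and arithmetic operation would
            // come from the intranet app; hard-coded here for illustration.
            String criteria = "reading_value < 0";
            String correction = "reading_value = reading_value * -1";

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://dbhost/measurements", "batchuser", "secret");
                 Statement stmt = conn.createStatement()) {
                // Query the questionable rows, apply the correction, store results
                int updated = stmt.executeUpdate(
                        "UPDATE readings SET " + correction + " WHERE " + criteria);
                System.out.println("Corrected " + updated + " rows");
            }
        }
    }

Twelve weeks of effort for that is a lot of framework-building.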

If this were an isolated case I could probably laugh it off. Unfortunately, however, I've seen all too many examples of the same phenomenon. People have made business solutions excessively complicated just to make things interesting for themselves during initial development. Others must live with the aftermath as they support these solutions in production and as they try to make modifications or enhancements without bringing the whole house of cards down around their heads.

I appeal to my esteemed colleagues in the software development field: Resist the temptation of the Rubik's Cubicle. Keep it simple. There's enough real work to do. Really, there is. And if you need more, there's always Open Source. Plenty of opportunities to let your creativity fly free. But whenever you're working on a CRUD app, please remember: It's just a CRUD app!

Efficient tweeting

posted: 24 Apr 2010

Twitter seems to have become the medium of choice for informal communication among members of communities of interest. One of its strengths is the 140-character limitation on message length. The limitation forces us to be concise. For me, that's a big benefit, because I tend to be verbose and to digress. The 140-character constraint helps me say what I want to say briefly.

But the limitation on message length can be a hindrance at times. Not every idea can be expressed so concisely. It occurs to me that the Chinese writing system packs more meaning into each character than any other writing system. With the advent of online translation services like Google Translate, we don't actually have to know how to read and write Chinese to use the language in tweets.

Here's how it could work. Let's say I want to tweet the English statement:

If you write in Chinese, you can post more words with fewer characters.

The statement uses 71 characters. The same statement in Chinese (according to Google Translate, anyway) is:

如果你寫在中文你可以發布更多的單詞在更少的字符。

That version uses 24 characters, for a saving of 47 characters. We've freed up about 1/3 of a tweet's worth of characters already, and there is still more we can do to reduce the length.

People don't really expect tweets to consist of fully-formed, grammatically-correct sentences with all the fancy punctuation marks and capitalization and all that jazz. If we munge the grammar a bit, we can achieve even greater savings. Let's revise the original statement to shorten it:

Write Chinese > word < char

That's 27 characters compared with the original 71 characters for the English version. Now if we run that through Google Translate, we come up with the following Chinese version:

写中文> 字<字符

Comparing the original English statement with the reduced Chinese version, we've reduced the length of the tweet from 71 characters to a concise 8 characters. That means we can pack 140 / 8 = 17.5 times as much information into the tweet.

Best of all, there's absolutely no loss of fidelity. When we pass the above through Google Translate to convert it to English, the result is perfectly comprehensible:

Write Chinese> characters <character

On agile failures

posted: 24 Apr 2010

I can relate to the story Rob Myers shared in a recent article on StickyMinds.com. I've seen similar situations in mid-sized to large corporations. Have a look at his article now, if you haven't read it already.

IMHO there are a couple of meta-problems that lead to these situations. One of them is "our (the agile community's) fault." Since most of us came to agile thinking from the perspective of the software development and/or testing disciplines, most agile initiatives to date have begun at the level of individual development teams. Most agile coaches are hired, and desire, to coach "uh team." When you go to coaching workshops, it's all about how to help the team do this and how to help the team do that. All the literature (that I've seen) focuses on working with one team at a time.

That's all good, as far as it goes. But when we begin at the team level and the surrounding organization is not involved in the (attempted) transformation, it's very difficult to achieve organizational change that will improve the end-to-end delivery process and be sustainable after the coaches leave the building. The nascent agile team is seen as an oddity or a temporary nuisance to be eliminated as soon as the coaches have left; and good riddance, too! Geez, all they do is make the rest of us look bad, and they have fun doing it, too! Our problem is that we usually go into these organizations at the wrong level. We've already lost the game even before the first play.

The other meta-problem isn't "our fault," but we may be able to address it if we are aware of it and have an opportunity to talk to the right people in the organization before the parameters of the engagement are finalized.

I've observed a general tendency for management to dump all of their fire-suppressant directly on the point where they see smoke emerging from their process, without first discovering where the fire itself is located. Since traditional organizations are characterized by (among other things) blame-shifting and information-hiding, any problems in the delivery process are shunted downstream, and any metrics to be reported at a given point in the process are sanitized. The scorecard always looks beautiful until the work reaches the end of the delivery process.

What's at the end of a delivery process that culminates in the release of software? Software development and testing, of course. By the time problems reach that stage in the delivery process, there's no one available downstream to take the blame. The sort of material that flows downhill in traditional organizations comes to rest on the development and testing teams. All the accumulated problems suddenly become visible. There's the smoke. So, management brings in consultants and trainers to "fix" the development and testing teams. Most of the time, they are not the problem.

Sure, most development and testing teams might potentially improve the quality of their work. We all could potentially improve the quality of our work. My point is that these teams might not be the best place to invest in process improvements. Consider the canonical "chain" metaphor that people use to explain Theory of Constraints. The strength of the whole chain is equal to the strength of its weakest link. If we invest in strengthening some other link, even if the link we choose could stand a bit of improvement, it will have absolutely no effect on the strength of the whole chain. In most organizations, even if the software development and testing functions could conceivably be performed more effectively, the single weakest link in the delivery chain is almost never one of those areas. The weakest link is almost always somewhere upstream.

Coaches or trainers come in and help the development and/or testing teams, and the teams improve, and the teams feel happy for a while, and there's no perceptible improvement in the delivery process, and the new methods are dismissed as a "failure," and management still has no idea where the fire is located, and they try some other new process or new training course or new project management tool or what-have-you. They dump even more fire-suppressant on the smoke instead of on the fire. They still have no idea what the root cause of the problems really is.

Every prospect who has called me in the past three years has asked for "agile" or "Scrum" by name, and has asked for help with software development and/or software testing teams. Most had already purchased software products they believed would make those teams "more agile," and wanted help with training their staff and bringing them up to speed with the new tools. In no case — no case — were the tools they had purchased relevant to an "agile" style of work. After discussions lasting from one to three hours that pointed to different root causes for the problems they were observing, each manager realized he/she really needed to do something a bit different than he/she had originally assumed; yet in no case did he/she have the budget or flexibility to change course. All had committed everything at their disposal to dumping fire-suppressant on the smoke they could see. They had expended all their internal political capital to obtain approval and budget for that purpose. They had no resources left to investigate the location of the fire that was generating the smoke. Not one of them has the slightest hope of fixing their true problems.

So, what can we do about it? Apparently, most managers are unaware of the tools available for root cause analysis and for general analysis of process effectiveness. There are quite a few tools from disparate sources such as Systems Thinking, Lean Manufacturing, Six Sigma, and assorted organizational change methods that can help people understand where their "weakest link" really is. Knowing that, they will be better equipped to understand whether they need outside help, where to apply that help, what sort of expertise to look for, how to tell whether they've had any real effect on the root causes of their problems, and many other issues. We can offer to help them learn these tools, help them analyze their organizational structure and delivery processes, or at least point them to relevant information they can study on their own.

What we should stop doing is dropping onto "uh team" in the middle of an ocean of dysfunction, as if from a rescue chopper, and expecting meaningful and sustainable organizational change to occur just because we stick a few story cards on a wall in some conference room cum team room. It ain't gonna happen, my droogies.

Why is it so hard to find a clear definition of "agile"?

posted: 22 Mar 2010

It seems as if everyone's talking about agile software development these days. Everywhere you turn, it's agile this and agile that. It's only natural to want to find out about this thing everyone's talking about. But have you ever felt that it was hard to get a straight answer about just exactly what "agile" is supposed to mean? If so, you're not alone.

I think I know what "agile" means, and I spend a lot of my time helping others figure out what it means. I often hear people complain that there's no clear definition. I've been pondering the reasons why this is so, and I came up with a few. Maybe you can think of more.

1. Agile is a value system, not a predefined process or methodology. The Agile Manifesto is the defining document of "agile." It lists 4 fundamental values and 12 guiding principles. That is the full definition of "agile." There is nothing more. Based on those values and principles, we are expected to derive processes and practices that are consistent with the agile approach. This is very different from the way new ideas historically have been introduced to the IT industry. People are accustomed to following a predefined process step by step. Agile takes the approach of allowing (and expecting) people to derive the concrete from the abstract. This way of guiding change is unfamiliar to many people in the IT industry. People keep looking for (and insisting on) a predefined agile process they can follow by rote. When they don't find one, they feel as if there is no clear definition of "agile." Well, it is what it is and it ain't what it ain't. It's only a manifesto, after all.

2. As agile gained popularity, consultants and software vendors soon saw the profit potential in jumping on the bandwagon. They quickly re-cast whatever they were already selling as "agile." This introduced a multitude of different and sometimes conflicting ideas, practices, and processes into the mix, all labeled "agile" but few actually derived from the agile values and principles. As time went on, some consultants and software vendors began to incorporate at least a few agile ideas into their products and services. These were often mixed with traditional ideas and/or based on a faulty understanding of the values and principles. The result was to muddy the waters still more. If you ask three consultants or software vendors what "agile" means, you will get at least six different and probably conflicting answers. Because of point #1, this situation works well for consultants and software vendors. Not so great for you. Ah, well. So it goes.

3. Practitioners who understood the values and principles actively developed better and better processes and practices based on agile thinking. As creative people usually do, they took the basic ideas in different directions. Some incorporated useful ideas from other schools of thought, using the values and principles to guide their choices and their tailoring of those ideas. Different practitioners enjoyed success in different environments by using different "flavors" of agile. Many in the agile community enjoy debating philosophical points and their favorite approaches to agile development. Thus, someone who is trying to get a clear definition of agile hears a wide range of opinions, many of which conflict with one another, and all of which come from well-qualified individuals. The difficulty in sorting out these opinions is that none of them is wrong, although many of them contradict one another, and although some of the proponents of certain ideas tend to disparage proponents of other ideas as they press their point of view in a debate. This can be confusing for someone who is trying to find a clear definition of "agile." The bad news is it won't get any better. (What do you mean, "What was the good news?")

4. Once agile had crossed the chasm and was in use by early majority adopters for proof-of-concept efforts or small-scale initiatives, IT professionals began to feel as if they would not be perceived as "leading-edge" or "cool" unless they could qualify for the agile badge of honor. In some cases, managers read trade magazine articles praising agile methods, and commanded their underlings to "be agile, starting next Monday morning." In a rush to become cool or to obey orders, they looked at the practices agile teams were using and tried to mimic them. Unfortunately, they mimicked the practices without understanding the values and principles; they don't understand how or why the practices work, and so they don't enjoy the results agile proponents had described. To compensate, most such teams have re-instituted many of the Theory X practices and metrics that agile methods are supposed to supplant. In most cases the result has been to form cargo cults around the buzzword, "agile." Real cargo cultists in the South Pacific have grown weary of waiting for the ancestors to resume shipments and are turning to other faiths. Similarly, cargo cult agile teams tend to drift back toward familiar practices. When people who are curious about agile ask their colleagues in other companies or in other departments of their own companies about their experiences with agile development, they receive many different and contradictory answers, most of which have little or no connection with agile values and principles.

5. It seems to be a recurring pattern that when people experience success using some new way of doing things, they want to extend the new way beyond the boundaries of the activities for which it was designed. They reckon if the new way worked in one context, it ought to work just as well in another. Today, we hear a lot about "agile organizations" and "extending agile to the enterprise." The Agile Manifesto is about software development activities. The wording of the document is quite clear about that. It's all about delivering valuable software, measuring working software, collaborating closely with the customer of the software that is being built, and so forth. The authors made no pretense that agile was intended to solve all the world's ills, or even just those of a business enterprise. They were software development professionals who were interested in understanding the common features of the various approaches each had used with success in past software development initiatives. They wanted to discover whether there might be a few guiding principles that could help anyone who needed to develop software. They were not trying to craft a process to manage the operation of a whole enterprise. Their thoughts about lightweight processes began at the point where the work enters the IT arena, and ended at the point where the IT staff delivered working software to some target deployment environment. Agile does not deal with any larger issues. The attempts to extend agile beyond the realm of software development create puffed-up definitions that add to the confusion.

6. Some practitioners have become highly proficient at applying a specific process framework or methodology to agile projects, and they have conflated the notion of "agile" with the details of that particular process framework or methodology. Over time, this has resulted in the emergence of different "camps" in the agile community centered on specific process frameworks. When interested parties ask a practitioner from any given camp for a definition of "agile," they receive a mixture of general agile tenets and process-specific buzzwords and practices. When they ask a practitioner from a different camp, they receive a different mix of tenets, buzzwords, and practices. This makes them feel as if there is no clear definition of "agile."

7. Because the Agile Manifesto focuses on software development activities, practitioners must apply sound software development techniques in their work. Agile concepts are always applied alongside software development practices. As a result, many practitioners have come to view "agile" as inclusive of software development practices. Others see "agile" as pertaining to cultural and procedural issues while software development practices are orthogonal to process. When an interested person asks different practitioners to define agile, they may hear from one practitioner that specific development practices are part and parcel of agile, while another practitioner claims sound software development practices are not specific to agile and can be used with any approach. The result is a fuzzy definition of agile that seems to have no clear borderlines with other concepts or other methodologies.

8. Software vendors who want to sell agile-friendly project management tools build many options into their products. They do this because agile allows for many variations in practice, in keeping with its basic philosophy and the fact it does not define a specific methodology. Some tools allow for traditional processes to be managed alongside agile ones, since most organizations are mixed and management wants to avoid having several different tools to perform the same function. Many novice agile teams rely on their management tool to guide them in learning a new process. When the tool offers a multitude of options, it is hard for novice practitioners to understand what to do. They lack the experience to know how to craft their own process based on agile values and principles. Since the tool allows them to run the project in many different ways, they feel as if there is no clear definition of "agile."

My wikispaces site will close at the end of February

posted: 21 Feb 2010

My Wikispaces subscription will expire at the end of February, 2010, and I am planning to let it lapse. If you have any links or bookmarks to the site, at http://davenicolette.wikispaces.com/, please be aware the links will stop working when the subscription expires.

I have no complaints about the Wikispaces service. They do a fine job, and if you need a hosted wiki I would gladly recommend Wikispaces. They have very good availability, editing content is easy, the site is responsive, and pricing is reasonable.

Unfortunately, over the years I've accumulated content on multiple hosted sites and I have material scattered all over the place, including a good deal of obsolete and redundant content. I need to get it consolidated and organized for logistical and cost reasons.

Some of the material from my wikispaces site will appear online again eventually. In the meantime, please feel free to download anything you want from the wikispaces site while it is still available.

Merging the developer and tester roles, part 2

posted: 16 Feb 2010

A few days ago, I wrote about the idea of merging the roles of developer and tester, and touched on some of the ways this could improve software delivery processes. That post considered resistance to the idea on the part of people who self-identify as testers. What about resistance on the part of people who self-identify as developers?

Naresh Jain posted an interesting tweet the other day that is relevant to this topic. It reads as follows:

Software developer means "person who writes code" while the English word developer implies EVERYTHING required to build something.

The well-known testing specialist Michael Bolton, in a response to a comment on his blog, recently wrote:

But when it's not universally practiced, practiced at varying levels of quality, and addressed towards discovering solutions rather than discovering problems, I don't see any reason to relax vigilance.

In context, "it" refers to test-driven development, or more generally, good software development practices that are not widely used at present. By "vigilance" he means vigilance in looking for the kinds of bugs developers really ought to be embarrassed to deliver to testers in the first place, but (strangely) aren't.

Personally, I've been quite astonished at some of the things developers have said to me with respect to testing and code quality in the past few years. In 2008, I spent a day visiting development teams and IT management at a fairly large company, as a sort of extended interview process pursuant to a coaching opportunity. I asked the developers how they ensured their code was sound before handing it off to the testing group. I expected they would say they did a bit of ad hoc manual testing, at least, before proudly showing their code to anyone else. They didn't. They found the question puzzling. They felt that it was the responsibility of the testing group to find bugs in their code. Quite honestly, I did not understand how they could be satisfied with that; how they could look themselves in the bathroom mirror every morning and feel that they were really professionals. I asked them, What do you do, then? Do you just compile the code and throw it over the wall? They found that question puzzling, too. They glanced at one another for moral support, and shrugged. And that is not the only example of this attitude that I've encountered over the years. I can't help but wonder, if a developer doesn't bother to make sure his/her code works, then what function does he/she serve in the organization?

If I were one to theorize on the basis of very little data (and I am, as it happens), I would theorize that this attitude is largely the result of two influences:

  1. Behavioral conditioning. Developers have produced crap for so many years, and testers have ferreted out trivial programming errors for so many years, that everyone assumes this is the way things must be. We have become so accustomed to the problem that we accept it as normal. We don't even see it as a "problem."
  2. The lure of the next challenge. Most developers enjoy figuring out a design that solves a problem. Once they've figured out how to solve a problem, following up to tie off all the loose ends just isn't as much fun. They want to move on to the next problem. It takes a certain amount of self-discipline to finish up all the little details of the current problem before moving on. If you have a handy excuse, like "it's the testing group's job to find my bugs for me," then it's easy to succumb to the lure of the next interesting problem.

 

For the very simplest of applications, it may be feasible to promote code directly from development to production. But what about software that runs in enterprise IT environments that have stringent non-functional requirements? What about embedded software that has to be tested with the hardware before the product can be mass produced? Depending on the domain and context, additional testing may be required in between development and production; testing that the development team may not be set up to perform. This is where testing specialists can add significant value. I think this will remain true even when (or if) my vision of development generalists becomes a reality.

For context, let's assume an enterprise IT organization with certain characteristics. This picture will be familiar to those who work in large IT shops. The organization has a SOA environment that provides an ESB for accessing shared technical assets, back-end legacy applications, and externally-hosted services. The SOA environment provides single sign-on support, most of the -ilities, and supports multiple classes of service with an appropriate charge-back structure for internal departments that use IT resources. Typically, business applications are based on Web technologies, even though they are internal applications. The technical infrastructure comprises different network subdomains with different security profiles. The internal webapps use the familiar MVC-based layered architecture and may be deployed in multiple subdomains on various configurations of real and virtual servers, depending on operational requirements.

For development, business application development teams work in a technical environment that is far different from the production environment. Let's assume the company uses Java and one of the popular Java-based webapp frameworks for these internal business apps. For development and basic testing purposes, people may deploy their code to a single instance of a container such as Jetty or Tomcat. This may be embedded in an IDE, such as Eclipse or IntelliJ IDEA, or it may run in standalone mode on the developer's workstation and/or on a test server or continuous integration server. Nearly all the development and testing work in this environment uses a monolithic deployment of the application; that is, all architectural layers are deployed together under one instance of the server or container, and there is one instance of each layer.
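As an illustration of how simple that dev-time setup can be, here is a minimal sketch of an embedded Jetty launcher; the WAR file name is a hypothetical placeholder:

    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.webapp.WebAppContext;

    public class DevServer {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);        // one local container instance
            WebAppContext app = new WebAppContext();
            app.setWar("target/orderentry.war");     // hypothetical build artifact
            app.setContextPath("/");
            server.setHandler(app);
            server.start();   // presentation, business, and persistence layers all
            server.join();    // run together in this single JVM, unlike production
        }
    }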

In the production environment, different layers of the application may be deployed in different subdomains. For instance, the presentation layer may be deployed to the outer DMZ, the business layer to the inner DMZ, and the persistence layer also to the inner DMZ, where it interacts with the ESB and not directly with database systems or other back-end resources that may reside in the core subdomain. Presentation and business layer application components may not run under the same container, and neither may run under the containers used in the development environment. Client-side niceties such as Web 2.0 scripts actually live outside the corporate network perimeter altogether. The presentation layer code is subject to various mechanisms external to the application, such as single sign-on support, reverse proxies, load-balancing switches, or what-have-you. To support changing workloads, operations may redeploy portions of the application to more or fewer virtual server instances. To support availability, recoverability, and performance requirements, operations may move the application from one server cluster to another (and no application can depend on "owning" a server), add or remove hardware resources, and so forth. All the business applications publish events that feed logging, audit trail, chargeback, fraud detection, license management, and business intelligence facilities. None of this is directly testable in the development environment, and yet new or modified applications must not cause problems in the production environment.

That means there is another level of testing to be done in between development and deployment. This is where testing specialists can add the most value.

The purpose of this level of testing is not to "find bugs." The purpose is to gather information about the operational parameters of the software, to be used in capacity planning, workload balancing, hot failover site configuration, quality of service management, and other matters above and beyond the basic functionality of the application. To do this, testers must be given software that works; not software so full of trivial bugs that it won't even run.

Development teams have the professional responsibility to deliver code that meets all functional requirements and as many of the non-functional requirements as are feasible to test in the development environment. Testing teams have the right to expect the applications they receive already work. Otherwise, they cannot perform the level of testing they need to perform for the benefit of the enterprise.

Some developers choose not to "believe in" contemporary software engineering techniques that help us deliver working code with relatively little personal stress. That's fine. Everyone has the right to "believe in" whatever they please. As long as the developers deliver code that meets all functional requirements and as many of the non-functional requirements as possible, they are free to use any methods; even methods that make their own lives harder than necessary, if they insist. The only thing they are not free to do is to continue delivering crap, as in the previous century.

One of the most straightforward ways developers can learn to deliver working code effectively is to learn the skills of testing. Those of us who pursue a generalizing specialist path in our career development have found that there is a sort of mental "switch" to be flipped; we shift ourselves from development mode to testing mode. Thinking as developers, our goal is to ensure our code performs its intended functions and handles "normal" error conditions gracefully. Thinking as testers, our goal is to find the cracks in the wall; find the ways our code can be broken. This is something that requires awareness and practice, but is not such a great mental leap that it actually calls for two different individuals on the team, not to mention all the waste inherent in silos, hand-offs, WIP inventories, and back-flows between the two specialists. More than anything else, though, it requires us to accept our professional responsibility to deliver working code, and stop depending on testers to find our trivial programming errors for us. They have better things to do than that.

Toward fully-automated meetings

posted: 12 Feb 2010

A February 12 article in New Scientist, Boring conversation? Let your computer listen for you, describes the state of the art in Automated Speech Recognition (ASR) software. According to the article,

IBM Research in Almaden, California, [...] has developed a system called Catchup, designed to summarise in almost real time what has been said at a business meeting so the latecomers can... well, catch up with what they missed. Catchup is able to identify the important words and phrases in an ASR transcript and edit out the unimportant ones.

This technology offers an exciting breakthrough for managing meetings in the workplace. Given software that can interpret speech, all that is missing is software that can generate speech, and we can achieve fully automated meetings that do not demand the attendance of any real people.

There used to be a word game, way back in the dark ages when people used paper, in which one composed business-speak sentences by choosing words randomly from several lists: verbs, adjectives, nouns. It would be straightforward to develop software that generates sentences from lists of words. Combined with Catchup, we could realize a system to carry on both sides of any conversation without human participation. This would enable employees to go about their work without having to attend any meetings. A company would be able to get at least three times the amount of work done without adding staff.

Consider these word lists, for example:

List 1 (openers):
  • in the next quarter
  • in the next fiscal year
  • to maximize shareholder value
  • to reach our financial targets
  • unquestionably

List 2 (verbs):
  • generate
  • facilitate
  • productionize
  • leverage
  • quantify
  • innovate

List 3 (adjectives):
  • superior
  • market-leading
  • innovative
  • customized
  • customer-focused
  • Byzantine

List 4 (nouns):
  • products
  • services
  • innovations
  • marketing
  • successes
  • breakthroughs

List 5 (verbs):
  • overtake
  • supersede
  • win
  • grow
  • achieve

List 6 (nouns):
  • results
  • marketshare
  • growth
  • profit
  • cashflow

From lists like these, software can assemble meaningful statements to contribute to the discussion:

Assemble: Random(List 1), ', we must', Random(List 2), Random(List 3), Random(List 4), 'to', Random(List 5), Random(List 3), Random(List 6).

Examples:

  1. To maximize shareholder value, we must facilitate customer-focused innovations to supersede market-leading growth.
  2. In the next quarter, we must generate superior breakthroughs to grow marketshare.
  3. To reach our financial targets, we must innovate innovative innovations to win results.
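
To make the idea concrete, here is a minimal sketch of such a generator in Java. The class name is hypothetical, and only a few entries per list are shown:

    import java.util.Random;

    public class MeetingAvatar {
        private static final Random RNG = new Random();

        private static String pick(String[] list) {
            return list[RNG.nextInt(list.length)];
        }

        public static void main(String[] args) {
            String[] list1 = { "in the next quarter", "to maximize shareholder value", "unquestionably" };
            String[] list2 = { "generate", "facilitate", "productionize", "leverage" };
            String[] list3 = { "superior", "market-leading", "customer-focused", "Byzantine" };
            String[] list4 = { "products", "services", "innovations", "breakthroughs" };
            String[] list5 = { "overtake", "supersede", "win", "grow" };
            String[] list6 = { "results", "marketshare", "growth", "cashflow" };

            // Assemble: Random(List 1), ', we must', Random(List 2), Random(List 3),
            // Random(List 4), 'to', Random(List 5), Random(List 6).
            String opener = pick(list1);
            String statement = Character.toUpperCase(opener.charAt(0)) + opener.substring(1)
                    + ", we must " + pick(list2) + " " + pick(list3) + " " + pick(list4)
                    + " to " + pick(list5) + " " + pick(list6) + ".";
            System.out.println(statement);
        }
    }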

 

Then Catchup can pick out key words and phrases

Keyword 1: 'value', Keyword 2: 'customer-focused', Keyword 3: 'growth'

and feed them into another meeting attendee's avatar:

Assemble: Random(List 1), 'Clearly, we should', Random(List 2), Keyword(3), 'and', Random(List 2), Keyword(2), Random(List 4), 'to', Random(List 5), Keyword(1).

Examples:

  1. Clearly, we should productionize growth and generate customer-focused innovations to achieve value.
  2. Clearly, we should quantify marketshare and leverage breakthroughs to overtake quarter.
  3. Clearly, we should innovate innovations and innovate innovations to innovate innovations.

 

...and so on in the same vein. For the duration of the scheduled meeting, the avatars can engage in a frank discussion of critical issues facing the enterprise, such as this:

Frank: Unquestionably, we must leverage market-leading successes to overtake profit.
Francine: Clearly, we should quantify innovations and generate superior breakthroughs.
Francisco: I definitely agree that we must facilitate customized services to win growth.
Francesca: Certainly, if we innovate Byzantine marketing, we will achieve profit.
Franco: I feel I must remind everyone that when we leverage superior products, it will be possible to supersede cashflow.

Then, as the meeting time nears an end, the facilitator's avatar can say something like this:

Frank: It's time to wrap up now. I think we can all agree this has been a very productive meeting. The future looks bright!

This is truly an important breakthrough innovation to leverage superior successes and productionize customer-focused productivity.

Two lightweight, hosted project management tools

posted: 11 Feb 2010

I don't usually promote tools, and even in this case I'm not sure if I'm promoting anything, really. But I've been playing around with these two lightweight project tracking and collaboration tools, and I think they're pretty good. People should be aware of them. One is called Agile Zen and the other is called LeanKitKanban.

Some of the key features that I find compelling about these products are:

  1. They are hosted services. You don't have to buy/run/administer/support/babysit servers to host them in-house. That means your employees can spend their time on activities that differentiate your business from the competition rather than on back-office application support.
  2. There is no requirement to use any particular operating system or browser as the client. Any "standard" browser on any OS will work.
  3. The pricing is reasonable. If you have "enterprise" needs, then you may need to pay for "enterprise" tools. Unfortunately, most commercial software vendors seem to want all your money just for the privilege of using their tool. These tools are priced low enough that the subscription fee won't be a decision factor in your choosing an appropriate tool for your needs.
  4. These tools support collaboration over the Internet. That means teams can have dispersed members (not the agile ideal, but often a practical reality), multi-team projects can have teams located in multiple locales, individuals can telecommute when it makes sense (again, not the agile ideal and yet sometimes an important aspect of the organizational culture), and stakeholders can see up-to-the-minute status of their projects at any time, wherever they are.
  5. These tools are lightweight and simple. There's no need for a training course just to learn how to enter project information or get analytic reports out of the products.
  6. These tools don't lock you in to a vendor-defined process framework or delivery cycle. They're based on general principles of lean software development (as described on the website of the Limited WIP Society, for instance), they give you a useful lean-based structure that visually resembles a Kanban board, and they leave the rest up to you. While most commercial products try to support different customers' needs by defining a gazillion configuration options, these tools do so by leaving most of the details undefined and allowing you to enter whatever you please (or nothing at all) in most of the input fields.
  7. It's easy to set these tools up with information radiators. Put large monitors where needed and log them into your company's account. Every update to project-related information will appear on the monitors. No stress, no bother, no distractions from the good work of delivering customer-defined value.
  8. Both these companies offer a single-user basic level of service free of charge so that people like me can experiment with the tools, show them to our clients/employers, and recommend them. They don't collect contact information and they don't require any sort of evaluation license. This is a wise marketing strategy that more software companies ought to consider. It is working for them right now, as you read this blog post. Making things easy for people is good business. (IBM, Microsoft, Oracle: Listen up!)

 

Pricing (as of 11 Feb 2010)

Agile Zen:
  • 1 project, 1 user - free
  • 3 projects, 3 collaborators - $9/month
  • 10 projects, 10 collaborators - $29/month
  • 20 projects, 20 collaborators - $59/month
  • Infinite projects, infinite collaborators - $99/month
  • For up to date pricing, see http://agilezen.com/pricing

LeanKitKanban:
  • 1 board, 5 users - free
  • 5 boards, 5 users - $19/month
  • 10 boards, 10 users - $100/month
  • 20 boards, 25 users - $200/month
  • For up to date pricing, see http://www.leankitkanban.com/Account/Registrations/Pricing

Collaboration Features

Agile Zen:
  • Updates are shown on all screens that are viewing the same board
  • You can subscribe to email or Instant Messenger notification of updates
  • When another user updates a board, a notification of the action appears on your screen

LeanKitKanban:
  • Updates are shown on all screens that are viewing the same board
  • You can subscribe to email or RSS notification of updates
  • When another user updates a board, a notification of the action appears on your screen

Kanban Features Supported

Agile Zen:
  • Lanes - user-defined. Backlog and Archive are predefined.
  • WIP limits (when exceeded, the background color of the lane changes automatically to provide a visual indication)

LeanKitKanban:
  • Lanes - user-defined. Backlog and Archive are predefined.
  • Queues - they call this "horizontal swim lanes" (vertical and horizontal splitting of lanes is supported)
  • WIP limits (you are prompted for a reason to exceed the limit at the time you drag a card into the lane)

Kanban Features Not Supported

Agile Zen:
  • Queues. You would have to handle this by convention on your team; suggest pulling work items to the rightmost lane in the (imagined) queue.
  • Order points. This is not usually needed for software-only projects or software product support work.

LeanKitKanban:
  • Order points. This is not usually needed for software-only projects or software product support work.

Freedoms

Agile Zen:
  • No fields are required with the exception of the card Title.
  • The "tag" feature enables you to label, filter, and sort work items in any way that makes sense for your working context. There are no predefined work item types.
  • When you exceed a WIP limit your train of thought is not interrupted with a dialog box. The background color of the lane changes, providing a visual indication that you have exceeded the WIP limit.

LeanKitKanban:
  • No fields are required with the exception of the card Title.
  • Backlog and Archive (special types of lanes) can be split horizontally like any other lane. This feature can be used to manage releases or to support an iterative process model.

Useful Guidance

Agile Zen:
  • Simplicity of design and a thoughtful UI minimize the need for textual documentation.
  • Context-sensitive help is easy to access and very clear.

LeanKitKanban:
  • Each lane is of one of the types Ready, In Process, or Completed.
  • Context-sensitive help is easy to access and very clear.
  • The online help wiki is comprehensive and well-written.

Handcuffs

Agile Zen:
  • None. It's easy to adapt the tool to your process.

LeanKitKanban:
  • Types of cards - you are limited to predefined values: Feature, Task, Defect, Improvement.

Ease of Use

Agile Zen:
  • Smooth and consistently responsive. Occasional brief hiccup on the server side.
  • Automatically saves updates.

LeanKitKanban:
  • Feels a bit clunky; perceived responsiveness is uneven.
  • Updates are not automatically saved. You have to remember to click the "save" icon.

Analytics

Agile Zen:
  • Cumulative Flow Diagram
  • Cycle Time
  • Lead Time
  • Wait Time
  • Work Time
  • Efficiency
  • Phase Breakdown - where all the work items are in your process

LeanKitKanban:
  • Cumulative Flow Diagram
  • Cycle Time Diagram
  • Efficiency Diagram
  • Card Distribution - where all the work items are in your process
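
In case the analytics terminology is unfamiliar: lead time is conventionally measured from the moment a work item is requested, cycle time from the moment the team actually starts it, and efficiency is the proportion of cycle time spent working rather than waiting. A rough sketch of the arithmetic, with hypothetical timestamps:

    import java.time.Duration;
    import java.time.LocalDateTime;

    public class CardMetrics {
        public static void main(String[] args) {
            // Hypothetical history of one card
            LocalDateTime requested = LocalDateTime.of(2010, 2, 1, 9, 0);  // entered backlog
            LocalDateTime started   = LocalDateTime.of(2010, 2, 3, 9, 0);  // pulled into work
            LocalDateTime finished  = LocalDateTime.of(2010, 2, 8, 17, 0); // reached done

            Duration leadTime  = Duration.between(requested, finished); // customer's view
            Duration cycleTime = Duration.between(started, finished);   // team's view

            Duration workTime = Duration.ofHours(16); // hypothetical: two days hands-on
            Duration waitTime = cycleTime.minus(workTime);
            double efficiency = (double) workTime.toHours() / cycleTime.toHours();

            System.out.println("Lead time:  " + leadTime.toHours() + " hours");
            System.out.println("Cycle time: " + cycleTime.toHours() + " hours");
            System.out.println("Wait time:  " + waitTime.toHours() + " hours");
            System.out.printf("Efficiency: %.0f%%%n", efficiency * 100);
        }
    }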

Screenshots from Agile Zen:

Screenshots from LeanKitKanban:

Merging the developer and tester roles

posted: 10 Feb 2010

Warning: This post expresses ideas that are not politically correct. It will offend different people for different reasons. Parental guidance is advised.

In the past few years, one of the most interesting and positive trends in IT has been the merging of professional roles and the growing popularity of the idea of the generalizing specialist.

Business application development doesn't usually call for narrow-and-deep specialized skills in any one particular area, except possibly for short consultations on occasion. Instead, that sort of work benefits from generalists who can keep in mind the big picture and who can carry out whatever technical tasks the project calls for at any given moment. The agile approach to software development is especially well-suited to business application development, and it has become very popular over the past several years. Agile practices such as collocated team, cross-functional team, dedicated team, stable team, and collective ownership, as well as the guideline to maximize a team's truck number or bus number, have meant that technical specialists who previously worked in separate silos were working side by side for extended periods of time, spanning multiple projects. It was quite natural for them to start learning one another's specialties, at least to the extent necessary for business application development and support.

Those who have worked on business application development projects will relate to this. The level of database-related work involved rarely rises to the level that requires a narrow-and-deep specialist in database technology. There is no reason why an application developer cannot take care of the database-related work necessary for a business application development project in nearly all cases. A DBA can be brought in on occasion when necessary. The same general rule holds for other technical specialities, as well: Network technology, security, user interface design, etc. While user experience design (UX) is a genuine specialty, most business applications don't require more UI design than to follow conventional, well-known guidelines. While technical architecture is a genuine specialty, business applications almost always follow one or another reference architecture; every CRUD webapp doesn't require a fresh architectural design. The old saw that technical people and business people don't know how to talk directly to each other has been false for many years. So it goes for other specialized roles, as well. The result has been the emergence of the technical generalist — an ideal skillset for most agile-style application development and support work. The term generalizing specialist refers to a specialist who learns other skills to level-up his/her skillset.

From a lean point of view, a team of generalists alleviates flow problems stemming from all three of the famous M words: muda, muri, and mura. Its most direct effect is to reduce mura: unevenness or irregularity in work flow. To visualize how a team of generalizing specialists can achieve smoother work flow than a cross-functional team of specialists, consider the canonical Rational Unified Process (RUP) process diagram:

RUP was a step on the journey from the dysfunctional methods of the 1980s toward improved methods of delivering value through software. To address problems of long lead times and misalignment of results with business needs, RUP defined an iterative development/delivery process. Notice that the horizontal bands in the diagram, each representing a particular type of project activity, grow thicker and thinner depending on the RUP phase. This shows how the project's need for particular types of work grows or shrinks depending on exactly what the team is working on at the moment. Sometimes there is more analysis going on than coding; sometimes there is more testing going on than analysis; and so forth. When the team comprises specialists, each of whom carries out only one type of activity, the work flow is uneven. Although work is always happening on the project, individual team members experience long periods of relative inactivity punctuated by short periods of very intense activity and, probably, overtime. This is mura. This sort of mura results in some misalignment of results with business needs and some amount of needless additional lead time, although it is far better than the linear and spiral processes that predate it.

There is a sort of domino effect in this process model, as well. The mura leads to muri (overburden) in that during the periods of intense activity for any given specialist on the team, that specialist is trying to complete a great deal of work in a short time. Whenever humans do that, they inevitably overlook details due to fatigue or excessive mental context-switching. Other team members are idle while they wait for the results of the specialist's work. Waiting is a form of muda (non-value-add activity). While his/her teammates are waiting, the specialist's muri leads to defects. Defects lead to rework, which is another form of muda. In extreme cases it can lead to employee burn-out and high turnover, which reduce the organization's effectiveness in delivering value on a larger scale and for a longer time than the scope of any single project.

The problem is not inherent in the RUP model as such, but is a natural consequence of maintaining traditional role specialization. You can see the same effects using Scrum or Kanban or any other process framework, if the team members are specialists. It just so happens that the popular RUP diagram illustrates the natural variability in activities quite nicely. Variability in activities is normal; problems occur when we assume each activity requires a different specialist to perform it.

The growth of the generalizing specialist did not stop with purely technical specialties. As IT professionals have pulled testing forward in the process, increasingly emphasized behavior-oriented examples over implementation-based test cases, and improved the tools to support executable requirements, collaborative elaboration of requirements, and acceptance-test driven development (for example: FitNesse, Cucumber, Freshen, etc.), the traditionally-separate roles of Business Analyst and Tester have grown together into a single role. Even in the early years of agile adoption, Testers found it frustrating to deal with code bases that were not "stable," because the tools available to them assumed all testing would be done after-the-fact. To deal with continuous testing during development, Testers found the only logical thing to do was to sit down next to the Business Analysts. Many agile team leads, ScrumMasters, and project managers have reported that even when they do not explicitly plan to combine these roles, team members tend to fold the two roles together anyway. It's understandable — it would be difficult to separate the activities of Business Analysts and Testers in an agile context. If the functionality of a system is described in the form of executable examples, and the definition of done is that the examples run successfully, then it's only natural for analysis and testing work to be done by the same team members. Indeed, when those roles are formally separated on a project that is ostensibly "agile," the separation itself is a process smell.

A similar phenomenon has occurred at the line management level. Scrum is an iterative process framework that supports empirical process control; a natural fit for agile-style product development work. Some of the proponents of Scrum predicted that the traditional Project Manager role would evolve into something quite different; something along the lines of the Scrum role called ScrumMaster. This is a process facilitator, team coach, mentor, enabler, obstacle-remover role; but explicitly not a commander or boss role. The ScrumMaster role is formally defined to have responsibilities but no authority. As agile methods, and Scrum in particular, became part of mainstream IT, what actually happened is not that PMs transformed into SMs, but rather they became generalizing specialists who combine the functions of a PM and the functions of an SM. The result has been a shift in the IT industry from a Theory X management approach, common in the 1980s and 1990s and fundamental to the dysfunction of IT in that era, to a Theory Y approach that is consistent with agile and lean thinking.

At the present time (early 2010), we seem to have reached a cultural barrier when we attempt to fold together the roles of Developer and Tester. People who self-identify as testing specialists are especially resistant to the idea of merging the roles. They go to great lengths to explain why this isn't feasible, or why developers are genetically incapable of testing software. Most of the examples they present to support this view are very weak. I'll describe one example shortly.

I think the reason for this is we have reached an old status quo stage of transformation, industry-wide, with agile adoption (in terms of the Satir Change Model). Professional testers have carved out a niche for themselves as specialists in the agile world. They know exactly how to function and how to add value on agile projects provided role definitions do not change again. As Satir observed (paraphrased), people prefer the familiar to the comfortable; the comfortable to the better. The Satir Change Model is well-documented. It's usually illustrated like this:

In the context of step-by-step agile transformation, I like the way Willem van den Ende explains it in his blog. His illustration of progressive, incremental improvement represents each improvement as a foreign element introduced to an old status quo. The pattern Satir identified is repeated as each improvement is introduced and gradually becomes part of the new status quo. It's easy to visualize the growth of the generalizing specialist concept in this way, as work practices have evolved over the past decade.

The foreign element being introduced to the old status quo at the moment is the notion that development and testing activities can be carried out by the same individuals on any given software development team, in the context of business application development. (I keep repeating the bit about "business application development" because other domains have other requirements.) The goal is to improve delivery effectiveness by reducing mura (unevenness of flow, caused by hand-offs between specialists on the team), which in turn will reduce muri (overburden, caused by peaks and valleys in demand for specialists), which in turn will reduce muda (non-value-add activity, in particular defect correction and rework, but also inventory in the form of "completed" programming tasks awaiting formal testing).

When trying to explain why developers "can't" test their code effectively, testing specialists sometimes come up with rather weak examples. In describing a "new" category of software defect he calls Ellis Island bugs, the respected testing specialist Michael Bolton describes a small program that illustrates the Ellis Island problem:

The program takes three integers as input, with each number representing the length of a side of a triangle. Then the program reports on whether the triangle is scalene, isosceles, or equilateral. [...] no matter what numeric value you submit, the Delphi libraries will return that number as a signed integer between -128 and 127. This leads to all kinds of amusing results: a side of length greater than 127 will invisibly be converted to a negative number, causing the program to report "not a triangle" until the number is 256 or greater; and entries like 300, 300, 44 will be interpreted as an equilateral triangle.

He goes on to repeat the example in C and Ruby, and reaches the conclusion that this represents a category of software defect that is very difficult to predict, and that developers are likely to miss. Each language and each set of underlying library functions makes different default assumptions about how to interpret the input strings, and each platform handles integer values differently. Therefore, the only way to catch this type of defect is by exploring the behavior of the code after the fact. Typical boundary-condition testing will miss some Ellis Island situations because developers will not understand what the boundaries are supposed to be.

The closing phrase of the preceding sentence, that developers will not understand what the boundaries are supposed to be, illustrates what I mean about "weak examples" to support status quo role segregation. These programs, in whatever language, accept any input without validating it. That's just plain bad programming. In the context of agile-style development work, we would not begin work on a feature that was so ill-defined that boundary conditions were not understood. When we don't clearly understand what "done" will look like, we continue our discussions with stakeholders until we do understand it. That's common to all agile processes.

Using the typical behavior-driven approach that is popular today, one of the very first things I would think to write (thinking as a developer, not as a tester) is an example that expresses the desired behavior of the code when the input values are illogical. Protection against Ellis Island bugs is baked into contemporary software development technique, and is redundantly supported by other agile practices the team would normally follow: pair programming, automated unit testing, customer collaboration. When you add to that the growing trend of software craftsmanship, the idea that developers will tend to overlook Ellis Island conditions doesn't hold watir. (Just trying to lighten it up, since this is one of the offensive bits.)
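
To make that concrete, here is a minimal sketch of what I mean, written in Python (where integer wrap-around can't even occur, but the shape of the example is the same in any language). The function and test names are my own illustration, not Michael's code; the examples for illogical input come first, so truncated or nonsensical sides can never masquerade as triangles:

    import unittest

    def triangle_type(a: int, b: int, c: int) -> str:
        # Validate first: reject anything that cannot be a side length.
        if any(not isinstance(s, int) or s <= 0 for s in (a, b, c)):
            raise ValueError("sides must be positive integers")
        # Reject side combinations that violate the triangle inequality.
        if a + b <= c or b + c <= a or a + c <= b:
            return "not a triangle"
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    class TriangleExamples(unittest.TestCase):
        def test_rejects_nonpositive_sides(self):
            with self.assertRaises(ValueError):
                triangle_type(300, 300, 0)

        def test_long_sides_are_not_silently_truncated(self):
            # 300 must be treated as 300, never wrapped to a signed byte.
            self.assertEqual(triangle_type(300, 300, 44), "isosceles")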

I can definitely agree that if a development team did not follow contemporary software engineering practices, then they would need someone to tag along after them to look for bugs in their code, like the shovel-bearing clowns who follow horses in a parade. The real problem in that case would not be that developers categorically write unbelievably bad code like the triangle program, but that the team in question was not following well-known effective development practices. The real solution is to fix the real problem, and not just to grab another shovel.

Eventually, I foresee a different set of professional career paths than those which evolved from traditional software development processes. At one level, there will be narrow-and-deep specialists who build technical infrastructures and deal with edge-case non-functional requirements for high throughput, high availability, high performance, high volume data storage/access, or extreme security needs. These people will usually work in the central IT department of a large corporation, or as innovators in start-up firms that have creative new ideas for using information technology. These will be the software craftspeople. At another level, there will be generalists who can assemble the building blocks of business applications cleanly and with appropriate attention to quality, following standard guidelines for issues such as UI design, solution architectures, and data storage. In contrast with present-day organizational structures, they will work in the business units that need the software, and not in a separate department. They will have deep expertise in the business domain and reasonably good general technical skills.

I think the resistance we see at the moment to merging certain traditional roles is a point-in-time snapshot of a longer-term process of change. The choice before us as individual practitioners is whether to prepare for and progress with the changes, or find a niche job where we can play out the remaining years of our working lives within our current comfort zone. The cheese is being moved, with or without our approval.

More about velocity and Iteration 1

posted: 09 Feb 2010

Dave Rooney attempted to post a comment to the recent post about initial team velocity but he had problems with Tripod's captcha mechanism, as have many others. He sent me the following email instead:

Actually, management's expectation was that the originally selected Velocity was at best a WAG. The manager in question had no problems at all with the progress, but others that report to him were concerned that he would.

That fear was based on previous experience using their previous process.

I do understand, though, that creating an artificial Velocity in the absence of any real indication may have some pungency. What I do when I'm coaching, though, is to have the team select the amount of work that they feel "looks right" for the 1st iteration. By that time, I have already told them numerous times that people and teams are generally overly optimistic about their capacity. If they don't complete all the stories, it's an excellent teaching mechanism. If they do complete the stories, which has happened to me as many times as it hasn't, then I just sound pessimistic. ;)

I've wavered back & forth over the years with this one. At times I've thought that for Iteration 1 you just pick the top Story, finish it, and pick the next. Repeat until the end of the iteration. I've seen human nature creep into that one, though - if you don't provide a goal (i.e. 2 or 3 or 'n' stories), then people will only get 1 done.

I'm finding that picking what looks reasonable to the team has worked OK. If they don't meet that goal, then they reflect on why in the Retrospective. At this particular client, there was a LOT of infrastructure work that had to be done to even start the iteration. I brought that up at the Retrospective - why is it so hard to get going? Should effort be expended on making it easier?

The other thing at play here was that the VP of Soft. Dev. was away during the Iteration. There was fear that he would freak when there was only one story completed at the end. In speaking with him prior to his departure, I didn't think he was going to be upset. I think the perception that he would be upset was based on his previous frustration with the quality of the company's product, and not on the effort of the development team. In the end, he wasn't upset, with of course the caveat that he expects the team to improve over time. :)

I hope Dave doesn't mind my posting his comments publicly, because I think there are some valuable gems here.

Selecting an amount of work that "looks right" to the team for the first iteration is exactly the right thing to do, in my opinion. I completely agree that people tend to be optimistic about their delivery capacity at first. Some people call this commitment-based planning, and some teams use it to the exclusion of story sizing or estimation on an ongoing basis, not just for the first iteration. It's a good technique. When teams are optimistic about their capacity, and we use the outcome as a teaching opportunity (as Dave mentioned), the focus seems to be on improving the accuracy of short-term, small-scale estimation. I'm not too crazy about that as a goal, because customers don't purchase estimates, and estimation is therefore a non-value-add activity no matter how accurate the estimates may be. But that's a different topic.

What I was more interested in was the implication that management expected a certain number of stories to be completed in the first iteration. Apparently, this wasn't a hard-and-fast expectation in this case, although some people were worried about management's reaction. Often, however, there is a hard-and-fast expectation. IMHO we need to avoid creating such an expectation. It is a question of managing expectations.

When consultants try to convince traditional management to try an iterative process such as Scrum, there is a tendency to overplay the positives and underplay the negatives. This is understandable as a sales approach, but it can backfire once the fun begins. I think we should be careful not to set unrealistic expectations. I try to be very clear that the team's performance will initially be lower than it was under the organization's old process, and will then rise to some natural, sustainable level. This level of performance is not to be set as a target. It will be observed based on the team's performance over the first few iterations. By using real observations of outcomes, we can have confidence in velocity as a yardstick for release planning. Once we begin to set velocity targets, teams will game the numbers and velocity will become worse than useless as a planning tool. ("Worse than useless" means "misleading.")

When I took the CSM course from Ken Schwaber a few years ago, he handed out a list of considerations that can affect a team's initial velocity. The list was several pages long and provided an adjustment factor for each consideration. Most of the adjustment factors were very small. I think that is more fine-grained than necessary. Including so many considerations doesn't actually help us arrive at a reasonably accurate adjustment factor for initial velocity. What I've found practical is to consider just three things: (a) Has the team worked together before? (b) Is the team familiar with the business domain? (c) Is the team familiar with the technologies used in the solution?

When the team hasn't worked together before, I assume that's a 40% hit to velocity initially. For each of (b) and (c), I assume a 20% hit (for general business application development). In a nutshell, that means whatever the observed velocity turns out to be in iteration 1 represents a percentage of that particular team's norm on that particular project. They will ramp up to their own norm in 3 to 6 iterations. I've found that this provides management with a fairly accurate forecast of the team's performance over time. Managers like to see numbers, and when the numbers are explained in this way it helps to set realistic expectations.
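
For the arithmetic-minded, here's a minimal sketch of those rules of thumb in Python. The percentages are the heuristics above, not universal constants, and the Iteration 1 velocity shown is a made-up number for illustration:

    def initial_velocity_factor(worked_together: bool,
                                knows_domain: bool,
                                knows_tech: bool) -> float:
        """Fraction of the team's eventual norm expected in Iteration 1."""
        factor = 1.0
        if not worked_together:
            factor -= 0.40  # new team: roughly a 40% hit
        if not knows_domain:
            factor -= 0.20  # unfamiliar business domain: roughly 20%
        if not knows_tech:
            factor -= 0.20  # unfamiliar technology: roughly 20%
        return factor

    # Example: a new team that knows the domain and the technology.
    factor = initial_velocity_factor(False, True, True)  # 0.60
    observed_iteration_1 = 18                            # hypothetical points
    estimated_norm = observed_iteration_1 / factor       # 30 points per iteration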

Consider this spreadsheet, for example. It presents numbers from a real project and illustrates this approach to communicating the meaning of velocity to management when working with a new team. The first sheet, Projection based on Iteration 1, shows what would happen if we took the actual observed velocity of the team at the end of Iteration 1 and used it to project the likely completion date for the defined scope. This assumes the observed velocity in Iteration 1 is the team's norm (and this assumption is quite wrong). The projected completion date will be far out in the future and probably unacceptable to management. If we showed this to management as-is, they might cancel the project and throw Scrum right out the window. This is an example of what not to do.

And now for a positive example. In this particular project, the team had not worked together before. They were familiar with the business domain and the solution technology. So, I assumed the observed velocity in Iteration 1 represented about 60% of the team's norm, accounting for the approximate 40% impact to initial velocity as the team gels. The project comprised two broad sets of features. The team used an agile planning technique called User Story Mapping, elaborated by Jeff Patton, to determine the minimum features needed for the first release, and the minimum acceptable implementation of those features to meet business needs. They then did a rough sizing of those User Stories. We deferred analysis of the second set of features and just assumed the general scope would be about the same as that of the first feature set.

It's typical to include a buffer when planning releases, regardless of methodology. For traditional projects that have a comprehensive WBS as of the start of development, we've typically used a planning buffer in the range of 25-35%. For agile projects, characterized by emergent requirements, I've found a buffer of 100% to be about right for typical business application development. What tends to happen is that scope increases as we learn more about the specific requirements, and decreases as the Product Owner decides to drop features from the solution based on incremental business value delivered and on working with the incremental results, typically resulting in a net change of approximately +100%. In this particular case, that gave us an initial SWAG of 204 points.

For management reporting at the end of Iteration 1, I assumed the observed velocity was 60% of the team's norm and that the team would ramp up to its norm in 20% increments iteration by iteration. (There's no magic in the 20% figure; it was my best guess based on my familiarity with the team. This is one of the places where judgment and experience play a role, and there's no cookbook.) The sheet Projection after Iteration 1 shows how we reported the projected delivery date based on these assumptions. The value for Iteration 1 is real. The value for Iteration 2 was plugged in; Iteration 2 hadn't happened yet, so we had no empirical observation of velocity. The value represents 80% of the team's ostensible norm, based on the assumption that the observed value for Iteration 1 was 60% of the norm. Values for iterations 3 through 6 were also plugged in, and they represent 100% of the team's norm based on the same assumptions. You can see that the trend line provides a much more palatable projection than when we failed to apply the adjustments.
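
In code form, the plugged projection works out like this. This is a sketch only: the 60% starting point, the 20% ramp, and the 204-point scope come from the project described above, while the Iteration 1 velocity is hypothetical:

    scope = 204                     # initial SWAG in story points
    v1 = 18                         # observed Iteration 1 velocity (made up)
    norm = v1 / 0.60                # inferred sustainable norm: 30 points

    velocities = [v1, 0.80 * norm]  # Iteration 2 plugged at 80% of norm
    while sum(velocities) < scope:  # Iterations 3+ plugged at the norm
        velocities.append(norm)

    print(f"Projected completion in iteration {len(velocities)}")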

Subsequent sheets in the spreadsheet file show the velocity chart as of Iterations 2, 3, and 6. As of Iteration 3, we generated the trend line based solely on empirical observations of velocity. We only used the "plugged" values for reporting Iterations 1 and 2. In reality, the project followed these projections pretty closely. I've found this to be fairly reliable on other projects, as well.

In my long-winded way, the point I'm finally coming around to is that velocity isn't about "estimates" or "targets." It's useful for release planning if and only if the numbers are real observations, and not some sort of "gamed" numbers or guesses. Since we know a team's first iteration is likely to be less productive than a normal iteration, it makes sense to take this into consideration when reporting the results of early iterations.

A few related points:

  1. Velocity depends on fixed-length iterations. It is not meaningful in a continuous-flow process. We use different measures for that sort of process.
  2. The example above assumes a fixed scope, and uses velocity to project the likely completion date for that scope. You can also use velocity to project the amount of scope that is likely to be deliverable by a fixed release date.


The bottom line in regard to Dave's story is that we should be careful not to lead management to have concrete expectations of delivery for the first iteration of a new project. We simply don't have a basis for that sort of expectation (yet). By Iteration 4, we will have enough history to do so, in most cases.

There is no velocity before there is a velocity

posted: 07 Feb 2010

A certain tweet I saw a few days ago has been on my mind. Something about it was bothering me. I think I understand what it is. The tweet reads as follows:

Tomorrow is end of 1st iteration with new client. Only about 1/2 the stories will be done. Management response will be interesting. :)

So, what's wrong? In the first iteration, the team is uncalibrated. There is no velocity yet. Therefore, the concept of "1/2 the stories" is meaningless. There should be no hard-and-fast expectations of the team's delivery capacity as of the first iteration. The fact that management has a firm expectation of delivery from the first iteration is a process smell.

Temptation

posted: 07 Feb 2010

I remember hearing a radio play on NPR way back in the 1970s entitled "Temptation." I wasn't able to find any information about it just now, but I recall the story line. It seems a monk sought refuge from a storm in an inn in a village resembling a place in medieval Europe. As he sat in the pub talking with some of the locals, they asked him why he chose the life of a monk. He explained that the world was full of distractions and temptations, and he found the isolation of the monastery peaceful and relaxing. He had few duties to perform and no real worries. The locals nodded and murmured their approval of his choice. Yes, they acknowledged, it would be very tempting to avoid the trials and tribulations of ordinary life, and go to live in a separate place where basic needs were taken care of. Very tempting, indeed!

In the ordinary life of our world, trials and tribulations arrive in the mail every month like clockwork. This year, coaching opportunities seemed to get off to a slow start. Yet, the aforementioned trials and tribulations all have due dates attached to them. They all seem to want money, for some reason. How tedious! So, while waiting for one of several possible coaching opportunities to transform from Maybe to Yes, I took a short-term contract as a software developer. It was really one of the most enjoyable and relaxing things I've done in the past few years.

The client company has a fast-paced environment. It's not a start-up, strictly speaking, but it's a fairly young company that is currently on a sharp growth curve. It's not an agile or lean environment, and they didn't bring me on board to help them with that sort of thing. (Sorry to be crude about it, but they didn't pay me appropriately to do that, either.) I was just there as a programmer. Of course, I wrote code the way I always do — using TDD. I can relate to Slartibartfast in a way; he always does fjords because...well, that's what he does. Sometimes he wins an award for it. Usually it goes unnoticed. I always do TDD because I'm uncomfortable delivering code that may or may not work. Sometimes that is appreciated. Usually it goes unnoticed. Unlike Slartibartfast, though, as soon as someone comes up with a better way to deliver working code than TDD, I'll change.

Sometimes the coaching game gets a little frustrating. It seems as if everyone says they want to improve their processes, their methods, their effectiveness, their skills, their habits. When it comes down to doing so, it's a different story. I know that's not universally true, but sometimes it's nice to take a break from all that and just write code. No political battles, no cultural barriers, no psychological resistance to change.

This client had an application to build and deploy on an aggressive release schedule. The other developers felt they were under some pressure because of this. I didn't feel any pressure. My scope of responsibility was much smaller than it usually is. I think I mentioned already this is not an agile environment. There was no collective ownership. I was only responsible for delivering "my piece" of the solution. The rest of the group (I can't say "team," really, although the group worked very well together) didn't use TDD or any other agile development practices. So, they spent a certain amount of time with their heads in a debugger, and they were frequently surprised by unexpected behavior of their code once it was integrated with other code. What little testing they did was manual, ad hoc, and after-the-fact. I didn't experience that kind of pressure personally, because my test suite protected me from most of those issues.

As far as release date pressure is concerned...well, when the release date was in jeopardy, they just moved it. That's the time-honored approach of traditional methods. They didn't use dimensional planning or user story mapping or a similar agile planning approach to deliver a valuable solution increment early. Heck, they didn't even have user stories. I'm not really sure they had a clear definition of done for the project as a whole, let alone for individual features.

Despite the fact that the guys were supposed to deliver a solution by a certain deadline, they were constantly interrupted to do other work; there was no concept of a dedicated team. They didn't interrupt me, since I was an outsider and couldn't have helped with problems that popped up in other departments. I was free to keep chipping away at examples and building up the code for "my piece." Lovely!

So...yeah, not an agile environment. Okay. But still, a fun environment. A good, meaty programming assignment. A vibrant, bustling, noisy work area; just the way I like it. A group of highly skilled colleagues with a geeky sense of humor. An opportunity to learn another programming language. Company-supplied coffee. The chance to contribute to a product that improves quality of life for customers. It's all good. And on top of all that, a relaxing and enjoyable work experience. Not a lot of money, but that really wasn't a big deal. Sort of like living in a monastery, I guess. No responsibility for changing the world; just a few duties and no real worries. It's tempting to walk away from coaching and organizational transformation in favor of this sort of work. Very tempting, indeed!

But the reality is a person can add a great deal more value by improving an organization's end-to-end processes than by sitting at a desk writing code. Despite the temptation, it would be selfish of me not to rejoin the fray. So, after this refreshing break, I'll be returning to the battlefield in a week or so. One of those organizational coaching opportunities finally transformed from Maybe to Yes. It's time once again to brace for political battles, cultural barriers, and psychological resistance to change. It's time to leave the monastery and return...home. Where I belong, fighting the good fight.

Still, it's nice to know where to find the monastery, for future reference. I have a feeling I'll be back.

How to compare elephant herds

posted: 02 Feb 2010

Many project teams use a relative sizing technique for short-term planning, such as Planning Poker based on Story Points. Although the technique has been around for quite a few years and has been amply defined, described, and discussed, there are still some managers out there who believe it is meaningful to compare Story Points across multiple teams, projects, and organizations. And some of those managers want to use the numbers to set performance targets.

It's becoming difficult to think of new ways to explain how wrong this is. For lack of anything better to do, and not because there is any hope of success, let's try again.

How many Elephant Points are there in the veldt? Let's conduct a poll of the herds. Herd A reports 50,000 kg. Herd B reports 84 legs. Herd C reports 92,000 lb. Herd D reports 24 head. Herd E reports 546 elephant sounds per day. Herd F reports elephant skin rgb values of (192, 192, 192). Herd G reports an average height of 11 ft. So, there are 50,000 + 84 + 92,000 + 24 + 546 + 192 + 11 = 142,857 Elephant Points in the veldt. The average herd has 20,408.142857143 Elephant Points. We know this is a useful number because there is a decimal point in it.

To earn a bonus this year, your team must deliver at least 20,408.142857143 Elephant Points.

If we throw the numbers into a spreadsheet and produce a pretty bar chart of Elephant Points per herd, the conclusion practically draws itself.

Therefore, herds A and C get a bonus. Herds D and G are fired.

The Blackbird Effect

posted: 24 Jan 2010

The SR-71 Blackbird, a global reconnaissance aircraft developed by Lockheed for the United States Air Force, first flew in 1964, and was in service from 1968 until the late 1990s. Even at the time of its retirement, it represented relatively advanced technology as compared with most aircraft. In 1974, a Blackbird set the speed record between New York and London at just under 2 hours. The Blackbird's speed record for manned air-breathing aircraft still stands, although manned rocket-powered aircraft and at least one unmanned air-breather have gone faster.

The Blackbird was designed to operate under extreme conditions: Flying long missions at over 85,000 feet at speeds up to around Mach 3.2, the airframe is subjected to heat and pressure extremes no conventional jet could survive. It is not enough to say the Blackbird can fly high and fast; it would be fair to say it must fly high and fast. Some of the engineering that makes it possible for the Blackbird to operate as it does (or as it did, while in service) also makes it problematic for the aircraft to operate at "normal" altitudes and speeds.

Some of these engineering details are described in a Wikipedia article about the SR-71. For example, the skin of the aircraft was made of titanium. Titanium not only tolerates the high temperatures and pressure of Mach 3+ flight, but tests of operational aircraft showed that the metal grew stronger as a result of the heat treatment it received during flight. The structure of the aircraft used 85% titanium and 15% composite materials. To allow for expansion under high heat and pressure, key surface areas of the inboard wing skin were corrugated rather than smooth. A conventional aircraft's smooth skin would split or curl under the operating conditions for which the Blackbird is designed.

To account for expansion, and also because there was no fuel sealing system that could survive the high pressure and temperature at speed and altitude, the fuselage panels were designed to fit loosely. Once the aircraft was at altitude and speed, thermal expansion and air pressure held the fuselage together. On the ground, fuel leaked out. The aircraft would take off and climb rapidly, then be fueled again in flight before moving up to its operational altitude and speed.

The fuel that pooled on the runway prior to take-off was special, too. It is called JP-7. Its chemical composition is designed to produce exhaust gases that are hard to detect; one aspect of the aircraft's stealth design. JP-7 also has a high flash point, so that it can be used safely at high temperatures. This characteristic also made JP-7 useful as a coolant and (because it is very slippery) hydraulic fluid during flight, prior to its being burned. The JP-7 would not even ignite unless it was lit by injections of triethylborane, which ignites on contact with air. To start the engines initially, and to light the afterburners, triethylborane was injected into the fuel flow by the pilot. That was important, because the special jet engines on the Blackbird are designed to fly on afterburners continuously.

That's not the only unique characteristic of the Pratt & Whitney J58-P4 engines. The engine is a hybrid that operates as a turbojet at "normal" speeds and as a ramjet at high speeds. The Wikipedia article cited previously, as well as other published sources, describes this aspect of the design much better than I can. It's quite impressive, really.

There are many more fascinating details about the special design of this aircraft. For my purposes here, the point is that in order to create an aircraft capable of operating at the speed and altitude of the SR-71 Blackbird, it was necessary to build an aircraft that could not, for all practical purposes, operate under "normal" conditions. Loose panels, leaky fuel, and corrugated skin wouldn't last very long flying low and slow, or even moderately high and fast. The fleet of retired Blackbirds can never be repurposed as passenger aircraft, fighter-bombers, fire-fighting aircraft, or crop-dusters. They can only function at Mach 3+ and 85,000+ feet, or as museum pieces.

And that brings us around to the subject of commercial "enterprise-class" software products, and when it makes sense to choose them over Open Source or home-grown alternatives.

Information technology professionals love to argue, and it sometimes seems as if they especially love to engage in passionate, circular arguments that have no chance of ever resulting in a useful outcome. One such argument that has been popping up here and there in cyberspace recently is the debate between those who favor commercial software solutions and those who favor Open Source and home-grown solutions. Proponents of both sides of the argument appear to believe that there is exactly one rational answer to the question of commercial vs. Open Source software: Their own. Proponents of both sides of the argument declare proponents of the other side of the argument to be suffering from a cargo-cult mentality.

I'm pleased to report that they are both wrong.

Enterprise-class software has certain characteristics that make it suitable for the largest of large-scale processing requirements. Many managers choose products in this performance class because they reckon any software that can handle extremely high loads with good performance and high availability must certainly be able to handle moderate loads with reasonable performance and acceptable availability. Managers also choose such products because they tend to be optimistic and forward-looking in their assessment of their own company's needs. It may be true that we aren't as big as the largest companies in the world today, but we are well on our way; so let's buy the same tools as the largest companies use.

It all makes perfect sense, on a certain level. However, there are four common problems with the line of reasoning many managers take when deciding whether to go with a high-end commercial product. First, the assumption that a high-end product will easily be able to handle less-extreme loads is not necessarily accurate. Just as the SR-71 Blackbird cannot operate as a crop-duster, a high-end enterprise-class product doesn't support moderate processing loads very smoothly. Like the Blackbird, enterprise-class products are specially designed to function well under extreme operating conditions. The design features that enable them to do so also introduce significant internal overhead at low and moderate loads. The second problem is that many managers have an incorrect notion of just how big "big" is. They think the operational loads their company must support are big, when in reality they may only be moderate, in the grand scheme of things. Third, when the company really does need an enterprise-class solution in one area of the business, many managers assume they need such solutions in all areas of the business. Finally, many managers underestimate the total cost of ownership of enterprise-class products.

Recently, I've been working with a certain enterprise-class software product from IBM called WebSphere Commerce. It is a tool for supporting a high-volume e-commerce website. Extremely large e-commerce operations such as Amazon or Lands' End can make use of a product in this class to support their online stores. I don't know that these particular companies actually use WebSphere Commerce; I mention them only to establish a frame of reference for judging a company's real need for scalability, performance, and availability. (I understand IBM has one competitor in this space, ATG. There are so few customers that legitimately require a product of this kind that there isn't room in the market for many vendors.)

I have been absolutely blown away by the technical architecture of IBM WebSphere Commerce. The level of detail the engineers have reached in tailoring the product for each supported platform is downright amazing. Technically, it is impressive as hell. It runs on IBM z-Series and i-Series machines as well as on AIX boxes and on commodity Intel hardware running RedHat Enterprise Linux, Suse Enterprise Linux, or Microsoft Windows. Although the product is based on a cross-platform technology (Java EE), it does not run on "any" Java Virtual Machine. This is because it has been heavily tailored to take full advantage of the hardware/software architecture of the handful of supported platforms on which it runs.

One could say, if one were inclined to say such things, that WebSphere Commerce's Pratt & Whitney J58-P4 engines are explicitly designed to operate only under specific conditions, and to operate very well indeed under those specific conditions. The architecture of WebSphere Commerce is heavily customized to support high performance, high volume, high availability, and high reliability under heavy transaction loads. If you fly this bird at "normal" altitudes and speeds, its internal mechanisms to support high loads start to thrash, and it starts to leak virtual JP-7 all over the server room floor. You won't get the pay-back you might have expected in exchange for the high cost of ownership. It will work, but there's more to running a cost-effective IT operation than just that. A lot of solutions will work.

I've mentioned cost of ownership a couple of times already. Let's explore that angle a bit more. Obviously, a commercial product comes with a price tag. That is only the beginning of cost of ownership, though. An enterprise-class product such as IBM WebSphere Commerce isn't sold outright like a car or a pack of chewing gum; customers pay licensing fees. Products in this class are designed to be very difficult to learn, configure, administer, customize, and operate. Therefore, customers also pay for support, consulting, training, books, certification programs, specialized development tools that "know" how to interact with the product, vendor-sponsored publications, and user group memberships. Apart from the costs paid directly to the vendor and its business partners, customers also pay for facilities, utilities, personnel, insurance, and anything else necessary to keep the e-commerce site operational.

Hey, wait a minute: Why would a software company design their flagship products to be difficult to learn, configure, administer, customize, and operate? Remember the small size of the market, as I mentioned earlier. The big software companies invest a significant amount in research and development of their enterprise-class products. If you doubt that, then I invite you to read up on the architecture of IBM WebSphere Commerce, as I've been doing recently. I've worked with several other enterprise-class products over the years, as well; not only from IBM but from other vendors. They all share the characteristics that they are hard to learn, configure, administer, customize, and operate. And all are architected carefully to maximize their scalability, performance, and availability.

To recoup the development cost and start reaping profits, the software companies have to sell more copies of the product than there are customers who really need an enterprise-class solution. They are selling solutions that are appropriate for Fortune 100 companies to thousands upon thousands of mid-sized and small companies. In addition, they need to sell secondary products and services to try and squeeze a bit more revenue into the picture: Training classes, certification programs, consulting services, progressive levels of support, update subscriptions, custom development tools, and anything else they can dream up. If the products were as easy to learn and live with as Open Source solutions, the software companies would not be able to sell all this extra stuff.

There's another reason for the high cost of ownership and the relatively high complexity of commercial products. Because the software companies must sell as many copies of their products as they can, the products must be flexible enough to accommodate the needs of a wide range of customers. Flexible software is configurable, customizable, and extensible. Configurable software can be made to operate differently by setting options in configuration files. Customizable software allows for the replacement of functional elements with custom versions of those elements. Extensible software offers a way to add functionality beyond what comes out of the box, through a plug-in architecture, user exits, or some other mechanism. Software vendors need their products to be highly flexible so that they can work for many different customers. If they limited the flexibility of the product, they would also be limiting their potential market.
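
To illustrate the three flavors of flexibility, here is a rough Python sketch. Every name in it is hypothetical — this is not any vendor's API, just the shape of the mechanisms described above:

    from typing import Callable, Protocol

    # 1. Configurable: behavior changes via options in a configuration file.
    config = {"currency": "USD", "tax_rate": 0.08}

    # 2. Customizable: a functional element can be replaced wholesale.
    class TaxCalculator(Protocol):
        def tax(self, amount: float) -> float: ...

    class DefaultTaxCalculator:      # vendor-supplied element
        def tax(self, amount: float) -> float:
            return amount * config["tax_rate"]

    class EuVatCalculator:           # customer's replacement element
        def tax(self, amount: float) -> float:
            return amount * 0.21

    # 3. Extensible: new behavior added through a plug-in / user-exit hook.
    order_hooks: list[Callable[[float], None]] = []

    def register_hook(hook: Callable[[float], None]) -> None:
        order_hooks.append(hook)

    def checkout(amount: float, calculator: TaxCalculator) -> float:
        total = amount + calculator.tax(amount)
        for hook in order_hooks:     # every registered plug-in runs
            hook(total)
        return total

Any one customer will use exactly one configuration, one calculator, and perhaps one hook; all the rest of the machinery is along for the ride.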

From the point of view of any one customer, however, all that flexibility is merely unnecessary complexity that adds no value. Any one customer is interested in only one configuration of the product. But there is no way to get the scalability, performance, and availability characteristics of enterprise-class software without paying for the flexibility as well; and that is of benefit only to the vendor.

It sounds as if I'm weighing in on the side of the Open Source aficionados, doesn't it? Not so fast! There's more to the picture than cost. We have to balance cost against revenue generation. One of the sales people from the IBM business partner that sold WebSphere Commerce to the small firm where I'm presently engaged told me a story of a client of theirs that runs a large WebSphere Commerce operation that brings in $35 million in revenue annually, against costs paid directly to IBM of around $500,000 per year. I would surmise they are paying another $500,000 in non-IBM-related operating costs to support the operation, as well. So, they are enjoying a 35:1 return. (Of course, this ignores other costs such as the cost of the merchandise they sell, but you get the picture.) I think that's a fantastic deal. They have software that is really capable of supporting a high-volume, high-availability e-commerce operation over the web; they get support and help from IBM (from my recent experience, the responsiveness and quality of IBM support are very good); they get the training they need to babysit the thing (no easy task). It's well worth the cost for this company, because they obtain a significant return on investment. I'm hard-pressed to imagine most companies ever reaching that level of e-commerce traffic. When they overbuy, it's an example of The Blackbird Effect.

What do I mean by The Blackbird Effect? It's the phenomenon whereby companies sign up for very expensive, enterprise-class software products when they don't really have an objective business case for them.

The sales person who told the story of the happy client wanted to give the impression any company that wishes to earn $35 million a year from its e-commerce site ought to pony up and start paying IBM $500,000 a year as soon as possible. This sort of reversal of cause and effect is typical of the sales pitch that seems to be so effective in overselling Blackbirds to companies that will never use more than 10% of the products' capabilities. You don't build your business to the level of $35 million in revenue by throwing your money away in the early years. The message I take from the story is this: If your company has a genuine need for enterprise-class solutions, then you will have the necessary cash flow and personnel to support those solutions. If you don't have the latter, then you'd better think of an alternative to the former until you can build your business.

I only used WebSphere Commerce as an example because it's fresh in my mind at the moment. This isn't about any one software company. All enterprise-class software has the same general characteristics that result in high cost of ownership. All enterprise-class software can perform at a level beyond the capabilities of Open Source alternatives; and in most cases, the Open Source alternatives perform better at low and moderate levels. Those highly scalable and performant products are Blackbirds. I've had to become intimate with several such products from various vendors over the years. I assure you, there's nothing wrong with enterprise-class software as a category. The key questions are whether your firm actually needs that level of performance and has the cash flow to cover the operating costs. Some do and some don't.

There's also an emotional factor at play. Many managers imagine their companies are very large and have very significant processing requirements. Some of them are right. Most of them are not. Even so, I think it's a positive indicator. You can't achieve an ambitious goal unless you are able to visualize success, whether the goal is to qualify for your country's Olympic team or to learn to play the banjo. People who are trying to build a small or mid-sized company to the size of a Fortune 500 firm have to visualize success every day. So, when a software salesman says to them, "If you want to play with the big boys, you have to buy big boy toys," it's only natural that their first question is, simply, "Where do I sign?" They're really eager to get to the top. Sometimes they forget they haven't arrived yet. I'd like to see a bit more thinking happen before the pen comes out to play with the paper.

A few years back I was working at a company that I would consider mid-sized; not large, just mid-sized. It had about 33,000 employees altogether, and had operations in six US states. The IT department comprised about 1,300 people, of whom roughly 300 were software developers. Like any corporate IT department, this group spent around 80% of its budget on operations, infrastructure, and ongoing application support. The remaining 20% was deemed "development," but most of the development work consisted of integrating COTS packages. As a financial holding company, the firm had a number of subsidiaries such as mortgage lenders, banks, and investment firms. For selected business operations, this company genuinely needed enterprise-class solutions. When managers forget the part about "selected business operations," they may become susceptible to The Blackbird Effect. That is exactly what transpired in this case.

A positive example at that financial services company is image processing. They take in a huge volume of paper documents every day, from loan applications to pay stubs to hand-written checks. To get these documents into electronic form quickly enough to support business requirements, they invested well into the 8-figure range in high-end imaging equipment. To consume the output from this equipment, they purchased IBM Content Manager and hired enough staff to customize and support the product properly. I don't mean they bought one copy of Content Manager (or whatever it's named these days). They bought a suite of products around Content Manager, and implemented it all with heavy support for scalability and continuous availability. All of this is very expensive to live with, as you can imagine. But it's a no-brainer for this company. The volume of work they do easily generates enough cash flow to cover the costs. In context, the costs aren't burdensome at all. I don't believe we could have supported the processing requirements of that particular operation by cobbling together a home-grown solution out of Open Source building blocks and long weekends. The company is much better off partnering with a major software vendor that has the resources to support them properly.

Another positive example is the company's ETL processing. That stands for extract-transform-load. It's a very typical sort of requirement in companies that have substantial legacy systems that were built decades ago, were designed to be monolithic, and reside on a range of disparate, incompatible platforms. The company purchased an enterprise-class ETL package, and it solved a multitude of annoying and time-wasting problems in moving data between systems. Yes, they could have built something out of Open Source parts and sweat equity, but it seems very unlikely to me that the result would have been as useful as the commercial solution they chose. Does every company need an enterprise-class ETL facility? Of course not. But this company definitely needed it (and still does).

So far, it sounds as if this company made wise choices regarding enterprise-class software products. Sadly, there are many more negative examples than positive ones. Management decided they needed to build a world-beating technical infrastructure that included the "best of breed" products in every category, just in case the need for them might arise in future. I was either directly involved with, or had visibility into, four of these: WebMethods (which is apparently owned by Software AG now); the Blaze Rules Engine from Fair Isaac; a workflow automation engine made by a local company that may or may not still be in business; and Microsoft BizTalk, a service orchestration platform.

We did have a business case for one of the WebMethods products: The integration server. It was very useful to us. It's really an excellent product. I feel I must reiterate: It's an excellent product, if you really, really need an enterprise-class solution in this category. We did. You might not. Be careful. However, in their quest to have the best-of-breed across the board, management signed up for a multi-million dollar subscription to the full suite of WebMethods products. We really didn't have any business use cases for the other products. So we paid a high annual fee for the right to use software we didn't need. For shelfware.

The rules engine seemed like a good idea at the time. I was one of the people who attended the training course to learn to support the product. What I learned (among other things) was that the real value of a rules engine is the software called the inference engine. To use the inference engine, business rules have to be independent of one another. If the rules are dependent on each other and have to be checked in a specific order, then you set the product to operate in sequential mode. What that means is you've got an if-else structure embedded in the rules engine where most developers won't understand it, and you're side-stepping the inference engine altogether. So, you're getting no value for your money, and you're actually adding needless complexity to your solution. I'll give you three guesses: Were our business rules independent of each other? Very good, you got it on the first guess. Another interesting point is that even if you could invoke the inference engine, the overhead it incurs to build its node structure before it can start evaluating the rules will take more processing time than it's worth unless you have at least 1,000 rules to feed into it. Our worst-case process had 15 rules.
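
A toy illustration of the difference (my own sketch in Python, not Blaze's API): independent rules can be evaluated in any order, which is what gives an inference engine room to optimize; dependent rules collapse into an if-else chain, merely relocated to a place where most developers won't think to look for it.

    # Independent rules: any order of evaluation yields the same actions,
    # so an inference engine is free to schedule them as it sees fit.
    independent_rules = [
        lambda order: "fraud_check" if order["amount"] > 10_000 else None,
        lambda order: "free_shipping" if order["items"] >= 5 else None,
    ]

    def evaluate(order: dict) -> list:
        return [action for rule in independent_rules
                if (action := rule(order)) is not None]

    # Dependent rules in "sequential mode": just an embedded if-else chain,
    # side-stepping the inference engine entirely.
    def sequential(order: dict) -> str:
        if order["amount"] > 10_000:
            return "fraud_check"
        elif order["items"] >= 5:
            return "free_shipping"
        return "no_action"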

After some months with no usage of the workflow automation product, the manager who had signed the purchase order demanded that the next development project to come along had to incorporate the product into whatever solution they built. He didn't care what the solution might be. He needed to show senior management that the purchase had not been foolhardy. The development team tried to comply. They ended up jamming a complete CRUD application into the "custom in-box" of the workflow product. As they added functionality iteration by iteration, it became more and more cumbersome to do anything with the "custom in-box" code. Eventually the technical team lead and the project manager went to the manager who had demanded the use of the product and explained to him that they were removing it from the solution because it was preventing delivery of useful results to the internal customer. The application they were delivering simply was not a workflow automation solution.

In those days, at that company, there was a dream that we would one day have a robust service-oriented infrastructure. To that end, but long before there were any services to orchestrate, the company purchased Microsoft BizTalk. In a replay of the workflow automation example, after some time of non-usage the manager who had signed the purchase order demanded that the next project to come along had to incorporate the product into whatever solution they built. There was no need for such a product in that project. Eventually the team was able to fake the use of BizTalk by writing a user exit and passing request and response messages through that thin piece of code, bypassing all BizTalk functionality. BizTalk was present on the server and appeared to be active. Microsoft even published an article in one of their corporate magazines to describe this highly successful and exemplary implementation of BizTalk.

My point is not to bash any of these products or companies. All the products mentioned are very good. My point is that a product is "good" only when it is used for its intended purpose and at its intended scale. The examples I gave of enterprise-class products at the financial company include two high-flying Blackbirds and four museum pieces. Just like the real Blackbirds, enterprise-class software can't function in other roles. It's either the high-end of the spectrum or nothing.

Are there Open Source alternatives to these products that may be appropriate for companies whose level of operations doesn't rise to the stratosphere? In most cases, yes. And in most cases, there are even scalable and highly available Open Source alternatives. They won't be as heavily tailored to specific platforms, of course. You could build quite a robust e-commerce solution using Open Source web servers, app servers, and web frameworks. Having spent my fair share of time struggling with configuration of enterprise-class products, I'm confident that a good development team could produce a usable solution in less time than it usually takes to get a complicated commercial product up and running properly in production. Open Source products usually aren't heavily customized to take advantage of specific platforms. For that reason, they won't be able to compare with high-end enterprise-class products for truly high-volume workloads. One point I'm trying to make here is that most companies don't have truly high-volume workloads. They think that, say, 5 million transactions per day is high-volume. High-volume would be, maybe, 20 times that. This perception makes them vulnerable to the "big boys" sales pitch, and to The Blackbird Effect.

This isn't about commercial software vs. Open Source software. It's about assessing your real needs and understanding the cost-benefit balance before making significant financial decisions. Don't try to dust your crops with a Blackbird. All you will do is crash. Manage the growth of your company intelligently, and before you know it you'll be getting invitations to play golf with the big boys. Overspend in an attempt to look and act like one of the big boys prematurely, and you'll be playing Putt-Putt, and liking it.

TDD contributes to employee motivation

posted: 17 Jan 2010

I saw this post by Carl Erickson today on the Atomic Spin blog, and I wanted to share it. It makes good sense and bears repeating.

The customer whisperer

posted: 17 Jan 2010

We got a new puppy last year, and he's just irresistibly cute. He had so thoroughly charmed my wife that there was a period of time when he believed he could relieve himself anywhere he pleased. When she scolded him, he put on such a cute face that she couldn't resist picking him up and cuddling him. Therefore, it was perfectly logical for him to relieve himself whenever he wanted to be cuddled. I'm sure most of us would do exactly the same. It's hard to argue with cause and effect when the pattern is repeated consistently.

In reality, the dog is perfectly willing to behave in whatever way we want him to behave, but he can't read our minds. After watching The Dog Whisperer with Cesar Millan ("I rehabilitate dogs and train dog owners"), we realized that we had taught him that it was okay for him to relieve himself anywhere he pleased. Since then, we have taught him otherwise. He's perfectly happy with that. Happier, actually, because he had been receiving mixed messages from us before, and I suppose that's a bit confusing for a dog. Now he understands what's expected of him and he's pleased to do the right thing (or perhaps I should say, to do things in the right places).

And you know, it wasn't really that difficult. All we had to do was change the feedback we were giving him so that he received positive reinforcement when he behaved properly. Now he knows he won't be cuddled when he relieves himself in unapproved locations.

I had an interesting conversation with some of the IT staff at a small company where they are following a home-grown iterative software development process based on principles of Scrum and Extreme Programming. The funny thing is that the IT folks believe they are doing a great job, but their internal customers on the "business side of the house" (not my favorite phrase, but what's a boy to do?) think the IT department is a complete loser. They never deliver the right things at the right times or in the right way. It's quite frustrating for the IT staff, who really are doing the best they can. I'm sure this sort of situation is very rare. ;-)

I was curious, since in my experience an iterative process usually works pretty well. It's not at the bleeding edge of software delivery methods anymore, but it's not a bad way to go and is often a very pragmatic approach in transitional organizations. So I asked them to describe their process in more detail. They explained that at each iteration planning meeting, they take the requirements that have been documented by their customers and break them down into "stories," then they prioritize the stories, decompose them into technical tasks, size the work, and finally commit to completing a subset of the stories.

Well, that sounded fine to me. It's a fairly decent entry-level style of agile development: The technical team and the key customers of the project collaborate to break the requirements down into — er, wait a sec. No, no, no, that's not what we said. The technical team does all that work, in isolation from the customers. The customers can't be bothered with iteration planning. They're too busy with "their own work."

Ah, yes. Well, that's different, isn't it? I smell something naughty. Bad puppy!

So, how is it possible that both "sides" are fully convinced they are doing exactly what they should be doing, even though the results are never satisfactory? My guess is that the technical team has trained their customers that it is okay for them not to participate in the process. When the customers don't show up for scheduled meetings with the technical team, they experience no negative consequences. Well, not until they take delivery of the faulty software, anyway. So, the negative consequences are merely delayed. What they need is to experience mild negative consequences immediately, so that they will have an opportunity to modify their behavior. It's only fair to them, after all. This is consistent with the agile concept of short feedback loops. Nobody promised that "feedback" would always consist of a bucket full of smiley faces.

I suggested that the team not do any work on behalf of any internal customer who chooses not to show up for scheduled planning meetings. They were horrified at the suggestion. But, that means we will "fail!" In what way? I asked. Isn't it obvious? they insisted. We won't deliver anything for a whole iteration! Yes, exactly, I said. That's the whole point. But that's "failure!" they screeched. Is it? I asked innocently. What about delivering the wrong things at the wrong times in the wrong way, not just in one iteration but in every iteration, in every release, in every project, forever? Is that "success?"

An iterative software development process depends on the active participation of all key stakeholders. Someone from the "business side of the house" is responsible for interacting with the technical team to ensure the right things are delivered at the right times and in the right ways. It's part of the process. It's not optional. If everyone isn't doing their part, the process can't help people deliver value.

But we can't say "no" to our customers! they protested. We don't have the power to do that! You aren't saying "no," I explained calmly. You're helping everyone get to a meaningful "yes." Everyone in a company has more to do than they can possibly get done in the allotted time. Therefore, everyone in a company must constantly juggle priorities and make time for the tasks that are most important at the moment. As soon as your customers decide this particular project is more important than some of the other tasks they have to get done, they will make the time to collaborate with your team. As long as you're delivering something without their participation, they will assume their participation is optional. You have to train them to understand their participation is mandatory. Everyone will be happier in the end.

Since re-training our puppy, we've noticed that everyone is happier. The dog is glad to know exactly what is expected and to do what is expected to please his humans. The humans are pleased not to be cleaning up urine and feces constantly, and discovering same in unexpected places and at inconvenient moments. The whole household functions more smoothly and with far less stress. Smells much nicer, too.

Policy constraints

posted: 16 Jan 2010

In Stephen R. Donaldson's trilogy, The Chronicles of Thomas Covenant the Unbeliever, the main character travels between two realities: One when he is awake, which you and I would recognize as "normal," and another when he is knocked unconscious, which seems to happen fairly frequently. Actually, he may have invited a few knocks on the head, since he suffered from leprosy in our reality, and was healthy in the other. While in that other place, he asks a race of intelligent horses to promise they will visit a certain little girl once each year, because she loves them so. There's a bit of a twist, though; the passage of time is different in each of the two realities. When he is in the "real world" for a few days, many years pass in the other reality. On his return there, after having secured this promise from the horses, he finds that the political climate had changed and the horses were on the bad side of the new lord of the land, who was a Bad Guy. They could have run off to a safer location, except that they had to stay close to the little girl's home in order to fulfill their promise to Thomas. Both they and the little girl, who was now not so little, were miserable about it all and none too pleased with Thomas. He thought that since circumstances had changed, the horses would have been justified in breaking their agreement. After all, the agreement had been an artifact of another time, when things were quite different. But the horses, being honorable creatures, didn't take their agreements lightly. They would rather endure persecution, suffering, and death than to break their word. And that's just what they did.

Theory of Constraints, as the name implies, considers constraints in a process that impede the delivery of value units to customers. Most of the time, constraints are physical. For instance, in a manufacturing operation, one of the machines on the line will be the bottleneck; it will have the lowest capacity among its mechanical friends to move unfinished goods along to the next station on the line. Its capacity sets the pace for the entire line. (It "beats the drum," if you like the drum-buffer-rope metaphor.) When you get to Step 4 of the Five Focusing Steps, "Elevate the Constraint," the thing to do would be to increase the capacity of that station.
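
To make the pacing arithmetic concrete, here's a minimal sketch in Python. The station names and capacities are invented for illustration, not taken from any real line:

    def line_throughput(capacities):
        # The line can move no more units per hour than its slowest station.
        return min(capacities.values())

    stations = {"cutting": 120, "welding": 45, "painting": 90}  # units per hour

    print(line_throughput(stations))   # 45: welding is the constraint

    # Step 4, "Elevate the Constraint": add capacity at the bottleneck.
    stations["welding"] = 100
    print(line_throughput(stations))   # 90: painting is now the constraint

Note that elevating one constraint simply exposes the next one, which is why the Five Focusing Steps end by sending you back to Step 1.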

Physical constraints are not the only kind. There are also so-called policy constraints. A policy constraint might be a rule, procedure, guideline, or even just an assumption shared by consensus, that leads people to take actions that impede the delivery of value units to customers; or, in plainer English, causes unnecessary difficulties in meeting the organization's goals. There is nothing that physically prevents progress toward the goal; it's just a rule or a custom or some such thing that has outlived its usefulness, or that is applied out of its intended context, or that perhaps never should have been instituted in the first place.

If we look at the horses' situation through the lens of Lean thinking, we might say their agreement with Thomas to make annual visits to the little girl was a policy constraint. We might say a reasonable goal for the horses under the circumstances would be: "Live someplace where the Bad Guy can't hurt us." That goal didn't exist at the time they made the agreement to visit the girl. Their goal changed as circumstances changed. The problem is, they didn't change their policy to correspond with their new goal. The old policy made it impossible for them to achieve the new goal. They would have had to live so far away that it would have been infeasible to visit the girl on an annual basis. Hence, they and the girl were miserable about it all and none too pleased with Thomas.

I read an article in Consumer Reports for November 2009 (page 6) entitled "Skimpy broadband access." It tells the sad tale of rural schools in Oregon, and the difficulty they face in following the statewide mandate that exams are to be taken online by all students at the same time. The reason rural schools have difficulty with this policy is that their communities lack adequate Internet infrastructure to support the load. According to the article, "Downloads take ages; the system crashes; students taking state tests online, as is required, cripple the network."

I didn't contact the Oregon department of education to find out the underlying details. I'm just reasoning through it based on the information in the article. So, if I've made faulty assumptions, I apologize. But here are my assumptions: (a) The state hosts the online exams at a central location; (b) For most school districts, there is no problem in accessing the central location and the load on the network is not excessive when all their students are taking an exam online; (c) For schools located in communities that lack adequate Internet infrastructure, the policy that requires their students to log onto the central exam server is a burden that prevents them from being able to take exams properly.

You might say, as the editors of Consumer Reports said in the article, that the problem is inadequate broadband service in rural areas of Oregon. That is a physical constraint, and certainly represents an opportunity for improvement. The thing is, it isn't an improvement that can be made in time to help students take their next online exam. It would take millions of dollars of investment in physical infrastructure that would not be completed for months or years, plus monthly fees for broadband service on an ongoing basis, charged to school districts with limited budgets. The more immediate (and more solvable) problem is the policy that exams have to be centrally hosted.

One can imagine perfectly valid reasons for the policy. If all exams are taken at the same time, then it is very unlikely that anyone can share the answers before everyone has taken the exams. If all the exams are hosted on the same server, then we can be assured that all students will receive the same exams. If all exam results are stored on the same server, no effort is required to correlate the results. It all makes perfectly good sense...until you consider the effect of the policy on students in rural schools.

The requirement that all exams must be taken simultaneously and hosted from a central server is a policy constraint. What if schools were allowed to host the exam software on local servers, and transmit the exam results to the central site asynchronously? All the good and valid reasons for the original policy still hold, and the revised policy would achieve the same goals. The change would have no effect on school districts that already have sufficient Internet infrastructure, and it would enable students in rural districts to have the same exam experience as students in more developed areas.
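
In software terms, the revised policy is just a store-and-forward design. Here's a minimal sketch of the idea in Python; the names and data shapes are entirely my own invention, since I know nothing about Oregon's actual systems:

    import json
    import time
    from collections import deque

    pending = deque()   # exam results captured on the school's local server

    def submit_result(student_id, answers):
        # Taking the exam never depends on the WAN link.
        pending.append({"student": student_id, "answers": answers, "ts": time.time()})

    def flush_to_central(send):
        # Forward queued results whenever the link allows; retry later on failure.
        while pending:
            record = pending[0]
            try:
                send(json.dumps(record))   # e.g. an HTTP POST to the state's server
            except IOError:
                break                      # link is down; keep the queue and try later
            pending.popleft()

    # Usage sketch: capture a result locally, then forward when the link is up.
    submit_result("student-001", {"q1": "b", "q2": "d"})
    flush_to_central(print)

Students interact only with the local server, and the central site eventually receives exactly the same records it would have received under the original policy.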

Perhaps in your organization there are impediments to meeting your goals that aren't physical, but merely a question of policy. Surely a policy is easier to fix than a physical constraint. Why not assess your situation and see whether you can identify one or more of these easy-to-fix problems? Then talk to your management about changing the policies. (That should be easy, right?)

In praise of subjectivity

posted: 16 Jan 2010

When people want assurance that some proposed new technique or method of delivering software might help them, they often ask to see the results of formal studies; either academic studies carried out under controlled conditions, or assessments of real situations conducted by reputable industry analysts. People seem to be skeptical of so-called "anecdotal" reports of the usefulness of the unfamiliar technique or method. Oddly, people seem willing to trust the conclusions of those who do not actually deliver software for a living, while unwilling to trust the direct experience of those who do so.

Let's think about this facet of human nature in the context of introducing contemporary software development techniques and methods into traditional organizations. One of the activities software development traditionalists consider fundamental and necessary is the fine-grained estimation of the time required to complete individual tasks. One of the steps developers are asked to take in the course of their work is to guess how much time they will need to complete each task they will carry out in the near future. Large-scale projects are estimated from the bottom up, beginning with individual developers' guesses about the time they will need to complete each task in the work breakdown structure (WBS).

With contemporary methods, we treat this aspect of the work quite differently. The most mature teams dispense with fine-grained task estimation altogether. Short-term work planning is done without any task estimation. (Of course, larger-scale, coarse-grained estimation of the significant "chunks" of an initiative is still important; but that level of estimation is done using completely different and more-formal techniques than fine-grained task estimation by developers.) Sometimes this approach to short-term planning is called "naked planning." The thinking is that once you know which work items are the most important ones to complete next, it doesn't matter whether any given small technical task will take ten minutes or two days. Either way, it is the next piece of work to be done. Period. Therefore, it makes more sense to go ahead and do the work than to spend any time at all pondering how long the work might take. All that pondering and guessing just doesn't help you get the task finished. It's a waste of time.
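
Naked planning is almost embarrassingly simple to express in code. A minimal sketch in Python, assuming the backlog is kept in priority order:

    from collections import deque

    backlog = deque(["story A", "story B", "story C"])   # kept in priority order

    def next_work_item():
        # Ten minutes or two days: either way, it's the next piece of work.
        return backlog.popleft()

    print(next_work_item())   # "story A"; no estimate was needed to decide

There is no estimate anywhere in the plan; the only hard part is the discipline of keeping the backlog properly ordered.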

Less mature teams, or teams that must work in an environment in which their task list (or Product Backlog, in Scrum terms) is not properly maintained (or "groomed," in Scrum terms), may use a technique known as "relative sizing" (or some similar term) rather than time-based task estimation. Through the experience of working together on the project, the team arrives at a consensus (often unstated) of the relative "size" of any given work item, based on the work item's description in the form of a User Story, MMF, Use Case Scenario, or similar artifact. Over time, by measuring the number of work items or relative points or relative units (terminology varies) the team can deliver in a given span of time (a week, a month, an iteration, a Sprint, or what-have-you), we can predict how much work the team will be able to deliver under normal circumstances. It is a useful technique for release planning in situations when the work items are not decomposed into similarly-sized pieces and kept flowing smoothly and regularly through some process based on continuous flow or time-boxed iterations. Just talking about flow or just having scheduled iterations doesn't automatically achieve that. If the work items aren't properly prepared before the day comes to begin working on them, then the team needs a technique to distinguish between large and small tasks so that they can plan their work. That's the reason I call relative sizing "less mature" than naked planning: There is a missing piece to the puzzle - what a Scrum practitioner would call "backlog grooming." Without it, naked planning becomes problematic.
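
The forecasting arithmetic behind relative sizing is equally simple. A sketch in Python, with invented numbers:

    import math

    history = [23, 19, 22, 20]               # points delivered in the last four iterations
    velocity = sum(history) / len(history)   # 21 points per iteration, on average

    remaining_backlog = 170                  # points not yet delivered
    iterations_needed = math.ceil(remaining_backlog / velocity)

    print(f"velocity ~{velocity:.0f} points/iteration; "
          f"about {iterations_needed} iterations remaining")   # about 9 iterations

The forecast is only as good as the consistency of the team's sizing, which is one more reason the technique depends on a reasonably well-groomed backlog.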

Now, it happens that most of the people who are skeptical of "anecdotal" reports of the value of a given technique or method are, at the same time, quite satisfied accepting the highly subjective time-based task estimates provided by their development teams. They routinely take these subjective guesses and roll them up into larger-scale estimates for releases and projects. (This is not, by the way, one of those more-formal techniques I mentioned for large scale estimation of projects. This is nothing more than accumulating a bunch of subjective estimates, with all their inherent errors, padding, duplication, and variances based on individuals' skills, and treating the total as if it were meaningful...and then betting the company's future on that number. It's one of the reasons traditional methods of software delivery resulted in the dismal statistics reported over the years by industry analysts such as Gartner, Forrester, and the Standish Group.)

So, if a person is willing to bet the farm on completely subjective numbers (task estimates), then what's the problem accepting practitioners' reports of how a given technique or method added value in a real-world case, based on their personal experience? In what way is an "objective" study, conducted by people who do not deliver software for a living, more reliable than the anecdotal report of an actual software professional? I sense a double standard, here. If one subjective report is good enough for you, then so is another. On the other hand, if subjective reports by real practitioners of the usefulness of software development methods are not satisfactory, then you should also stop accepting your developers' fine-grained, time-based, subjective task estimates, and adopt one of the flow-based or timebox-based methods that provide objective information about delivery rates. As the saying goes, "You can't have your cake and eat it, too."

Another reason to love commercial software

posted: 08 Jan 2010


Godzilla vs. Megalon (or: Scrum vs. Kanban)

posted: 04 Dec 2009

In the ongoing battle between the competing branded solutions, Scrum and Kanban (which is what I think Jean alludes to here: Escalation is killing our healthy conflict in agile), proponents of one method seem to have made a sport of denigrating the other. IMHO this sort of thing is counterproductive. Some examples:

1. Myths about Scrum promulgated by proponents of Kanban

Example 1

From Twitter @bobsarni, December 2009: "1 of the powers of lean is it redefines role of management instead of accommodating it by separating team from it (a la scrum)."

My reaction

This is straightforward misinformation. Scrum does redefine the role of management at the project and program levels, asking traditional project managers to shift into a role similar to the Scrum role called "ScrumMaster," and program managers or business unit managers to shift into one similar to the Scrum role called "Product Owner."

In mixed environments where agile methods have been introduced in a bottom-up fashion, it is common for agile teams to be insulated from the organization's traditional management so that they can attempt to apply agile development without undue interference. Typically, the team's project manager performs the role of insulator. This is a common situation, and may be one of the sources of this particular misconception about Scrum.

One of the lessons the agile community has learned over the years is that a bottom-up implementation without any senior management support is doomed to failure. When the agile implementation is approached in a comprehensive way, with both top-down support and bottom-up buy-in, the chances of success are much greater. In these cases, there is no attempt to separate the team from management; management is part of the process, as it should be.

With a lean implementation, it is more natural to include management in the change initiative from the beginning, since lean applies to end-to-end process improvement rather than to software development activities specifically. Lean offers no mechanism to separate a team from its own management or from the rest of the value stream, because value stream optimization is the whole point and that sort of separation would make no sense. This may be one reason why the phenomenon of insulated development teams is not seen very often in Kanban shops.

In any case, separation from management is not a basic feature of Scrum or of any other agile process. It is not a Scrum "problem" that is "solved" by Kanban.

Example 2

Vikas Hazrati: Wrong and Right Reasons to Apply Kanban: "Scrum and XP, usually do not release in the middle of the sprint. This is not the case with Kanban."

My reaction

This is generally a balanced article. However, on this point the author overlooks the fact that mature agile teams release whenever it makes sense to release. He does, at least, write "usually," so he cannot be accused of making a blanket generalization. Even so, the point merits clarification.

In practice, potentially-shippable solution increments accumulate in a staging environment where stakeholders can test and review the solution so that they can provide feedback to the development team. Sometimes system-level QA testing occurs in this environment, as well. Releases may occur after several iterations or in the middle of an iteration, depending on what makes sense technically and business-wise.

For a mature agile team, the timeboxed iteration provides the "heartbeat" for iterative short-term planning and for periodic reflection on work processes for the purpose of continuous improvement, as called for in the Agile Manifesto. (Scrum defines a ceremony called a Retrospective to support the latter.) The agile "heartbeat" is equivalent to the notion of "cadence" in Kanban development. Immature agile teams treat the timeboxed iteration rigidly, while mature teams in effect separate their development cadence (set by the iteration length) from their release cadence (determined by pragmatic factors). Many agile practitioners may not be familiar with that terminology, and many Kanban proponents may not have first-hand experience on a mature Scrum team; hence a high probability of misunderstandings.

The reason the timeboxed iteration is canonically treated as a rigid box for incremental delivery is that originally the timebox was a mechanism to break organizations free from their sequential SDLC addiction, somewhat forcibly. I think it is important to remember that is only one stage in the ongoing maturation of an organization or team. It is not an eternal commandment or a necessary characteristic of an agile delivery process. Neither is it a formal feature of Scrum, which calls for potentially shippable solution increments to be produced in each sprint, and not necessarily for an actual release in each sprint. Scrum also does not forbid mid-sprint releases. It's all about what makes sense to the business.

Example 3

(Widely repeated): Scrum and other agile processes don't provide for WIP control

My reaction

One of the basic techniques in Kanban to promote continuous flow is to limit work-in-process (WIP) explicitly. We tune WIP limits in key states of development as a means to exploit the constraint; one of the Five Focusing Steps of ToC. When we have elevated the constraint, we revisit the WIP limits and adjust them as appropriate. Scrum doesn't talk about WIP explicitly, which may cause the careless reader to conclude there is no mechanism to control WIP.
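
For readers who haven't seen an explicit WIP limit in action, here's a minimal sketch in Python; the column name and the limit are invented:

    class Column:
        def __init__(self, name, wip_limit):
            self.name, self.wip_limit, self.items = name, wip_limit, []

        def pull(self, item):
            # Refuse new work once the limit is reached; finish something first.
            if len(self.items) >= self.wip_limit:
                raise RuntimeError(f"{self.name} is at its WIP limit of {self.wip_limit}")
            self.items.append(item)

    dev = Column("development", wip_limit=3)
    for story in ["story 1", "story 2", "story 3"]:
        dev.pull(story)
    # dev.pull("story 4")  # would raise: the limit forces finishing over starting

Tuning that one number in key states is the "exploit the constraint" step mentioned above; revisiting it after the constraint has been elevated closes the loop.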

While the canonical textbook description of Scrum does not mention WIP as such, in practice the agile community has learned that work flows more smoothly when teams drive each User Story through to completion rather than starting all the User Stories and trying to context-switch between them throughout the iteration. When teams focus on one or two User Stories at a time, we call it "swarming." Team members or pairs "swarm" a single story (or two at a time to allow for temporary blocks) by dividing various tasks between them. This practice has the same general effect as explicit WIP limits.

Any basic description of Kanban software development will include a discussion of WIP limits. A basic description of Scrum will not mention it. A person who has only read about Scrum or who has not been a member of a mature agile team may not be aware of "swarming," and may conclude that agile methods in general and Scrum in particular do not offer a mechanism to limit WIP. That may be the reason this particular myth arose.

Swarming came about as a response to the problems of excessive context-switching; WIP limits came about as a means to promote continuous flow. So, Scrum and Kanban arrived at nearly identical conclusions from two different directions; two different techniques that have the same general effect. An interesting phenomenon, IMHO.

2. Myths about Kanban promulgated by proponents of Scrum

Example 1

(Widely repeated): Kanban doesn't call for continuous improvement. It's just about pumping the work through as fast as possible.

My reaction

First, Kanban focuses on (a) value, (b) continuous flow, and (c) eliminating waste. If that isn't continuous improvement, then I don't know what is. Second, Kanban is based on lean thinking. If we use the Object Oriented paradigm (loosely, for discussion purposes only), we could say Kanban is a "subclass" of Lean. As a subclass, Kanban inherits all the attributes of Lean. Lean includes Theory of Constraints. Therefore, Kanban software development also includes Theory of Constraints, even if summaries of Kanban software development processes don't constantly slap the reader in the face with it. Theory of Constraints defines a technique called the Five Focusing Steps, which is a mechanism for continuous improvement. In fact, then, Kanban software development has a rigorous mechanism for continuous improvement built in.
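
Taking the analogy literally, just for fun (these classes model the argument, not any real framework's API):

    class Lean:
        # Theory of Constraints comes bundled with lean thinking.
        def five_focusing_steps(self):
            return ["identify", "exploit", "subordinate", "elevate", "repeat"]

    class Kanban(Lean):
        pass   # inherits the continuous-improvement mechanism "for free"

    print(Kanban().five_focusing_steps())   # nothing was lost in the subclass

The point of the joke is serious: a subclass doesn't have to restate an inherited method for the method to be there.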

What is the origin of this myth? I think it may have arisen because Scrum explicitly defines a continuous improvement mechanism, the Retrospective. Sometimes called a "heartbeat retrospective" to distinguish it from the conventional post-project retrospective (or "post-mortem"), this Scrum ceremony is the event where team members and stakeholders identify opportunities to improve the delivery process, and agree on concrete actions to realize improvements. Other agile practices not specific to Scrum, such as the daily stand-up (Scrum calls it the daily scrum) and pair programming, also offer opportunities to apply continuous improvement. Descriptions of Scrum nearly always include references to the Retrospective, while descriptions of Kanban development usually don't mention the Five Focusing Steps (because ToC is assumed, not because it is missing). Thus, the careless reader may get the impression Kanban doesn't provide any mechanism to support continuous improvement efforts.

Example 2

(Widely repeated): Kanban accepts the status quo and any existing impediments

My reaction

I think this myth came about for reasons similar to the previous one. Scrum defines a role called ScrumMaster, one of whose duties is to remove impediments from the team's path as soon as the impediments have been identified. During the "daily scrum" ceremony, team members state what they have worked on since the last daily scrum and what they intend to work on next, and mention any impediments to progress on the work items they are handling. The ScrumMaster is expected to act on these impediments immediately so that they will not prevent delivery of the team's commitments for the current sprint.

Kanban doesn't explicitly address "impediments" in this way. Instead, the overall focus on continuous flow and elimination of waste leads to gradual elimination of systemic impediments through the mechanism of the Five Focusing Steps. In the shorter term, impediments that cause delivery delays are treated as urgent work items that may take precedence over planned work. Because Kanban handles this issue differently than Scrum, a person who is unfamiliar with lean thinking may conclude that Kanban happily accepts any impediments that may exist. This is a misconception.

If anything, Scrum deals with short-term impediments quickly, but lacks a mechanism to eliminate systemic impediments. For instance, if a given stakeholder has not provided the team with requested information, the ScrumMaster will go to that person's office and get the information; but nothing defined in Scrum addresses the underlying organizational issues that cause people to fail to provide information in a timely manner. Scrum does emphasize cultural values of trust and transparency, which ought to help with the free flow of information within the organization, but it does not provide any concrete techniques to eliminate specific systemic impediments. So, the ScrumMaster will probably have to visit many offices and collect many pieces of information, again and again. Because Kanban is based on lean thinking, ToC problem analysis tools (also known as the Systems Thinking tools) would expose the underlying organizational issues so that the root cause could be addressed, and the general problem of poor information flow could be solved. 

3. Attempts to find what is useful in both

Even as the agile community devolves into two camps engaged in a useless circular debate over misconceptions, some people are interested in finding the best ways to deliver customer-defined value in a wide variety of circumstances. They are finding value both in conventional agile methods and in the emerging Kanban-style methods. Here are a few examples:

Henrik Kniberg: Kanban vs Scrum - a practical guide
http://blog.crisp.se/henrikkniberg/2009/04/03/1238795520000.html

Kenji Hiranabe on InfoQ: Kanban Applied to Software Development: From Agile to Lean
http://www.infoq.com/articles/hiranabe-lean-agile-kanban

Kenji Hiranabe on InfoQ: Visualizing Agile Projects Using Kanban Boards
http://www.infoq.com/articles/agile-kanban-boards

Chris Sims on InfoQ: Are Kanban Workflows Agile?
http://www.infoq.com/news/2009/04/is-kanban-agile

Jon Arild Torresdal on InfoQ: The Current Direction of Agile
http://www.infoq.com/articles/current-direction-of-agile

I can't help wondering whether some of these mis-statements are driven by vested interest. When someone makes statements about either Kanban or Scrum that sound fishy, I ask myself whether that person stands to gain in some way by promoting one or the other process framework. Is he/she selling training classes, consulting services, software products, or coaching services based on the approach he/she is promoting? If so, then I have to take the negative comments about the "other" approach with a grain of salt.

Spoiler

posted: 21 Nov 2009

A friend of mine tweeted the other day:

So now that Java 7 is getting closures at the end of 2010, it'll only be behind C# by what, 5 years? Better late than never.

To which I half-jokingly replied:

So now that C# can run on more than one platform thanks to mono, it's only behind Java by what, 5 years?

In 1986, I bought a Toyota MR-2, a small, lithe, mid-engine two-seater. It could take curves like a slot car. More fun to drive than any other car I've owned. It was red, and it had a rear spoiler. I liked the spoiler. It made me feel cool.

The purpose of a rear spoiler is to hold the back end of the car down on the road when you're driving really fast. The 1986 Toyota MR-2 could briefly flirt with 95 MPH, going downhill, with a tail-wind, and given five minutes to build up speed gradually, provided you had no passenger aboard and you removed everything heavy from the car, and while driving you wished with all your heart that the speedometer pointer would move just one more millimeter, oh please please please. In other words, the spoiler could never have any aerodynamic effect on the vehicle. It was only a decoration. Even so, I liked it. It made me feel cool.

Support for closures in a programming language makes for clean, concise, readable code. All good.
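
For readers who haven't used closures, here's the flavor of what they buy you. Python stands in here for whichever language wins the feature race:

    def make_discount(rate):
        def apply(price):
            return price * (1 - rate)   # 'rate' is captured from the enclosing scope
        return apply

    ten_percent_off = make_discount(0.10)
    print(ten_percent_off(200.0))       # 180.0

Without closures you'd write a small named class just to carry 'rate' around; with them, the behavior is built inline.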

Consider two hypothetical programming languages such that:

Language A

  • Supports closures, and
  • Can only run on one platform, so that customers who use the language for mission-critical apps are locked into a single operating system vendor.


Language B

  • Doesn't support closures, and
  • Runs on all mainstream platforms, so that customers who use the language have flexibility to change their technical environment as their business needs change or as the relative cost of ownership of different platforms changes.


Which of the two programming languages is "better?"

My friend might say that Language A is better than Language B. Language B is "behind" Language A because it doesn't (yet?) support closures.

I might say that closures are a great feature, or at least an interesting feature, but since Language B offers greater flexibility to customers, it yields better business value overall than Language A.

I've been playing with C# on Mono on a Linux system recently. C# is a cool language, if you consider programming languages outside of any business context, as if you were playing with toys in a toy shop. If a toy will only work in one section of the toy shop, who cares? It's only a toy, after all. It isn't as if the success of your business depended on it. Plus, it's fun to argue with the other children about which toys are the most fun. I suspect a lot of people who choose software engineering as their profession choose it mainly for the opportunity to argue, and only secondarily for the opportunity to write code. The relative amount of time software developers spend arguing with one another about pleonastic niceties of programming style or compiler features seems to support that suspicion.

Mono doesn't (yet?) support .NET completely seamlessly; a C# application has to be written specifically for Mono if you intend to run it on any platform other than Microsoft Windows. Your staff won't be able to run their familiar Visual Studio tools on anything other than Microsoft Windows, even if your target platform is something else. So, we don't quite have cross-platform support for C# that is ready for prime time. We will. Novell is behind Mono, and the product is maturing rapidly. If I had to make a prediction, it would be that Java will have support for closures long before C# becomes a truly viable cross-platform tool. Or maybe it will be the other way around. Eventually, both things will happen and the argument will settle itself. If you enjoy that sort of argument, take heart: There will be some other cool language feature to argue about. Before there were rear spoilers, I hear that people used to like fins on the back of their cars.

As a developer, I like closures. They make me feel cool. And, like any faithful Monty Python fan, I like a good argument. As an IT consultant with the responsibility to make well-considered recommendations to clients, I have to look at programming languages from another angle. The cost of change can be prohibitive when your organization is nailed to a single hardware or software vendor. Those of us who have earned a few gray hairs have been there and done that, probably more than once. Be careful about selecting programming languages or other software development tools that will lock you in, even if they have nifty features that make you feel cool. It's more important to manage the total cost of ownership of your IT assets than to have the coolest programming language in town.

P.S.

I'm talking about conventional business applications, BTW. Given the literal-mindedness of some of the people who have posted comments here, I guess I have to spell that out. Embedded apps, like flight control systems and toll plaza systems, and software that requires low-level access to devices, like video games, have to be closely tied to the hardware. That is not the context of this post. Nearly all conventional business apps are just CRUD apps. There is no valid reason to tie a CRUD app to any particular OS, or any particular desktop, or any particular back-end data store. Building a solution in that way only imposes a vendor-lock on your customers while providing no value-add in exchange.

When success = failure

posted: 18 Nov 2009

There's a lot of buzz in the community about how to drive high productivity at the grassroots level, and how great it is when we can help develop a high-performing team. There are also those in the community who call for a heavy-handed approach to change; sometimes they use the term "shock therapy" to describe a process of instituting radical changes in team structure and organizational workflow in a short time. Some consultants speak of improvements in productivity in the range of 4x to 10x or greater. Although there has been recent movement away from the term, some consultants still seem to like the idea of hyperproductive teams. People still talk about "spinning up" high-performing teams.

In thinking back over a couple of experiences in team coaching and organizational change, I've begun to realize that there is more to success than just spinning up one or two "hyperproductive" teams. One experience was at the first company where I learned agile methods in 2002, and the other was earlier this year, 2009. In both cases, we achieved astonishing success in creating one team or a small group of teams that performed far outside the norm for their respective organizations.

In each case, the teams in question pushed the envelope of state of the art agile practice. In 2002, that meant two-week iterations, relative story sizing in terms of abstract points, task decomposition, and a daily burndown of tasks. In 2009, it meant a one-week development cadence with an independent release cadence, stories split vertically into small enough chunks that no task decomposition was necessary, no daily burn tracking, "naked planning," fully generalized skill sets, and self-management to the extent they screened and auditioned job candidates and handled their own personnel issues. In both cases, it meant a disciplined approach to Extreme Programming practices and an approach to development that today we could call software craftsmanship. The point is, these teams rode the bleeding edge of whatever the state of the art was at the time. Indeed, one could justifiably say they sharpened that edge. They definitely would have qualified for the label, hyperproductive.

So, these were roaring successes, right?

I used to think so. But I have to wonder: If they were successes, then why did "agile" peter out at the first company after an amazing three-year run of successful projects? Why are the people at the second company unhappy with the new way of working, despite the fact they deliver more business value than before and they can pump far more work through their process than they previously imagined possible?

On reflection, I think the basic reasons are these:

  1. Local process optimization
  2. Insensitivity to emotional factors


Case 1

In 2001 I joined a mid-sized financial services company with about 33,000 employees, operating in 6 US states, and with an IT department of about 1,300 people. It was a nightmare of waterfallish processes, crushing bureaucracy, cartoonish fake smiles, dehumanizing toadying, and walled-and-moated functional silos. Less than a year later I was fed up and ready to leave both the company and the IT field. At the urging of a colleague, I joined a small group of employees who wanted to analyze the business value the IT department added to the enterprise, and determine how to improve things. Along the way we discovered the Agile Manifesto. We enlisted help from ThoughtWorks to teach us agile methods. Long story short, in a few months we had built up an internal agile practice that could handle several projects at once, and we were delivering value regularly. The "business side of the house" loved our group. Line of business managers started to demand that the IT department carry out their initiatives using our methods. Things just got better and better...

...for a while. There are many war stories to tell from this time period. For purposes of this post, let's skip to the end of that three-year run. As soon as they could, IT management "took control" of the agile group. Within a week, they had (a) cancelled the practice of locating technical teams on-site with their internal customers; (b) scattered the agile practitioners around in the traditional organization, to dilute their influence; (c) conjured negative performance reviews of key agile proponents in the organization as a way to encourage them to leave the company; some were even put on probationary status, to pressure them to leave; (d) assigned some of the top performers to dead-end jobs babysitting relatively unimportant legacy apps; another tactic to encourage them to leave; and (e) re-established the waterfall process (decorated with a few agile buzzwords) as the sole way of delivering projects for internal customers. One year later, there was nothing left of "agile" but the word, much as the Cheshire Cat faded away and left only his smile behind.

Why would people do this, to the detriment of their own company, the organization whose success fuelled their own bonus programs and retirement plans? At the time, the reasons were incomprehensible to me. I chalked it up to incompetence or, in some cases, maliciousness. In hindsight, the reasons are obvious: We had embarrassed IT management. Where they insisted something would be too costly, too time-consuming, or technically infeasible, we went ahead and did it at reasonable cost, in a short time, and with technical excellence. We demonstrated that everything the traditional IT managers believed to be true of software development was wrong. We went around them, and worked directly with the lines of business. We ignored their standards and procedures. We openly disrespected the chain of command, the org chart. We went around the HR department, as well, and recruited our own people to join the agile teams. This made HR feel disrespected, too, since our behavior demonstrated that we did not trust them to screen job candidates adequately. To add insult to injury, we did all that while wearing blue jeans and working four-day weeks...at a traditional financial institution with a very conservative organizational culture.

It was all amazingly productive and added tremendous value to the enterprise. It was also absolutely intolerable. One year after IT management "took control" of agile, only 4 out of 60 agile practitioners were still employed there. Those four had settled back into quiet, secondary roles in the organization where they could accumulate long-term retirement benefits as long as they kept their mouths shut and didn't make any waves. Everybody (who was anybody) was happy again.

Local process optimization. Our group was physically and philosophically separate from the rest of the IT department. It grew up as a skunkworks operation and only survived as long as it did because our customers wouldn't allow IT management to crush it openly, since they were realizing value from our work. We did not improve the company's IT processes across the board. We only optimized our own work, in isolation from the other 1,300 IT professionals in the company. We were so fully isolated that most of those people probably never even realized anything unusual was going on. If you asked them about it today, I'll bet they wouldn't even know anything like this had happened in 2002-2006. The business managers who learned about the value of agile methods during that time moved on to other jobs, or retired. The organization itself has not gained or retained any deep tacit knowledge of agile practices or their value, and has not fundamentally changed its culture.

Insensitivity to emotional factors. We simply ignored most of the individuals whose support we needed in order to achieve long-term process improvement. They responded in a perfectly normal human way: They bided their time until they could crush the agile initiative, and then they did so quickly and firmly. They "punished the innocent" and made sure anyone who had been part of the agile group either sat down and shut up, or left the company. It's possible that those managers left the company eventually, too, and that a new crop of managers has experimented with agile or lean methods again since 2006. If so, they have not benefited from anything we did in 2002-2006, because internal history was rewritten to expunge all memory of it. This huge waste, this huge loss, is a direct result of our insensitivity to the emotional needs and politically-motivated fears of key management personnel. We succeeded and we failed with the same stroke.

Case 2

Early in 2009 I was engaged, along with another agile coach, to help a certain company improve its business processes and to help a couple of technical teams get up to speed with Extreme Programming practices. This is a smaller company than that in Case 1, with under 1,000 employees working in only two geographical locations. We were working with teams and stakeholders at one location, and most of the technical teams worked at the second location. There, they had been practicing Extreme Programming for five years already, under the guidance of an internal agile coach and with the support of their own management team.

The two of us quickly determined that the main problem in the organization was not the technical teams' use of Extreme Programming, but rather backlog grooming upstream from the technical teams. We focused mainly on that, and did not pay any attention to the teams at the second location. In the meantime, I coached the technical teams at the first location in agile methods, which were new to them.

Long story short, in about three months the teams that were new to agile methods were applying more-advanced techniques and delivering value more effectively than the teams with five years of XP experience behind them. They were using the methods I mentioned earlier, which represented state of the art practices circa 2009, and were functioning at a hyperproductive level as compared with the organization's general IT performance. They were applying agile values and principles thoughtfully and crafting their own practices accordingly, rather than following "rules" from a book by rote. After five or six months, however, they were floundering. The team was far out of sync with the pace of work the rest of the organization could sustain.

Within their own workstream, they were able to pull work much faster than the stakeholders could keep the backlog up to date. This caused a lot of dissonance in customer collaboration. It also forced the team to revert to story sizing. Whereas relative sizing was a state of the art practice in 2002, as described in Case 1, in 2009 it is no longer a state of the art practice. Here, a team that was already skilled at commitment-based short-term planning without fine-grained estimation had to drop back a step and start sizing user stories, simply because the stories were not in appropriate shape for them to pull; they did not have any idea how big the stories might be or any inkling of potential technical risks until the very day they were expected to start working on them. This was a source of frustration for the team, and also a source of stress for stakeholders, who felt as if they were "failing" in some way although they really didn't have time to do any more than they were already doing.

Local process optimization. We had a single team functioning at a level of performance that was out of sync with the rest of the organization. We introduced change too rapidly and in too localized a way, and caused friction in the organization. This particular team was capable of absorbing that much change that quickly, but because they were out of sync with the rest of the organization their very success created stress and frustration for them and their customers. When they had to back off from leading-edge practices, it made them feel as if they had "failed" with agile, in some sense. That is especially unfortunate in view of the fact that even after they slowed down they were operating at a level of agility beyond that of most agile teams in industry.

Insensitivity to emotional factors. In hindsight, I think this was the area where we failed most spectacularly. We failed on two levels: First, we drove change forward as rapidly as possible and ignored the emotional impact of change on the team and their immediate collaborators; second, we ignored the people at the second location (at least, initially), who had built a sense of personal pride and an internal reputation in the company as agile experts in the course of five years of nurturing XP teams there.

The team members could see that they were performing better than they had done previously, yet they were not happy about it. They didn't have time to be happy about it. They had lost their individual cubicles in favor of a collaborative workspace geared for pairing; the Furniture Police did not consider all the aspects of an effective team work space when they set up the team area, and did not consult the team about their needs. As it turned out, this single factor had a significant impact on team morale. The last time I checked in with them, they had lost one senior team member and two others were emotionally "checked out," showing up for work but not really feeling engaged. They are still delivering value at a higher rate than any other team in the company, but they are doing it mechanically, without a deep personal investment in the process.

The in-house agile experts felt threatened by the new ideas the consultants brought. We caused them to feel that way by failing to engage them properly from the beginning. With the consultants now gone, the in-house agile experts have reasserted themselves as the thought leaders for process improvement. They are imposing "rules" such as they use with their own teams, based on state of the art agile circa 2004. That is when they started to introduce XP, and it is the style of agile work they understand best. In hindsight, I think we should have approached change by engaging these individuals from the start and introducing change at a far more gradual pace. We didn't help them understand how agile practices have evolved in the industry since 2004. As it is, the pattern that is emerging is similar to that at the company in Case 1: A brief flurry of localized extreme high performance followed by a breakdown and re-establishment of the organization's previous equilibrium state. Cold comfort: At least in Case 2 the previous equilibrium state is superior to that in Case 1.


Thinking back over these experiences leads me to another question: One year after the boastful consultants have left a client organization, what does the process look like at that organization? Did the changes "stick?" Consultants often quote client testimonials in their marketing material. What would the clients say one year, two years, or five years after they released those quotes?

The question reminds me of a conversation I had with the CEO of a previous employer, an agile services consultancy. There was one client in particular we used prominently in our marketing materials, who had given us a glowing recommendation. Two years after that successful project, we were reviewing the list of upcoming engagements and prospective engagements one day and I noticed we had never obtained any follow-on work from that particular happy client. I asked him why, if the client had been so happy with the results, they had never brought any more work to us.

His answer was along the same lines as this blog post. Our team had performed at a pace that was far out of sync with the client organization. Our customers could not keep the backlog up to date fast enough to feed our team meaningful work. They threw stories together helter-skelter just so they wouldn't be paying us an hourly rate to have our people sit around and wait for something to do. The experience was so stressful for them that they decided they weren't suited for agile work, and they reverted to traditional methods and traditional services firms.

It occurs to me that we may have served our client better had we adjusted the pace of change and the demands for collaboration so that they were more closely aligned with the client's ability to cope with change. The initial project may have taken a bit longer, but we probably would have won follow-on work from the client because they would have learned how to collaborate effectively with an agile services provider. Instead, they came away feeling as if this "agile" stuff can produce impressive results, but it just isn't for them.

Turning all the knobs up to eleven while ignoring the human costs of doing so may not be the definition of success, after all. Beware of consultants bearing gifts of hyperproductivity.

I recall a comment from Ron Jeffries on the Extreme Programming discussion list some time ago, to the effect that if you haven't gone all the way with agile then you don't have a frame of reference to understand when and how to back off from particular practices without losing the value of the agile approach. (Of course, his comment wasn't as verbose as that. Verbosity is an affliction of mine.) Anyway, I've gone all the way more than once and in more than one context. I think I have a frame of reference to understand when and how to back off. I recently had an initial visit with a new client. Until I understand their organizational culture and their values pretty well, I won't be turning those knobs anywhere near eleven.

UMLet UML diagramming tool

posted: 22 Oct 2009

I found a UML diagramming tool that I think is well suited to agile modeling. It's called UMLet, and it runs either standalone or as an Eclipse plugin. The product home page is here: http://www.umlet.com.

The reason I say it's suitable for agile modeling is that it works well, but it isn't slick and it doesn't have ten thousand options for making the diagrams look pretty. When a diagramming tool is too slick, it can be tempting to spend a lot of time fooling with it just because it's fun, and to go into more detail in the diagrams than is appropriate for the agile style of development.

If you like making pretty pictures, or you think our job is to produce highly-polished diagrams rather than working software, or you're a stickler for using the specific icons defined for a particular version of the UML, or if you enjoy playing with diagramming tools to the exclusion of pulling work items through to the "done" state, then you might not appreciate UMLet. If you'd like a simple tool to produce high-level UML diagrams easily, then you might like this tool.

Here are some of the reasons I like UMLet:

  1. It's Open Source software.
  2. It runs on all mainstream platforms: Microsoft Windows, Apple OS X, and Linux.
  3. It runs standalone or as an Eclipse plugin.
  4. It's simple enough to use that it doesn't require a user's manual or a training course.
  5. It supports all the usual suspects - class diagrams, sequence diagrams, use case diagrams, etc. - and is good enough for high-level diagramming without falling in love with its own reflection. It's just crude enough that you won't be tempted to play with it all day long. One might say it's perfect in its imperfection.
  6. It's easy to clip out a diagram and save it as a graphics file, to drop onto your project wiki or into a presentation or document.
  7. It saves the data in XML, so if you have to rescue your diagrams at some future time when UMLet is no longer supported or goes commercial, you at least have some reasonable basis for doing so.
  8. It's easy to check the diagrams into and out of your version control system along with the production code and test suite. Easy to script all that, too.


Here's a crude class diagram I put together in a few minutes' time using UMLet. I'm sure it isn't academically correct, and that's okay with me.

The only problem I've seen with UMLet is that it throws NullPointerExceptions out of the AWT dispatch thread quite frequently. It doesn't seem to do any harm, though.

I'm using UMLet on Ubuntu Linux. Here's how I installed it.

Read It Later plugin for Firefox

posted: 22 Oct 2009

When looking for information online, I often come across interesting articles or blogs that I want to read later. I had been bookmarking them in a folder created for the purpose. When I remembered to look there, the items may or may not still have been relevant and timely. In any case, I had to clean out that folder manually, since they weren't really bookmarks I needed to keep for repeated reference. I recall a day when there were over a hundred items in that folder, and I didn't even remember why I had tagged most of them.

One of the things I came across while looking for other information was a Firefox plugin called Read It Later. Its purpose is to manage a special folder in your bookmarks file where you can stick URLs of sites you'd like to read later. This is for items you'd like to read on a one-time basis; it's not a substitute for RSS feeds. It can also save the pages locally for offline reading. It's easy to move items to your permanent bookmarks or delete them when you've read them. The plugin doesn't actually do all that much work, but it's a time-saver and it's very well-behaved code.

Here's a screenshot of a Firefox window with the Read It Later homepage displayed, and showing the elements the plugin adds to the toolbar.

To add an item to the read it later folder, click on the checkmark near the right-hand side of the toolbar. On the right, the drop-down list of saved sites is open in the screenshot. Simple and useful. I've had no problems with it so far. The Read It Later homepage is here: http://readitlaterlist.com/.

Is this a process smell?

posted: 22 Oct 2009

I was just scrolling back through old blog posts, trying to decide which ones to keep when I revamp the site, and I came across a post from last February on the subject of tools that provide continuous feedback. I noticed a comment from Lee Winder that I had overlooked until now. He writes,

...we have a system which alerts designers when a new build is ready on the CI machine.

If we have fast turn-around, they could be getting alerts every 10/15 minutes, which is certainly to much if they only update once or twice a day.

Since the alerts are e-mails, I've recommended they filter them into a CI folder, only checking the alerts when they need a new build to see when the last one was generated.

I wish I had seen this earlier. It seems to me the statement, "they could be getting alerts every 10/15 minutes, which is certainly to much if they only update once or twice a day," suggests a process smell. It may or may not be a real problem, but it sounds questionable in the context of agile-style workflow. It may be worth finding out (1) why the team is only updating (checking in?) once or twice a day, and (2) why they think that is satisfactory.

It isn't necessarily a problem. "Smell" doesn't automatically mean "bad;" it just means, "Find out where the smell is coming from, in case it's a problem." It might only be that your neighbor's cooking smells funny. OTOH, it may be that your curtains are on fire. It doesn't hurt to find out for sure.

If this team is updating only once or twice a day, then by implication they're only running their own build once or twice a day. Lee's company does video game development in C++, and the practical limit to the number of updates per day this team can make may be different than for typical business application development in languages like Java and C#. Game development involves several distinct types of programming, and the practical maximum builds per day may be different for this team than for other teams working on the same game. Even so, it might be good to understand exactly why they are limited to just one or two builds per day, just in case the curtains are on fire. Couldn't hurt.

The notifications aren't from this team's own build; they are consuming a build from another team. That team is updating at least every 10 to 15 minutes; not bad. A simpler solution might be to remove this team from the distribution list for that build's automatic notification. There's no need to have the email server churning when the notifications aren't useful. Team members have to go in and delete all the useless emails at some point, too. Maybe it's low enough impact that it doesn't matter, in context. It sounds a bit wasteful, anyway.

An information radiator is sometimes helpful in cases when one team needs the latest stable build from another team. I've seen cases when a large monitor is set up in each team room showing the build status for all the parts of the project that are in flight at the moment. This team would be able to tell at a glance whether they needed to pull a new build. I don't know which tools they're using at Lee's place, but I've seen this done using Big Visible Cruise in conjunction with Cruise Control. It may be possible with other CI servers, as well. Just a thought.
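
As a sketch of how cheap such a radiator can be, here's a polling loop in Python. The URLs and the JSON shape are hypothetical, since every CI server exposes its status differently:

    import json
    import time
    import urllib.request

    # Hypothetical status feeds; real CI servers each publish their own format.
    FEEDS = {"platform": "http://ci.example.com/platform/status.json",
             "game-ai": "http://ci.example.com/game-ai/status.json"}

    def poll_once():
        for team, url in FEEDS.items():
            try:
                status = json.load(urllib.request.urlopen(url, timeout=5))
                light = "GREEN" if status.get("lastBuildPassed") else "RED"
            except IOError:
                light = "UNKNOWN"   # feed unreachable; show that rather than guess
            print(f"{team:10s} {light}")

    while True:
        poll_once()
        time.sleep(60)   # refresh the big visible display once a minute

Point the output at a large monitor in the team room, and anyone can tell at a glance whether a fresh stable build is available.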

Agiles2009 Conference

posted: 21 Oct 2009

Ágiles 2009 took place on October 6-9 in Florianopolis, Brazil. It was the second in a series of agile development conferences bringing leading-edge software development and project management practices to Latin America. The first was held last year in Buenos Aires, Argentina. The series is organized by a small group of highly motivated and dedicated people, primarily friends who met during their university years in Buenos Aires and who are driving agile and lean adoption forward in Argentina, but also including professionals from several other Latin American countries who share a passion for agile development. The first conference was a great success. The second was at least as informative and enriching, if not more so.

Of course, these are not the only agile- and/or lean-focused events in Latin America, but they are the largest and they seek to bring in international participants. With strong support from the Agile Alliance and from key commercial sponsors, the Ágiles conferences bring together agile and lean practitioners and researchers from across the continent as well as from North America and Europe and help tie together the various local and national agile/lean adoption movements in Latin America.

In addition to Latin American thought leaders in agile and lean adoption, the conference included several well-known figures in the general agile movement, including Diana Larsen, Brian Marick, Joshua Kerievsky, Naresh Jain, David Hussman, and others. There were too many excellent sessions to describe in a blog post, and the quality of those presented by Latin American speakers was on par with that of the better-known speakers. Look to this region for the next generation of leaders in moving the industry forward with effective management and software development methods.

Several participants took some great photos of the event. You can find links to their photo albums on the Latin American Agile Community site: http://sites.google.com/site/comunidadagiles/agiles-2009. I didn't take any great photos, but I took a few mediocre ones. Here's a shot of me with some of the Argentines I met last year, as we began to gather in the lobby prior to the opening keynote:

Left to right: Alejandra Alfonso, Adrián Eidelman, Emilio Gutter, Juan Gabardini, and me.

This one is from Joke Vandemaele's presentation on Power Workshops, a technique she has been using successfully to launch complex agile projects at a company in Belgium. The visuals on the wall are the ones from an actual project that she used as an example.

Brian Marick made himself available for pairing during the conference. He invited anyone who wanted to sit with him to work on an application for his wife's veterinary practice. Here he has grabbed a spot in the lobby and is pairing with one of the conference participants as others look on.

After the final session on Thursday, a rock band made up of people from OnCast Technologies, a Florianopolis-based agile consultancy, put on a concert in the lobby and invited others to join in and jam.

The conference proper was two days long, and was preceded by two days of training courses. These included a TDD & Refactoring course taught by Naresh, a CSM course taught by Alan Cyment, a Certified Product Owner course taught by Alexandre Magno, and a course on retrospectives taught by Diana.

My small contribution was a presentation of my session on Agile Metrics. The session continues to be popular; this one was standing room only and ran overtime, as usual. Although it seems a rather mundane topic, it is clearly on the minds of many project managers and program managers as they learn different ways of planning and tracking projects.

Florianopolis is quite a nice city. In my spare time I walked around and saw as much of the place as I could. I especially enjoyed the natural scenery around the bay.

The weather was cloudy the whole week. The clouds and mist on the mountaintops in the distance made for an ever-changing scene.

Some parts of the city reminded me of certain European cities. This is the old central market plaza, for instance:

All in all, a great conference with great people, and a fantastic place to visit.

With last year's conference in Argentina and this year's in Brazil, we've all had a chance to try the churrascarias of both countries. Both countries are world-famous for their beef. The burning question on the minds of the conference organizers, since most of them come from those two countries, was: who has the best beef, Argentina or Brazil? The question reminded me of Belgians and Germans debating who has the best beer; it's basically moot, since Ireland has the best beer. (In my humble opinion, anyway.) Sorry to disappoint our gracious conference hosts, but I have to say Venezuela has the best beef. Ágiles2010 will be in Lima, Perú, so the true test must wait for a future year. Fortunately, the future of agile and lean will be a long one.

Grooving to the buzz

posted: 30 Sep 2009

An agile team at my current client finally got a proper team work room set up, following lengthy negotiations with the Furniture Police. Until now they had been working in adjacent cubicles, arranged about as conveniently as you can imagine cubicles being arranged.

Although the cubicle setup wasn't bad, having a true team work space has made a world of difference, even though the whiteboards haven't arrived yet and the team is still waiting on the wider tables they requested. The team's throughput tripled in the first iteration after the space was set up. The "buzz" of a well-functioning agile team is in the air; it was absent in the cubicle environment, even though that was arranged as reasonably as possible and team members collaborated as closely as they could.

Osmotic communication and promiscuous pairing, managed pomodoro-style, have kept the team's truck number high and have even reduced the need for such basic overhead activities as the daily stand-up. The team has a stand-up when they need to discuss something or when a stakeholder comes to the team room to participate. Otherwise, they communicate seamlessly and continuously about issues and status.

We haven't tried to keep track of every little move the team members make, but I strongly suspect they're enjoying cost savings just by being properly collocated.

I'm more convinced than ever that a proper team work space is a First Domino agile practice. Why would any software development team choose not to work this way, especially after all these years of proven effectiveness in the field?

Rails through thick and thin

posted: 29 Sep 2009

It seems as if everyone likes to blog about the fat model and skinny controller idea when they're just learning about Rails development. I'm just learning about Rails development, so here's my blog post on the subject.

The canonical model-view-controller architecture calls for the controller to be lightweight. There are good reasons for this, including flexibility to scale across architectural layers, loose coupling, ease of understanding the code, and more. As a Java developer, I typically follow this guideline when building webapps with MVC frameworks like Struts2. It's only natural for Rails developers to try to conform to this very logical design guideline.

When you generate Rails code, the out-of-the-box solution has very thin models, with most of the logic in the controllers: quite the opposite of generally accepted good practice for an MVC app. There's a lot of advice out there for people who want to skinny down the controllers, and there's nothing new about it. For instance, there's a nice write-up by Michael Koziarski dating from 2007. The "before" version of the create method in his ReportsController is a great example of what can happen when you jam too much logic into a controller. In an even earlier example (2006), Jamis Buck shows the value of pushing logic from the controller to the model; the resulting find_recent method in his Person model class is excellent.
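
To make the contrast concrete, here's a minimal sketch in the same spirit (the Account model and the find_overdue method are my own invented illustration, not taken from either write-up):

    # Before: the controller knows the details of the query.
    class AccountsController < ApplicationController
      def index
        @accounts = Account.find(:all,
          :conditions => ['balance < ? AND updated_at < ?', 0, 30.days.ago],
          :order      => 'updated_at DESC')
      end
    end

    # After: the query moves into the model, and the controller slims down.
    class Account < ActiveRecord::Base
      def self.find_overdue
        find(:all,
          :conditions => ['balance < ? AND updated_at < ?', 0, 30.days.ago],
          :order      => 'updated_at DESC')
      end
    end

    class AccountsController < ApplicationController
      def index
        @accounts = Account.find_overdue
      end
    end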

But there's another issue to consider. Rails models are based on the ActiveRecord pattern. As a result, Rails models are tightly coupled to the persistence mechanism. By default, the persistence mechanism is a relational database, although this can be overridden. Because of this coupling, Rails models really don't feel quite like a domain layer; at least, not to me. It "feels" as if we need another layer between the controller and the model. I've heard and read Rails developers resist this idea, sometimes strongly. At the same time, other Rails developers say that pushing too much business logic into the models amounts to substituting a new problem for the old.

Besides all that, it's hard to test Rails models without a dependency on the database, due to the lack of separation of concerns. There are tools that mitigate the problem, such as Thoughtbot's FactoryGirl, but they mainly make it easier to define fixtures; they don't really isolate you from the database, so you still end up with some tests touching a real database instance. In a large app, tests that depend on a real database instance drive up build times, and long build times tend to have a domino effect on other good practices.
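
For what it's worth, here's roughly what that looks like with FactoryGirl's circa-2009 API (the person factory is invented for illustration). Building in memory avoids the database, but as soon as anything saves, you're back to touching a real instance:

    # factories.rb -- FactoryGirl 1.x style definition
    Factory.define :person do |p|
      p.name  'Alice'
      p.email 'alice@example.com'
    end

    # In a test or spec:
    person = Factory.build(:person)   # built in memory; no database row yet
    person = Factory(:person)         # built and saved; this hits the database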

So, we want skinny controllers, and Rails models are neither a business layer nor a domain layer, properly speaking. What should we do? I think it's legitimate to treat the models as a thin persistence wrapper and to add either helper classes or a distinct layer between the controllers and the models to serve as a business layer, if you need one. You might not need one, since most business applications are almost purely CRUD apps and don't actually have much "business logic." Let the models focus on one concern, persistence management, and pull the other concern, the domain model as such, into a separate layer. That seems more loosely coupled and cleaner than the default architecture.
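
Here's a minimal sketch of the shape I have in mind, with invented names throughout (Order, OrderDeactivation, and the 'inactive' status are illustrations, not a recommendation for any particular domain):

    # The model stays a thin persistence wrapper: generated code plus declarations.
    class Order < ActiveRecord::Base
      belongs_to :customer
      validates_presence_of :customer_id
    end

    # A plain Ruby class between the controllers and the models carries the rule.
    class OrderDeactivation
      def self.run(order_id)
        order = Order.find(order_id)
        order.update_attributes!(:status => 'inactive')
        order
      end
    end

    # The controller stays skinny; it just delegates to the business layer.
    class OrdersController < ApplicationController
      def deactivate
        @order = OrderDeactivation.run(params[:id])
        redirect_to :action => 'index'
      end
    end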

Most of the time, the hand-coded methods people add to Rails models really just handle a find with special conditions (as in Jamis' find_recent example) or save an object with a change of state, like from "active" to "inactive." It seems to me this isn't necessarily part of the model's responsibility; it feels more like a choice being made by client code. The default model code already handles find conditions passed in from the client, and it already knows how to save whatever values the client passes to it. Custom methods that ostensibly make the intent of the code clearer may not actually do so. They may obfuscate the intent by separating the conditions for a persistence request from the context that makes those conditions obvious.
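
Put the two side by side (with an invented two-week window, since I don't recall Jamis' exact conditions) and I'd argue the inline call communicates at least as much:

    # Custom finder: the name looks intention-revealing, but the actual
    # conditions are hidden off in the model.
    people = Person.find_recent

    # Inline: the default finder already supports this, and the choice of
    # window is visible right where the client code makes it.
    people = Person.find(:all,
      :conditions => ['created_at > ?', 2.weeks.ago],
      :order      => 'created_at DESC')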

Going with generated code for models, plus a few declarations to invoke built-in functionality like has_many and validates_presence_of, also alleviates the problem of isolating unit tests or specs for model classes. A rule of thumb for unit testing is that we should not test generated code or the plumbing/framework; we should test hand-written code. When you treat Rails models as a thin persistence wrapper, there is literally no hand-written code in the model classes. The problem of unit tests that depend on an external database simply disappears, because you don't have unit tests for the model classes at all; they consist entirely of generated code and declarations. Classes in your true domain layer can be isolated for testing just by mocking the models. In effect, you let the model classes adhere to the Single Responsibility Principle simply by omitting domain logic from them, leaving them with the single responsibility of wrapping the persistence mechanism. Now the single reason to change is that you need to change the persistence mechanism from a relational database to something else.
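
As a sketch of what mocking the models looks like in practice, here's a minimal example using the Mocha library; Account stands in for a generated model, and OverdueReport is a hypothetical domain-layer class invented for illustration:

    require 'test/unit'
    require 'rubygems'
    require 'mocha'

    class Account; end  # stand-in; in the real app this is the generated model

    # Hypothetical domain-layer class: all persistence goes through the model.
    class OverdueReport
      def names
        Account.find(:all, :conditions => ['balance < ?', 0]).map { |a| a.name }
      end
    end

    class OverdueReportTest < Test::Unit::TestCase
      def test_report_lists_overdue_account_names
        account = stub(:name => 'Acme')
        Account.stubs(:find).returns([account])  # no database touched
        assert_equal ['Acme'], OverdueReport.new.names
      end
    end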

I wanted to think through the question of skinny vs. fat to see whether there might be implications for scaling a Rails app. This is what I came up with, FWIW.

The app you get by default from the generators is designed for small-scale, low-volume CRUD apps that live entirely on a single server. Whether you have the bulk of your business logic in a controller or in a model doesn't make all that much difference. A messy method in a model is just as hard to read as a messy method in a controller, and since the whole mess lives on the same server, it's a wash. It's like squeezing a balloon that is half-filled with air. When you squeeze one end, the air bulges into the opposite end. Think of one end as the controller and the other end as the model, and you can see the effect of moving code between the controller and the model. It really comes down to a question of personal preference.

Architectural niceties start to make a difference when you go beyond the level of small-scale, low-volume CRUD apps that live entirely on a single server. How you need to scale depends on the circumstances, and most forms of scaling will not affect the application code at all because they are handled by assets external to the application, like routers or containers. In those cases, it still will not matter whether most of your logic resides in controllers, in models, or in an additional layer between the two.

Let's say we begin with a small-scale, low-volume CRUD app that lives entirely on a single server. Over time, the user base for our app grows. We need to be able to handle a higher workload than we did originally. This can be done by running the Rails app with a facility like Passenger for Apache. Passenger is smart about caching query results and keeping objects instantiated in memory between requests. It enables a Rails app to handle a higher transaction volume in a way that is completely transparent to the application code. In this case, your personal preferences regarding controllers and models neither help nor hinder scalability; you are free to do as you please.

Maybe the need for scaling is a question of availability rather than transaction volume, or in addition to transaction volume. In this case, we can set up a router in front of the application that sends requests to alternate copies of the application running on different servers. The servers may be clustered and may support hot failover, or other features to support high availability. This can be done in conjunction with a facility like Passenger, too. The different instances of the app can access the same back-end data store or different ones, as appropriate. There is no effect on the application code, so your personal preferences are still completely valid.

One of the ways we scale Java webapps is by deploying different architectural layers independently. That way, we can deploy multiple controller instances or multiple business layer instances and so on as needs change. The advent of virtual servers makes this approach all the more feasible and responsive to dynamically-changing workload demands.

You wouldn't normally separate Rails views, controllers, and models physically. If you needed to do so, you could use routes.rb to route different requests to different instances of the business layer or model layer. I can't actually think of a use case for it, but in any case it still doesn't affect the application source code and wouldn't push you toward thin or thick controllers, particularly. The use cases that drive us to separate components in Java webapps can be supported by deploying multiple copies of the whole Rails webapp.

Another scaling solution is to route requests to different servers based on geographical region or company code or some other piece of information that comes in on the request message. You could create a thin controller layer that does nothing other than interpret the relevant code and pass the request through to another controller residing on a different server. When you have different transaction volumes or different quality-of-service levels for different regions or companies (or whatever), this may be a reasonable way to split the traffic across multiple servers. The base application code still doesn't have to change; all you need to do is create a new set of controllers, or a single controller, to pass the requests through to the appropriate instance of the app.
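
A rough sketch of such a pass-through controller (the GatewayController name, the region-to-host table, and the hostnames are all invented for illustration):

    require 'net/http'

    # Thin pass-through layer: read a code from the request and forward the
    # call to whichever server handles that region.
    class GatewayController < ApplicationController
      REGION_HOSTS = {
        'emea' => 'emea.example.com',
        'apac' => 'apac.example.com'
      }

      def forward
        host = REGION_HOSTS[params[:region]] || 'www.example.com'
        response = Net::HTTP.get_response(host, request.request_uri)
        render :text => response.body, :status => response.code.to_i
      end
    end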

You could conceivably achieve the same goal by defining special routes that included the necessary routing code, rather than introducing another controller layer. In that case, clients would have to include the routing code in the URLs they used to invoke the Rails app. This may or may not be desirable, depending on circumstances. In any case, it has no effect on the application source code, and so your personal preference for skinny or fat controllers is unaffected.
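
In Rails 2 terms, such a route might be as simple as a leading code segment (the :company_code segment here is an invented illustration); the trade-off is that the code appears in every client-facing URL:

    # config/routes.rb -- Rails 2.x routing syntax
    ActionController::Routing::Routes.draw do |map|
      # URLs look like /acme/orders/show/42; the app, or a front-end proxy,
      # can dispatch on :company_code.
      map.connect ':company_code/:controller/:action/:id'
    end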

It's possible that the need for scaling has to do with the volume of data transferred to and from the database. We can accommodate that by changing the database adapter and using a different DBMS product; perhaps moving from MySQL to Oracle or UDB. That's transparent to the application code, as well, unless you've nailed your feet to the floor by using stored procedures.

Even if you've nailed your feet to the floor, you can pull them loose again by doing away with the built-in database access in the models and having them call a service layer instead; something like an enterprise service bus, perhaps. Your database resources (or whatever the real source of data happens to be) can be isolated behind the service layer where the details are transparent to your application code. In this case, your personal preference may affect the difficulty of the change.
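
To sketch the idea (ServiceBus is a hypothetical client for whatever sits behind the service layer), the "model" keeps its public face but delegates persistence to the service:

    # A plain Ruby stand-in for the ActiveRecord model. Client code still calls
    # find and save, but the data comes from a service layer rather than a
    # local database adapter. ServiceBus is a hypothetical ESB client.
    class Customer
      attr_accessor :id, :name

      def initialize(attributes = {})
        @id   = attributes['id']
        @name = attributes['name']
      end

      def self.find(id)
        new(ServiceBus.get("/customers/#{id}"))
      end

      def save
        ServiceBus.put("/customers/#{id}", 'name' => name)
      end
    end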

This is a fairly typical situation in enterprise IT organizations, including those whose feet are not nailed to the floor. Business apps need to pull data from a variety of back-end sources, and the sources may change without notice provided the interface remains stable. If you're using a paper-thin model that acts as nothing more than a persistence wrapper, it's a trivial modification to call the service layer instead of a local database adapter, and your existing unit tests provide a safety net for the changes. If you have a fat model design, you may have more work to do. Realistically, this sort of architectural issue would probably be known at design time, so it's really an academic exercise. When the day comes that there are a lot of legacy enterprise Rails apps around, it may become a more common situation.

Personally, I like the skinny model approach because it simplifies testing. I don't mind seeing a bit of business logic in the controllers, and since Rails is a tool for CRUD webapps, there won't be extensive business logic in most cases anyway. If the controllers start to look too fat, it's easy enough to move some of the logic into helper classes. If the helper classes call the models, then you can think of them as a sort of "business layer," although that's really a matter of code organization and readability and not a matter of architecture, since the Rails app will be deployed all in one chunk. Another way to say it (although in a post this long the last thing we need is another way to say it) is that for a Rails app MVC is a design pattern rather than an architectural pattern.

My conclusion about this matter is that it doesn't make a difference one way or another. Structure your Rails app in the way that makes the most sense to you. If you need to scale it later on, it will make little or no difference whether you've written thick or thin controllers. Of course, this conclusion doesn't automatically apply to other languages or frameworks, or to applications other than webapps.

I reserve the right to change my mind without notice as I continue to learn about Rails development.

Some CFEclipse templates

posted: 25 Sep 2009

I'm just wrapping up an engagement working with a team that uses ColdFusion. I found that ColdFusion requires a considerable amount of redundant typing, and I created a few Eclipse templates to save keystrokes. The templates work with the CFEclipse plug-in and the unit testing framework MXUnit. I'm sharing them here in case anyone else can use them. They are:

Name     Description
ad       Insert call to assertDatesAreEqual
ae       Insert MXUnit assertEquals in a cfset tag
ef       Insert MXUnit assertFalse in a cfset tag
arg      Insert cfargument tag template
at       Insert MXUnit assertTrue in a cfset tag
ax       Insert try/catch for MXUnit expecting an exception
before   MXUnit setUp function
fail     Insert MXUnit fail in a cfset tag
fun      Insert cffunction tag template
helper   Instantiate a TestHelpers component
if       Insert cfif tag template
init     Insert template for an init function
loop     Insert cfloop tag
ret      Insert cfreturn tag
set      Insert cfset tag
test     MXUnit test function
this=    Insert template cfset assignment from arguments

Download this zip file: cfeclipse_helpers.zip. The archive contains:

  • cfeclipse_templates.xml
  • MockSystemClock.cfc
  • TestHelpers.cfc
  • TestHelpersTest.cfc

To install the templates, with the CFEclipse view open in Eclipse, go to Window -> CFEclipse -> Editor -> Templates and choose Import. Point to the templates file and import it.

The .cfc files are test helpers for isolating MXUnit tests that need to mock the system date.

Enjoy.

The new interview

posted: 23 Sep 2009

I'm not sure whether this is a US tradition or if it's more general, but job candidates in this country tend to oversell themselves. Americans (and maybe others, as well; I can't speak to that) either believe they are gods or they assume they have to convince hiring managers that they are gods in order to have a chance of getting a job.

Yesterday, the team I'm presently coaching checked out two candidates for temporary contract positions on their project. This team spends very little time talking to candidates, and quickly moves to the audition phase, in which candidates pair with different team members to complete a small, contrived application. They have learned through experience that this is really the only meaningful part of the process, and that they cannot rely on résumés or interviews (or certifications; pay attention, ye who advocate developer certification programs!) to find out what a person can do. In fact, most of them don't bother to read candidates' résumés. They have learned that there is rarely anything factual in a résumé.

I was present during the interview phase and observed portions of the audition phase, although I didn't comment or interfere. It seemed to me that both candidates were reasonably good at the work. Neither was extraordinary, but they were okay. Clearly, they had at least a moderate level of practical experience in both of the technology areas relevant to the project. It took them a long time to get through the little exercise, but they did manage to get it done. They probably would have performed adequately on the project.

The team decided the two were not up to par technically. I'm speculating that the candidates may have screwed themselves during the interview phase.

Both did a wonderful job of selling themselves. Based solely on their own words, one would expect each to be a one-of-a-kind expert in the technologies relevant to the project, and an all-around genius generally. By the time they had finished describing themselves, I was wondering why their names aren't household words, like Einstein or Newton. They didn't perform badly in the audition phase, so I have to wonder whether they set expectations too high during the interview phase. There is absolutely no way they lived up to their respective self-portraits. Frankly, I'd be surprised if anyone could have.

I wonder whether this reflects the difference between the conventional way of interviewing and the more empirical approach that is becoming commonplace in agile software development circles. People used to have to sell themselves to employers strictly on the basis of words, both written and spoken. Nowadays, the words can backfire unless candidates can confirm their claims when they sit down to write code. Given the extreme degree of exaggeration typically found in résumés, it's unlikely many individuals can live up to their own sales pitch.

These two may well have been selected had they sold themselves as reasonably competent developers, with a balance of confidence and humility, who enjoyed a collaborative working style and wanted to work with a high-performing team where they could share knowledge and learn from their peers while delivering value to their customers. They didn't.

Maybe next time.

A shortcut to organizational culture change

posted: 22 Sep 2009

The other day I stumbled across a shortcut to organizational culture change: what you might call a tantric path to cultural change. Metrobus, an operator in the Washington, DC rapid transit system, has been running radio advertisements to announce that they have implemented a new organizational culture that emphasizes responsibility.

Apparently, all you have to do to change organizational culture is make a public announcement. A lot of us who work with clients to support agile and lean adoption have been going about it all wrong. We've been making things way too difficult. It's really much simpler than we thought. We just have to Speak the desired organizational culture into existence.