Bluemini.com

Software best practices

If You Want To Improve, Stop Managing Your Problems

posted: 30 Sep 2009

…and start solving them. Sounds great, but what does it mean? What’s the difference between managing a problem and solving it?

I recently held a workshop on using Scrum to drive process improvement at CIISA 2009 in Guadalajara, Mexico. I focused on using Scrum to find problems with a development organization’s processes and practices, and then on using the A3 Problem Solving process* to drill down to the root causes and eliminate them. Let me give you a brief overview of what I covered.

Every software development organization can deliver some functionality, at some level of quality, in some amount of time. But can they deliver a specific, committed amount of functionality, at a high level of quality, in a specific amount of time? That is what we are asking for at every sprint planning meeting. Do we get it? Usually not, when we first transition to Scrum. Failing to meet a commitment is not unique to Scrum; the vast majority of traditionally-managed projects fail to deliver the desired functionality within the established schedule and cost constraints. One advantage of Scrum is that you fail faster, while you still have time to do something about it.

Failing to deliver isn’t bad… if you seize the opportunity to improve. Failures happen for a reason. What was the reason? Why did you fail? This next statement may seem tautological in its obviousness, but true wisdom is often found in simple profundity. You failed because a problem exists that prevented you from succeeding. You won’t stop failing until you recognize the problem, understand the root causes, and then make the changes required to eliminate the root causes and solve the problem.

Is this how most companies handle problems? I don’t think so. It’s human nature to want to see only the good things. Twenty-five hundred years ago, the Athenian playwright Sophocles wrote that “No one loves the messenger who brings bad news,” reflecting the then-common practice of an angry ruler killing the bearer of bad tidings. “Don’t shoot the messenger” has since become a common admonition not to blame the person for the problem. Yet we see the opposite around us every day: someone yelling at testers when they find a lot of bugs, or rejecting high estimates and demanding ‘more reasonable’ ones. Perhaps we need to rethink our approach to problems that arrive as bad news.

At Toyota, honesty is prized. Managers are taught to present bad news first, and no one is admonished for doing so. To the contrary, managers are severely admonished if they don’t bring bad news to their superiors. Similarly, Jim Collins writes about the Stockdale Paradox in “Good to Great.” Companies that excel do so because they maintain faith in their ability to prevail in the end while simultaneously confronting the brutal facts of their current reality, whatever those facts are. They embrace the truth, no matter how awful it may be, because they understand problems must be acknowledged before they can be solved.

Ken Schwaber, one of the creators of Scrum, says “Scrum doesn’t solve your problems, it exposes your problems.” Scrum isn’t a Silver Bullet. Your organization needs to solve its problems if it wants to get better. Managing a problem is kicking the can down the road, working around the problem, putting it off until it can no longer be avoided. All too often that is exactly what we do.

For instance, let’s say that one day last week you walked out to your car and noticed the front left tire was low on air, so you stopped by the local gas station on the way to work and filled it up. A few days later, you noticed it was low again, so you filled it up again. You managed the problem. Now the weekend is here, and you’ve scheduled a round of golf with your buddies in the morning followed by an afternoon at the stadium watching your local team. You have a choice: do you change plans so you can get the leak found and fixed, or not? Sure, you can keep topping the tire up every couple of days, but we all know what eventually happens to a leaky tire. It fails at the worst possible time: a blow-out on the freeway, or a flat discovered after a late night at the office. Problems that aren’t solved only get worse, and the solution becomes more onerous.

Solving problems is a choice; it is up to your organization to solve them… or not. You have to have the courage to exercise integrity at the moment of choice** and make the decision to solve problems, because not making a decision is itself a decision. Actively decide to solve your problems.

Contact me if you’d like an A3 Problem Report template, along with examples and instructions.

______
*Tom Poppendieck, Gabrielle Benefield, and Henrik Kniberg led an excellent workshop on value stream analysis and problem solving using the A3 Problem Solving process at the Agile2009 conference… thanks!

**Stephen Covey, The Seven Habits of Highly Effective People

Fail Yet Succeed?

posted: 18 Sep 2009

If you build EXACTLY what “they” tell you to build, in the timeframe they ask for, and at the cost they want to pay, is that a successful project? The project is:

  • On time
  • On budget
  • Delivers the requested functionality
  • Has no defects
  • Leaves the team ready for the next project

Is it successful?

“Yes,” you say. Rightfully so.

But what if I tell you that the above project (project A) was followed a short while later by another project (project B) to change the project A functionality because… drum roll… it didn’t solve the real problem?

Is project A still a successful project?

“No,” you say. “They should have built the right thing, not just built it right.” Right again. The addition of project B means that building the right thing took the combination of A & B, which blows the budget, schedule, etc.

But what if I tell you that, during project A, the team counseled, cautioned, cajoled, and complained that the functionality being built was wrong, but “they” INSISTED that this was what they wanted?

Is project A a success again?

“Well, um, yes… but, um, no,” you say. Why the uncertainty? Shouldn’t a project that meets the project success criteria be a successful project regardless of what it produces?

It depends, I guess, a little bit on the angle you take to look at it. From a pure development team perspective, it is a success. From a business perspective, it is a failure.

A friend of mine was telling me last weekend about a new outsourcing move his company made recently. “One of the most frustrating things about our vendor,” he whined to me, “is that they do exactly what we tell them to do and nothing else.”

“Isn’t that what you want a vendor to do?” I asked in all innocence.

“No, I want them to build stuff that works,” he said in all seriousness.

My radical development buddies would tell me the fault of project A lies totally at the feet of the business since they said what they wanted.

My gung-ho business friends would accept some blame but put some back on the development team, pointing out that the team should have known better and that, by the way, their method of development should have stopped them from doing the wrong thing.

My consultant friends would opine that the development team has the responsibility to deliver to the “real” requirements—whatever those are. The issue here is that the only method most of my consultant friends have for getting requirements is asking “they” what the requirements are: the same “they” who INSISTED on what was needed and were wrong.

My less helpful friends would say this is a hypothetical and 1) they don’t answer hypothetical questions (practicing to be nominated to the Supreme Court) and 2) even if they did, this wouldn’t happen in real life.

Right. Never happens in real life. Sure. It happens when we ask what “quality” means on a project. It happens when the drop-dead date arrives, nobody drops dead, and the project goes on to release a bit later and still do well. It happens when dev teams are forced into project constraints (time/resources/functionality) that have very little correspondence to the reality of the work to be done. We find it when the business can only talk in terms of solutions, not problems.

So, thinking in black & white for a moment (the world is a lot more gray, but gray muddles the critical questions), can you have a successful project that is still a failure for the business? I think this is not only possible but happens more often than we would like. More likely still, we have unsuccessful projects that nonetheless deliver something that helps the business.

So which is better:

  • Successful project, failed business
  • Failed project, successful business

Of course we would all like the successful project and successful business. Construx has identified the 10x principles that make that happen. But since that seems kind of rare, which of the above do we choose? Do your methods help you get there?

Given those choices, I sacrifice the project. And once I make that black & white choice, working on software projects becomes a lot less stressful and the muddled gray not so muddled. I can fail yet succeed.

Facebook Page

posted: 16 Sep 2009

I now have a public Facebook page at http://www.facebook.com/n/?pages/Steve-McConnell/198720075270&mid=8a4602G316afb94G1ae8a37G4c . I plan to use this page for small scale blog entries, updates on what I'm reading, announcements, and so on....(read more)

Free Webinar: 10 Deadly Sins of Software Estimation

posted: 16 Sep 2009

I'll be giving a free webinar tomorrow at 10:00 am Pacific time on the 10 Deadly Sins of Software Estimation. You can sign up here: http://www.sdtimes.com/content/webinars.aspx Here's the full announcement: The average project overruns its planned budget and schedule by 50%-80%. In practice, little work is done that could truly be called "estimation." Many projects are scheduled using a combination of legitimate business targets and liberal doses of wishful thinking. In this talk...(read more)

Estimation Does Matter

posted: 16 Sep 2009

Recently, Mark over on the Agile Project Management Yahoo discussion list posted this little remark.

“A feature will take *exactly* the same amount of time whether the estimates are "good" or "bad"!

“I swear I'm going to print that on a 10 foot banner and hang it over my desk for our entire organization to see.

“As a community, I believe we spend wayyyyyyyyyy too much time talking about estimation. A "good" estimate is never going to get a feature done sooner.

“Instead of spending time talking about estimation, we should be focusing on the other engineering practices, e.g. specification driven development, continuous integration. At least they have a positive impact on delivering value.

Mark”

The question to ask is, “Is this true?” Does a feature take the time it takes regardless of how much time we think it will take? That is, if I have a feature that in its essence is one week’s worth of work, will it still take a week to develop regardless of whether I estimate it at more than a week or less than a week?

Or, more simply, does the quality of an estimate, its “goodness” or “badness”, have any impact on the work?

The first argument that the quality of the estimate does have an impact on the work is Cyril Northcote Parkinson’s observation in a 1955 Economist essay: “Work expands so as to fill the time available for its completion,” commonly known as Parkinson’s Law.

If we were to give my one week feature two weeks to complete, Parkinson’s Law would suggest that the feature would end up taking two weeks regardless of its nature.

Now, I am sure that there is some limit to Parkinson’s Law. If I gave my one week feature ten years to complete, it would most likely not take the ten years. However, it is just as likely to take more than its innate week.

Why do we hold Parkinson’s Law to be true? Mostly, it is an observation of our collective experience. We can think of trivial examples: students putting off papers until they are almost due, a lowered sense of urgency while doing the work, increasing the amount of “polish” or tuning we do rather than stopping at “good enough,” and so on.

Add to that the planning errors we make. Since I estimated my one week feature would take two weeks, I did not think I could fit a second one week feature into this work period, so I set that second feature aside.

What is the impact of that? It depends, of course, but picking up that second feature in the midst of my work period means its tasks must be started at a point in the schedule that was never designed for them. Instead of discussing the feature with the customer who was present at the initiation, I must track the customer down and hold the same discussion, now out of context. Just the extra work of finding the customer, explaining the situation (the first feature was done in “half the time”), and then doing all the tasks I would have done earlier had I had a correct estimate makes the second feature take just that much longer than its inherent nature would suggest.

So I think Parkinson’s Law holds true enough for overestimation. What about underestimation? Is there an impact on the work when we estimate it at less time than it really needs?

I think that the answer here is also a definite “Yes!” Several things conspire to actually increase the amount of time needed to build our one week feature. Some of that increased time is realized on the current work; some is taken on as technical debt.

The current increase to my one week feature comes from planning errors and schedule pressure. When I estimate it will be done in two days rather than the one week of its nature, I make plans based upon that estimate. Some of those plans involve coordinating activities with other people and resources. When I cannot meet those coordination points because my work is taking longer, those plans need to be changed. If the people and resources are the typical in-demand things that they are, then it will take more time to establish new coordination points, thus increasing my work on the feature. At the least, I have had the extra work of establishing the coordination points twice.

Steve McConnell also documents in Rapid Development that just the feeling of schedule pressure can drive up defects. We make more mistakes when we believe we don’t have the time to do the job as it needs to be done. These defects take time to correct and, thus, increase the schedule for my one week feature.

Some of the increased time to complete the feature can be deferred as technical debt. Rather than repeat my post on the sacrifice of non-functional attributes to meet schedules, I will note that we can meet almost any schedule if we are allowed to “relax” non-functional attributes of the feature. However, relaxing things like “maintainability,” “portability,” “readability,” “usability,” etc. is likely to come back in future feature work and increase the time then.

This concept of estimates having an impact on the work is nicely summarized in a poster that my employer, Construx, offers.

On the right we see the linear impact of Parkinson’s Law; on the left, the non-linear impact of planning errors and technical debt.
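
To make the shape of that curve concrete, here is a toy model in Python. This is purely my own illustration of the idea, not the Construx poster’s data or a validated cost model; the 0.8 and 0.5 penalty factors are invented solely to sketch the shape.

    def actual_effort(true_effort, estimate):
        """Toy model: how estimate quality affects actual effort (in days).

        Assumptions, invented purely for illustration:
        - Overestimate: Parkinson's Law expands the work linearly to fill
          most of the extra allotted time.
        - Underestimate: schedule pressure adds rework and technical debt
          that grow non-linearly as the shortfall widens.
        """
        if estimate >= true_effort:
            padding = estimate - true_effort
            return true_effort + 0.8 * padding                 # linear expansion
        shortfall = true_effort - estimate
        return true_effort + 0.5 * shortfall ** 2 / estimate   # compounding rework

    # A feature whose innate size is 5 days, under various estimates:
    for est in [2, 3, 4, 5, 7, 10]:
        print(f"estimate {est:2d} days -> actual {actual_effort(5, est):4.1f} days")

Running it shows the minimum at an accurate estimate, with actual effort climbing gently to the right of it and steeply to the left.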

It seems Mark may be describing only a simple case. For most of us, the quality of the estimate does have some impact on the time required to complete a feature.

Watching Agile Grow Up

posted: 16 Sep 2009

Is Agile, which was a baby a few years ago, growing up to be just another moody development adolescent on the way to becoming a ho-hum mainstream adult? One of the fascinating (or darn scary) aspects of having children is watching them grow up. As they take on more and more decision making on their own, they begin to do things that, frankly, can make you cringe.

I bet that is true for the Agile thought leaders as well. How well are things like empowerment and high-bandwidth communication maturing in global enterprises? Are companies turning Agile into something a true zealot would dislike? Have you ever wished your child didn’t mention they were related to you?

What started me thinking about this was a common implementation trick I see in Agile teams: the iteration zero.

In Agile, one of the base “rules” is to deliver value (as defined by the value definer) every iteration. That is, every two weeks (or however long your iteration is) you must deliver something the customer (whoever that may be) can use to meet their objectives (whatever those may be) in some, possibly partial, way.

The thought is that an architecture, while useful to the development team and an enabler of other activities, generally is NOT useful to the customer. The typical value delivery item is some amount of the end functionality actually working in executable code. One can also deliver a required document, such as a training manual, but code is strongly preferred.

To pull this off from the very first iteration requires some sleight of hand. A common strategy of by-the-book Agile implementations is to develop that executable code in an environment that is not the target environment—say, an Excel macro. This is done with only a small amount of the team’s available capacity, while the rest of the team’s capacity is used to build the target environment.

While this strategy is true to Agile’s values, I don’t see it very often. More likely than not, the Agile team that needs to do some grunt work before it starts delivering value will invoke an iteration zero.

In an iteration zero, the team builds the target environment, sets up the regression testing/build tools and, increasingly, does some requirements and design work all while delivering no value to the customer. Hence the zero.

Iteration zero is also often a different length than the follow-on iterations. While a common iteration length is somewhere from 2-4 weeks, an iteration zero can be a couple of months long. It takes time to get ready to deliver value.

Upfront work: what are those Agile thought leaders thinking? Doing requirements (some, at least) and design (again, some at least) before writing any code. Locking in architectures (mostly). Just like the old school development methods. Are they cringing?

Now, I have watched the Agile thought leaders spin this as what they intended all along. I don’t think it is what they intended. You can look at things like co-location going distributed and test-first returning to test-please. Not part of the original plan.

Agile is out of their hands. The child is making its own calls. Agile, as now touted by the thought leaders, is different. Maybe the thought leaders are getting wiser as well.

Next Generation Project Planning Tool: LiquidPlanner

posted: 16 Sep 2009

I receive several requests a year to sit on various advisory boards, and I always say no--I just don't have the time. Last year I received a request I couldn't refuse from Charles Seybold, Bruce Henry, and Jason Carlson at LiquidPlanner . I had known Charles and Bruce when they were at Expedia and thought highly of their work, but the real appeal was the tool they were building. They started with the vision of an online project planning tool that would include probabilistic scheduling , in...(read more)

Construx Offers Free Training for Laid-Off Software Workers

posted: 16 Sep 2009

After listening to doom and gloom economic reports for the past few months, we decided we would try to do something to brighten our little corner of the world. Here's our official press release about it: Construx Software has designated 25% of its public seminar seats free of charge to software workers who have been laid off. Construx seminars help software professionals improve their technical and managerial skills. Seminar attendees will be more effective when they reenter the workforce. Construx...(read more)

What Marketing Requirements Look Like

posted: 16 Sep 2009

I recently went with trepidation into a class with Pragmatic Marketing called “Requirements that Work”, part of their Practical Product Management series. Marketing professionals have been my foil for bad requirements for years, and here I was, ready to hear from the experts themselves how marketing, and not engineering, should be making all the decisions about the software.

I admit, I expected Pragmatic Marketing to recommend that marketers write requirements the way technical people write requirements and to continue to blur the distinction between requirements and design. So I was a little surprised and pleased when Pragmatic Marketing made the line between requirements and design sharper and clearer than I have seen in ages.

In fact, they strongly suggest that marketing people avoid writing requirements using the traditional, “The system shall…” format that has been taught to technical people for years. So what did Pragmatic Marketing recommend? They stated that a requirement is composed of three parts: a persona, a problem statement, and a use scenario.

A persona is an instantiation of a demographic. By giving the demographic/stakeholder-class/user-group a set of personal details, the marketer is able to give a face to a problem that needs to be solved by their product. It seems that relating to a person (even a fact-based fictitious one) improves the analysis and communication of a requirement.

One great insight from Pragmatic Marketing (great because I have said a similar thing for years) is that any given release of the product can typically delight only one persona; for the rest of the personas (i.e., customers), the goal is simply not to upset them. When you try to make all the customers/personas happy, critical tradeoffs become difficult, if not impossible, and the usual result is a poor marketing message.

The problem statement is a brief or partial sentence that gets at the heart of a problem facing a persona. Tom Gilb would use the term “gist” for the same type of thing. Some examples from the class include: “Collecting customer inputs”, “New laptops come with Vista”, and “Where is the best picture?”.

The problem statement isn’t a requirement in the sense of “you must do this” but an issue or opportunity that is out there that the market has spoken about. Part of prioritizing the requirement is to collect market evidence—the number of times that the marketer has seen the problem/opportunity in the market.

The third bit is the use scenario. This is a sentence or two that puts the problem in a context often encountered by the persona. An example from the class: the problem, “Where is the best picture?”, is followed by the use scenario, “Sally takes hundreds of photos every month and doesn’t have time to catalog them individually. She wants to find photos taken this year that include her whole family for a greeting card.”

Notice that the problem statement is only the essence (or gist) of the overall use scenario. Also note that we have no idea how to solve it or even if it can be solved. It is clearly a statement of the problem faced by the persona. The use scenario may be a somewhat generalized issue to the market, but it is specific in relation to the persona.
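
If you wanted to capture this three-part structure in a lightweight tool, it might look something like the sketch below. This is a hypothetical data model of my own, not anything Pragmatic Marketing publishes; the field names, and the market-evidence tally from the prioritization discussion above, are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Persona:
        # A demographic given a face: a name plus the details that matter.
        name: str
        details: str

    @dataclass
    class MarketRequirement:
        # The three parts described above: persona, problem statement
        # (the "gist"), and use scenario.
        persona: Persona
        problem: str          # statement of the problem, not a solution
        use_scenario: str     # the problem in a context the persona encounters
        evidence_count: int = 0  # times the problem was observed in the market

    sally = Persona("Sally", "takes hundreds of photos every month")
    req = MarketRequirement(
        persona=sally,
        problem="Where is the best picture?",
        use_scenario="Sally takes hundreds of photos every month and doesn't "
                     "have time to catalog them individually. She wants to find "
                     "photos taken this year that include her whole family for "
                     "a greeting card.",
        evidence_count=12,  # hypothetical market-evidence tally
    )
    print(f"{req.persona.name}: {req.problem} (evidence: {req.evidence_count})")

Note that nothing in the structure says how the problem will be solved; the solution space stays open.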

We might be tempted to write that use scenario as a series of “system shall” requirements:

  • The system shall automatically categorize photographs
  • The system shall provide a search facility to select photographs by category
  • The system shall rank photos within a category based on photographic characteristics (e.g. lighting, subject to field ratio, smiles, etc.)

However, in doing so, we have started to make some solution choices, even if we didn’t mean to. We seem to imply we need a category-based scheme. Why couldn’t we use some other approach to solve her problem like face recognition without a category? Why use a search? Couldn’t we have the system present its choice of best photograph? Pattern recognition from previous choices? The point is, we have a bit of design creeping in there.

But that brings us back to a really fundamental question: What is a requirement? Is it a statement of the problem (what vs. how), or is it something—anything—that is “required” of you by somebody else? I choose the former (see my second blog entry, called Space Cadet); others choose the latter. At least at this point, I have the marketing experts on my side.

Transitioning to Scrum: Selecting the Product Owner

posted: 16 Sep 2009

Many teams moving to Scrum have questions about the Product Owner position. Is the Product Owner a member of the Scrum team? What role does the Product Owner play in the day-to-day life of a Scrum project? How do we map current functional roles to Scrum roles, specifically with regard to the Product Owner? Who should we select as our Product Owner?

Let me start by saying the Product Owner is perhaps the most important role in Scrum… something you don’t often hear from Scrum folks. The Scrum process defines the Product Owner as the person responsible for the team’s return on investment, i.e., the Product Owner will be judged by whether the project’s outcome justifies its cost. Another, more direct way of saying this is to identify the Product Owner as “The Single Wringable Neck,” the person whose head is on the figurative chopping block if the project fails. Interestingly, the Project Management Institute has a similar definition for project managers: the person assigned by the performing organization to achieve the project objectives (PMBOK Guide, 4th Edition, p. 13). Based upon both the Scrum and PMI definitions, then, the Product Owner’s responsibilities are equivalent to those of a project manager. This is one reason why I have found knowledge and competence in project management to be a key ability of a successful Product Owner. Selecting a Product Owner therefore naturally starts with identifying candidates who have that knowledge.

There are other skills and abilities required, including sufficient technical knowledge to understand the problem domain and the technical aspects of proposed solutions. A successful Product Owner doesn’t need to be the best software developer on the team, but he does need to understand the technical decisions well enough to know whether they make sense. Eliminate candidates who lack sufficient technical ability.

Be especially careful not to view the Product Owner as the project’s ‘driver.’ Scrum is about empowered, self-managing teams that are led (pulled) rather than driven (pushed). Scrum means never having to be driven. Candidates who can’t or won’t embrace the servant-manager philosophy, or who insist on directing the team, should be disqualified. Nothing will cripple your Scrum implementation more than a de jure Product Owner who sees himself as a de facto team manager.

By now, you should have narrowed the candidate list to those who have demonstrable technical, project management, and interpersonal skills. Who among the remaining candidates has the ability to understand the customer, the market, and the business? Are there candidates with entrepreneurial experience? Owning a business, starting a business, or working in a leadership position at a startup, where everyone wears a multitude of hats and understands that making money is the true test of whether or not the customer is satisfied, is invaluable. People with this experience understand what really matters because they’ve lived it. People who have had experience in customer support, QA/test, or sales and marketing at larger companies may also have an understanding of the customer.

Now you should be down to the final few candidates. I like the Toyota Production System concept of a ‘Chief Engineer’: someone with extensive technical, project management, and business knowledge who leads the team to successful project completion. We’re talking about candidates with a software development background and successful project management experience, who have dealt effectively with the customer and understand business realities, and who have the skills and experience to act as a proxy for the customer and other stakeholders. Which of the remaining candidates most closely matches this description? If there is no one, have any of the candidates shown promise that they can develop to this level? If the answer is still “No,” then you may want to hire someone with the necessary talents and skills to fill the position.

Travel Restrictions and Offshore Development

posted: 16 Sep 2009

One benefit of my job is that I get to talk to people from hundreds of companies every year, and the people I work with talk to even more people. In recent discussions I've seen a disturbing trend emerging -- disturbing because it's so common and because the effects are so easily predictable. With the economic challenges many companies are facing, many companies have imposed travel restrictions that in practice are working out to "zero travel." I understand the value of this as...(read more)

Scrum Smells: Going Along To Get Along

posted: 16 Sep 2009

A question was posed on one of the Scrum discussion forums recently about changing the sprint backlog during a sprint. The scenario was as follows: the sprint has been running for two days when the Product Owner comes to the daily standup and wants to replace a committed sprint backlog item with one of equal size from the product backlog. What should the Scrum Master do?

As this was a real question posed to the forum, I wondered how often the Product Owner comes to the team two days into a sprint to rearrange the sprint backlog. What a stinky Scrum smell this is.

One of the major hindrances to getting something done is the Tyranny of the Urgent. The reason that Scrum keeps the committed sprint backlog inviolate is to give the team the ability to keep their heads down for the sprint instead of constantly being whipsawed by context switches forced on behalf of the crisis du jour. If the level of uncertainty in an organization is very high, the proper response isn’t to relax Scrum’s rules by allowing the sprint backlog to change during the sprint; it is to shorten the sprint duration so the organization (and the Product Owner) know they’ll have to wait no more than two iterations to get a specific item.

In my experience, the importance of changing the backlog during a sprint is almost always greatly exaggerated. (The 'importance' is usually due to someone outside the team dropping the ball on a commitment, and trying to cover it up by breaking the Scrum process.) Scrum has a mechanism for ascertaining if it should be done, however, and that is by forcing the Product Owner to make the call on whether or not to abandon the current sprint and replan with the updated backlog. Yes, this will cost up to a team-day. But following the Scrum process is exposing your problem, not hiding it, and there is a problem here. Are you going to address it, or sweep it under the rug by enabling bad behavior?

I know, some people will wonder what the big deal is. Why not just go along to get along? My response is to reiterate that Scrum is as much about improving the organization as it is organizing project work. Why waste a great opportunity for improvement? What I like about this particular scenario is how it shows whether you truly understand that Scrum is really a tool for improving the organization instead of just another way to manage a project or set up the workflow. Sure, you can go along to get along by swapping out backlog items and getting the Product Owner off your back, just as you can put a penny under a blown fuse and get the lights back on, but you haven't solved the problem. In fact, you've made it more likely to blow up.

Others will wonder if this somehow violates the Agile principle of valuing responding to change over following a process. I’m all for responding to change. What I’m against is abandoning the Scrum process, and I’m suggesting that there are very good reasons for the few simple rules that Scrum has. One of those rules is that the sprint backlog, with its corresponding commitment, is locked at the moment commitment is obtained and the sprint starts. Scrum does allow one kind of change to the sprint backlog during an iteration: abandoning the sprint, replanning, and starting a new sprint. As an aside, if you’re following Scrum you shouldn’t be dropping items from the sprint backlog during the sprint either; that is just a way of hiding what is truly happening. Instead, during the retrospective the team should discuss why committed sprint backlog items couldn’t or wouldn’t be completed during that sprint, or why the team undercommitted (if new items are continually being dragged into the sprint backlog), and then brainstorm to arrive at a solution to this problem.

Stephen Covey, of Seven Habits fame, says that "courage is integrity at the moment of choice." Scrum, or any other way of doing things, won't work if the people involved lack courage, if the prevailing philosophy is "Can't we all just get along?" instead of "Do the right thing." The Scrum Master needs to demonstrate some integrity in this situation and require the Product Owner to follow the Scrum process. Either abandon the sprint or leave the backlog alone. Then, during the retrospective, a discussion on why the backlog needed to be changed should be held with the goal of trying to figure out what caused this (genuine unforeseeable emergency, bad PO planning, someone dropped the ball on a commitment, etc.) and how to prevent this in the future. Otherwise, expect constant pressure to change the sprint backlog.

Relationships Rule

posted: 16 Sep 2009

Getting better at software development requires change, and change is hard and unpleasant. Most of all, it doesn’t seem like we actually change. We talk about change, we start change initiatives, we pay people to teach us new ways, we may even read books, but at the end of the day we seem to be unchanged (except a bit poorer). Do we need to change the way we go about making changes?

I started addressing this question in my earlier post called Logic Loses. I described how facts, fear, and force, while giving the appearance of short-term change, did not seem to result in long-term desired behaviors. If that doesn't work, what does?

In Alan Deutschman’s book, Change or Die, he counters the approach of facts, fear, and force with his own three words (his start with the letter R): relate, repeat, and reframe. Let’s start with Deutschman’s short definitions for his Rs.

  • Relate: You form a new, emotional relationship with a person or community that inspires and sustains hope.
  • Repeat: The new relationship helps you learn, practice, and master the new habits and skills that you'll need.
  • Reframe: The new relationship helps you learn new ways of thinking about your situation and your life.

The best-selling book on leading change is, of course, John P. Kotter’s Leading Change, where he lays out his eight-stage process* for creating major change. The second step in Kotter’s process is about creating a guiding coalition. This mirrors Deutschman’s first R: relate. Change seems to require that somebody outside ourselves believes we are capable of making the change. That somebody, or some group, gives us a glimpse of what we can be and makes it compelling to us, not only because of their facts or the fears they play on, but because our relationship with them is as compelling as the data.

Change, it seems, needs another human being whom we can connect with but who is not like us. The connection need not be affection alone; respect, awe, admiration, and the like seem to be equally fine kinds of connection. Fear as the sole basis for the connection seems insufficient for long-lasting change (though it is pretty darn good for short-term change!). The bigger and tighter the connection, the more likely you are to change and the more likely that change is to last.

But it can’t just be one of our mates who thinks just as we think. Just as in Kotter’s third step, they need to have a vision of what we can be that we ourselves cannot quite believe yet. “You can automate those test cases.” “No, we don’t have to constantly work massive overtime.” From small things to massive change, our connection to them and their belief in us gives us the hope, and therefore the energy, to make the change.

But that connection cannot be fleeting. Deutschman’s second R, repeat, echoes Kotter’s fifth and sixth steps. Practice makes change perfect (at least in the old-fashioned meaning of the word: complete), and this is the reason most cold-turkey, sheep-dip approaches fail. Doing the new thing once or twice is often not enough to create the habits of mind that will make the change last. Change needs training and coaching.

In order for the repetition to be successful it also needs to be seen to be making progress. If I'm trying to find a better way to do requirements, I need to have some signs that I’m actually doing better lest I totally give up. The signs need not be massive, just little things like a better written requirement statement or a comment from a tester that my requirements are much easier to use. I need short-term wins.

Building habit through repetition and short-term wins drives me to Deutschman’s third R: reframe (Kotter’s seventh and eighth steps). Because I see myself doing it, and doing it better, I stop relying on my connection’s vision and start developing my own. My old way of thinking, that there is no better way to do requirements work, starts to give way to the realization that I am already doing it a better way. Things I thought impossible before, or at least unlikely, now seem completely doable. Now statements like “Requirements are about the problem space and design is about the solution space” don’t seem like nonsense but like statements of deep truth.

Of course, whatever new worldview we end up in will probably need changing in the future, and we will have to go through the 3 Rs again. And at first we will try facts, fear, and force. And we will scratch our heads wondering why we cannot change.

[* Kotter’s eight steps are: 1) Establish a sense of urgency; 2) Create the guiding coalition; 3) Develop a vision and strategy; 4) Communicate the change vision; 5) Empower broad-based action; 6) Generate short-term wins; 7) Consolidate gains and produce more change; 8) Anchor new approaches in the culture]

Why Use Scrum If Change Isn't Important?

posted: 16 Sep 2009

Is Scrum really only valuable to folks who care about agility (or Agility)?

Let's say I'm running a project with defined requirements, fixed scope, a fixed schedule (firm completion/release date), and fixed resources. What advantages does Scrum offer over other project management methodologies?

How about the ability to maximize team efficiency? So, my requirements are clearly defined and I don't expect to change my product backlog based upon feedback from sprint reviews. So what? I still get the advantage of a pull system for work (scrum team members self-assign work efficiently instead of waiting for a manager or a Gantt chart to tell them what to do). I get the advantage of keeping the team from multi-tasking and other work-robbing interruptions. I get the advantages of acceptance criteria and the Definition of Done. I get the advantage of clearly knowing my status at any given time without a lot of overhead. I get the advantage of team and workflow process refinement/improvement via retrospectives. I get a simple system that lets me track work and predict when the project will be done fairly accurately. I even get the ability to reassure my customers and stakeholders that I am on track because I can show progress at regular intervals. In short, I get all of these advantages without any disadvantages... and I get a much easier project management framework to boot. The big bonus here is I get to show folks how an empowered team using a pull system can be much more efficient than the old command-and-control model while being easier to manage because they mostly manage themselves.

Sure, I can get some of these benefits of Scrum without running Scrum... but all of them? As easy as I can by just adopting Scrum?

Think what it would do to most organizations that don’t value agility (the ability to react to change) to see the other benefits of Scrum and then realize they can get all this plus the ability to course-correct without the pain and waste of throwing away a lot of suddenly-obsolete work. Being Agile means you can change, but you don’t have to. Do what makes sense for your organization.

Many people dismiss the idea of trying Scrum because their organization is more interested in predictability than agility, not realizing that Scrum allows both. If whatever process you’re following now isn’t making you happy, maybe you should reconsider Scrum for all of these other reasons.

Greetings!

posted: 16 Sep 2009

Welcome to the first post on my new software development blog!

Let me tell you a little about myself. I'm an experienced software developer, tester, program and project manager, QA manager, and development team manager, with over two decades of experience in high tech. I've worked at small startups and the world's largest software company. I've written device drivers, OS portability layers, libraries, utilities, and UI components, for environments including CP/M-80, MS-DOS, Windows and Windows CE, AmigaDOS, MacOS, PalmOS, Unix, VAX VMS, and IBM MVS/TSO. I've worked on development tools, desktop applications, and platforms. I've managed developers, testers, program and project managers, and entire development teams. And I've had the privilege of working with some very smart people at very successful companies... and with some very smart people at very unsuccessful companies.

I've learned that failure can be a great learning opportunity, and that the opportunities to turn failure into success are really within our control if we're willing to face the brutal facts, recognizing those signs of disaster that are so obvious in hindsight and then devising a solution while there is still time. Agile retrospectives are a great tool for this, as are other tools and techniques.

Many of my posts will be about what I see, and have seen, along with some general observations and perhaps answers to questions. Feel free to leave comments, or contact me through the blog or via Construx's website. 

Logic Loses

posted: 16 Sep 2009

I recently read a book called Change or Die by Alan Deutschman that has some good insights on how people change deeply held behavior. I’d like to share some thoughts inspired by (and some outright lifted from) the book.

We spend a lot of time talking about change. We want to become more agile, or we want to improve quality or time-to-market. Each one of us has something we want to change about the way we go about making software. So how do we go about initiating that change?

Most of us take the same approach. We use facts, fear, and/or force. We often start with facts. We find some source who has statistical information or powerful stories about what we want to change and we put that information in presentations, e-mails, and hallway conversations. We expect that those who hear the conclusions of our research and our brilliant analysis will quickly submit to our logic and adopt new ways.

Unfortunately, logic loses.

The reason why logic loses is threefold. First, those who are doing things that are contrary to clear logic (at least our clear logic) have great defenses. These are the classic ego defenses of denial, idealization, projection, and rationalization. The denial defense in software sounds like, “This is just the way that software is done.” People in software development denial do not believe that software can be created any better than the way they are doing it today. Of course, the kind of software they produce is unique, and the suggestions, even if the techniques work someplace else, wouldn’t work here. Idealization often occurs when the person you’re trying to convince helped create the current way of doing things. They can’t imagine any more perfect way of doing development for their shop than the way they thought of. Projection says that, while there may be problems, they are not the fault of the current way of doing things; they are somebody else’s fault. Rationalization suggests that while your idea is good, “they won’t let us do that.”

Of course, those same ego defenses of denial, idealization, projection, and rationalization are at work in us too, so that we are just as blinded about our incredibly important change as they are in their stubborn refusal to move on.

The second reason logic loses is that our insightful change doesn’t even sound logical. Telling somebody who believes the world is flat that you can sail around the world doesn’t even make sense; it’s illogical (I bet you pictured Spock saying that). Each of us has a frame of reference that helps us decide what is true and useful and what is silly and should be ignored. If our beneficial change idea does not fit within the frame of reference of the audience, it makes no sense to them. It is not a logical idea; it is silly talk. One great example of this is the estimation rule of thumb that to decrease the schedule for a software project you often have to increase the estimate. The logic behind this rule has to do with the reality of unplanned rework and is documented elsewhere. However, to somebody who just wants to go faster, lengthening the estimate makes no sense at all.

Just as in the first instance, if their ideas are outside our frame of reference, they sound to us like madmen.

The third reason is that, by my estimate, roughly half of our ideas are just plain stupid and should be rejected. Not that we think they are stupid at the time. Actually, we think the idea is quite wonderful, and it works great “on my machine,” or in my work area, or in my ivory tower. It is like creating a bit of functionality without considering any error routines or doing any testing. We may think it’s done, but it is not done-done. When we start promoting half-baked ideas, we often end up doing far more harm than good.

Another reason our ideas should be rejected is when we approach the change as an all-or-nothing total transformation. “Let us switch from waterfall to agile in one big swoop,” we might say. Or, “Everybody must start doing code reviews immediately!” More often than not, at least in my experience, a quick change-over leads to a quick change back. Rather than flip-flop around, it is best to avoid this change approach.

So if our logic loses, let’s scare them into adopting our change: the fear approach. But fear is simply negative logic, and it suffers the same issues as our positive logic. Fear can generate some short-term action, but it seldom lasts. In Deutschman’s book (remember it from the beginning of the post?), he points out that people with serious heart problems stop taking their medication—medication that they are supposed to take for the rest of their life—within one year. If I am not immediately in fear, if the pain is not currently present, then why change?

Our last trick is force. Make them do it the better way. The “because I said so” approach our parents used on us. Did the parental mandate engender genuine change on our part, or merely compliance? When out of sight or out of the house, did many of us not adopt the very opposite of the ordered direction? Not that software development is analogous to child rebellion, but forced change usually lasts only while there is an enforcer. Without changing individual frames of reference, when the lawgiver leaves, the golden calf is created.

So how do people change, given all these defensive ramparts? That is the subject for a follow-on post and for your comments below. Only don’t suggest I am wrong; my ego defenses will put you in the illogical camp :-)

State of the Practice Survey

posted: 16 Sep 2009

Construx has developed the State of the Practice Survey with the goal of better understanding which software practices really work, which really don't work, and identifying trends in practice adoption. Survey participants will receive a summary report of the findings later this year in advance of the published report. I hope you will share your views about the state of the practices in your organization. No one outside Construx will see any of the raw data, and information you share will be presented...(read more)

Spirit of Waterfall

posted: 16 Sep 2009

It is not uncommon for me to see in blog posts, newsgroups, or presentations the comment that something is not “in the spirit of Agile.” In fact, a project team could be doing many of the practices of Agile but, if it fails, the agilist will claim that the project was not Agile in “spirit.” And I got to wondering: is that what was really wrong with the waterfall approach?

Consider it. Many of the failings of agile, or the misapplication of agile, are credited to not being in the spirit. This occurs even when, if you look down a long checklist of practices, the team appears to be doing almost all of them. However, because they were not empowered, or some other such factor, their experience is chalked up to bad execution, not bad methodology.

So maybe that is what is wrong with the waterfall approach. Sure, we have lots of practices that we were told to do, and lots of activities that flow one from another, but did we really understand its spirit? What would be the spirit of waterfall? I suggest this: the spirit of waterfall is “thinking.”

On the simplest level, the spirit of waterfall—thinking—is the look before you leap philosophy. Before you start doing something, think about it. Before I start doing design, I think about the requirements. Before I think about the requirements, I have a general understanding of what the heck this thing is supposed to be and the constraints I am under. Before I start doing code, do I have any clue about what the requirements or the design is? Did I think about it? Even better perhaps, as a team, did WE think about it?

“Thinking,” of course, does not mean solo cognitive work only. It means taking a rational approach that clearly identifies the limits of what can be known and creates ways to expose the unknown and make it known. (Flashbacks to Donald Rumsfeld are perfectly understandable at this point.)

On a more subtle level, the thinking was there as well. How much of the requirements were knowable? How much of the design was discernible? Those who ran the waterfall well (and yes, there were people who could run the waterfall approach well) were people who thought about those things. One of the big reasons many waterfall/sequential projects failed was that people didn’t think about what was knowable and turned phases/stage-gates intended as thinking/assessment points into document-signing parties. They didn’t think about what was right and, even if they did, they didn’t share those thoughts with the stakeholders and steering committees; they just made sure the documents were signed. They knew that if the documents were not signed, there was heck to pay, because the people on their steering committees didn’t want to think either. And when the thinking stopped, the spirit of waterfall was defeated.

If we accept the argument put forth by agile zealots that when you violate the spirit, no matter how many practices you perform, you aren’t really doing the method, then we could say that the vast majority of sequential/waterfall projects—even if they were executing a lot of the practices—were really not doing waterfall. Oh, it may have looked like waterfall, smelled like waterfall, and gone around touting the waterfall name, but without the “thinking” it really wasn’t waterfall.

So perhaps the “not in the spirit” argument can give a new lease on life to the waterfall approach to software development. Probably not. “Waterfall” is mostly used as a pejorative in the software development community. However, I say, “Long live the spirit of waterfall!” We could all use a bit more rational thinking.

2009 ECSE Meeting Topics Announced

posted: 07 Jan 2009

The 2009 Executive Council for Software Excellence (ECSE) meeting topics have been announced. They are: January, Successful Leadership in Software Development; February, Overcoming a Legacy of Poor Quality; March, Organizational Structures; April, Accelerating Organizational Change; May, Working Effectively with the Executive Team; June, Working with Distributed/Offshore Teams; July, The Business of Software Development; August, Game Night: Software Project Simulation Board Games (Bellevue); Summer break (dial-in...(read more)

Feedback from Stakeholders – A “Done” Criterion

posted: 17 Dec 2008

Each of the previous “done” criteria required the individual applying it to make a judgment call as to the “doneness” of the work item under review. This gives the criteria the power necessary to determine “done” in contextual situations. However, if a person used only one of the criteria—Sufficient to Proceed, Appropriate for the Environment, or Sanity Checks—that check alone cannot give a good assessment of “done.” Together, they have the ability to guide the “done” decision. To confirm the individual judgment, it is critical to get Feedback from Stakeholders. With feedback, we can correct and calibrate our expert judgment and become better assessors of “done.”

There are two primary stakeholders I like to use when getting feedback on “done.” The first is those who supplied the information used in creating the work artifact. I want them to tell me whether I understood and correctly interpreted what they shared with me.

For example, suppose I am working on a technical design and I want the business customer to tell me that I correctly understood and interpreted their business requirements. Asking them to look over the design and give feedback probably would not give me the feedback I am looking for, as the design is fairly technical and my business customer is not. What to do?

I could, instead, work with the customer to write up a series of scenarios or stories and then walk through the design, showing how it would satisfy each scenario. This walkthrough may not be with the business customer but with other technical peers who can see if I have a viable solution to the scenario’s problem.

The second stakeholder I am after for feedback is the consumers of the work artifact. Each artifact produced on a software project should be created because other people or processes need the information in it to do their job. Even code is information to a compiler for making the 1s and 0s. The question I ask these stakeholders is, “Do you have the necessary information to do your task?”

This one can be a little dicey, since there are some people out there who may insist on so much content that they are effectively asking you to do their job. For those members of staff, you may need to step back and look at role definitions and job responsibilities. Then again, you may be much better at their job than they are, so go for it!

One of the great things about Feedback from Stakeholders is that it provides both an early and a late indicator of how effective your expert judgment of “done” is. Both kinds of stakeholder share the same early indicator: the feedback you get from the initial review. If the stakeholder tells you that the artifact is incomplete, you need to tune your judgment on the “done” criteria.

Unfortunately, sometimes the stakeholders do a “review” and say that the work artifact looks great. I put “review” in quotes since the signatures were there, but that is about it. I had a coworker once put a paragraph into a work artifact stating that all those who read said paragraph would be treated to free beer. Eight signatures later, and no mention of the beer. Not even, “You better take that out.”

Fortunately, the two stakeholders have late, or lagging, indicators as well. For the suppliers of information, I like to watch the requests for changes. If I see a lot of changes come through, my process of getting information out of the supplier didn’t really do its job. Either I missed something or, just as bad, I forced them to make decisions they were not ready to make. Either way, I was not “done”; I was just pretending.

For the consumers of my work, I like to watch defect counts. If I had the right information but presented it in a way that led them to make mistakes, I probably wasn’t “done” either. I will want to do some analysis to make sure the consumer isn’t incompetent, but I start by assuming they are competent. It is my job to give them information they can use, not a pile of stuff they have to dig through just to find (or not find) the information they need.

The early feedback and the late feedback allow me to tune my “done” criteria. Sufficiently Complete, Appropriate for the Environment, and Sanity Checks are all tied together with Feedback from Stakeholders.

Now I would love to get your feedback on these “done” criteria.

Sanity Checks - A "Done" Criterion

posted: 05 Dec 2008

For every work artifact we create there is often a short list of attributes or questions that can help us determine if the artifact is done. This short list reminds us of classic patterns that have risen to become accepted truth and of classic mistakes that continue to dog us. This list of questions or attributes is what I call a Sanity Check: a quick look to see if the work artifact is done.

For example, if I am remodeling a kitchen, the question, “Do the stove, sink, and refrigerator form a triangle pattern?” should be a Sanity Check. I may choose NOT to have my refrigerator, sink, and stove in a triangle, but I had better have a good reason why not.

When I was a Quality Assurance Representative for the Department of Defense, one of my tasks was to watch the manual testing of circuit boards. Those who have placed the probes and watched the oscilloscope know what a tedious job this can be. A Sanity Check called the “smoke test” often preceded this long series of manual tests. If you applied power to the entire circuit board and it caught on fire or smoked, then don’t bother with the rest of the tests.

A key point here is that a Sanity Check can be quickly applied and the results determined on the spot. Scanning a list of questions or desired attributes should take only a matter of minutes, not hours. The items on the Sanity Check list should represent clear conditions whose violation puts into question whether the item under review is really done.

However, failing a Sanity Check may not always be as clear as catching on fire. For example, one Sanity Check item for a software requirement is that it has no ambiguity. While this is a worthwhile goal, I don’t believe that “no” ambiguity is a good thing. We already have a profession of people who try to write things with “no” ambiguity. They are contract lawyers, and nobody really understands what the heck they have written. Here, the Sanity Check is more concerned that the people reading the requirement will have the same understanding as the people writing it. This is more of a judgment call than a hard and fast rule.

Sanity Checks are useful both as a done criterion and as a creation guide. Sanity Checks in the form of checklists can be used for peer reviews. Each reviewer uses the checklist as an aid to help find potential mistakes in the work artifact. By reviewing the checklist of twenty or so Sanity Checks prior to the start of a review and having it lying in sight while reviewing, a reviewer can greatly increase not just the number of potential mistakes found but also the number of categories of mistakes.

The author can also use the checklist as they create the work artifact. By consulting the checklist, the author can use that knowledge to help self-review and prepare the deliverable for other eyes. It can help the author side-step those classic mistakes that have haunted others.
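
As a rough illustration of a Sanity Check as a short, quickly applied checklist, here is a minimal Python sketch. The checklist items loosely echo the requirement examples above; the list and function names are hypothetical.

    # Hypothetical sketch of a Sanity Check: a short checklist applied in
    # minutes. The items are illustrative; a real list should be customized
    # to the classic mistakes your own organization makes.
    REQUIREMENT_SANITY_CHECKS = [
        "Will readers and writers share the same understanding of it?",
        "Is a measurable scale identified where one is needed?",
        "Is it testable as written?",
        "Is it tagged/traceable?",
    ]

    def run_sanity_check(artifact_name, answers):
        # answers: list of booleans, one per checklist item. A failed item
        # does not prove the artifact isn't done, but it puts "done" into
        # question and demands a judgment call.
        failures = [q for q, ok in zip(REQUIREMENT_SANITY_CHECKS, answers) if not ok]
        if failures:
            print(f"{artifact_name}: 'done' is in question:")
            for q in failures:
                print("  -", q)
        else:
            print(f"{artifact_name}: passes the sanity check")

    run_sanity_check("REQ-42", [True, True, False, True])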

While the core for many Sanity Checks for a given work artifact will be the same across companies and industries, there should be customization for each application. There are mistakes that your organization makes that others have never seen.

As with many things, Sanity Checks work best with experienced staff. Sanity Checks are mere statements that are designed to be interpreted by people. Their brevity requires a somewhat knowledgeable mind to fill in all the gaps and to make the final decision of the Sanity Check’s applicability.

Like the Sanity Check that blog entries should be less than 1,000 words.

Appropriate for the Environment - A "Done" Criterion

posted: 31 Oct 2008

When we create a work artifact on a software project, we usually create it with the understanding that somebody else will need to use it. The who, when, and where of that somebody has a huge impact on whether or not we can consider our work “done”. To be done, we must determine if the work is Appropriate for the Environment in which it will be used.

Think of two teams: Team A and Team B. Surprisingly, both are working on the exact same product, using the same technology, and the same development lifecycle. Team A has been working together as a team for about five years. They are co-located and have deep knowledge of the technology and the customers.

Team B, on the other hand, is a bunch of new hires straight out of college. They still have not all moved to headquarters yet so most of them are telecommuting. Team B has a basic understanding of the technology and has heard of the customer.

Since both teams are working on the exact same product, you might expect them to produce the exact same work artifacts. In fact, you might be able to take a work artifact from one team, substitute it for the same artifact on the other team, and nobody would notice.

Absurd you say? Rightly so, but why?

You point out that Team A can probably take a bunch of shortcuts that would be dangerous for Team B. Why can Team A take shortcuts? Because I made Team A different from Team B in the two areas that are critical for being Appropriate for the Environment: Competency and Distance. The more competent you are, the less detail you have to have in your communication to get your message across; the more distance, the more detail you need.

Competency here is not about how smart a team member is, it is how much knowledge they have about the technology being used and how much knowledge they have about the needs of the customer.

A student in one of my seminars was a great example. He had a PhD in Geophysics and went on to get a Master of Software Engineering. He was responsible for creating software for other geo-scientists. During the seminar he stated that when he gets a requirements specification from the non-software scientists, he scans it, ignores most of it, and builds what they need. His customers were delighted. Since he understood both the technology and the customers’ needs, he would get the gist of what they were looking for and could move forward.

Contrast that with a telecommunications client that was outsourcing to Russia. The client would send a specification to the Russian body shop, Russia would code it to spec and send it back, and it wouldn’t work. Russia understood the technology but not the customer. The client put more detail into the next specification it sent to Russia, but it came back with the same problem: exactly to spec but not useful. Only when they reduced the detail and sent a designer (somebody who knew the customer) along with the specification did they get useful software.

As competency–knowledge of the technology and the need–increases, the need to put detail into communication decreases. Team A, with an abundance of both, probably has far less detail in its project plans, specifications, and even code comments. Would it harm Team A to put all the detail in? Probably not directly, but much of the detail would be wasted effort and may hurt morale (taking time to produce unneeded stuff).

Distance, a second critical area, can be measured in at least four dimensions: physical distance, of course, but also time, culture, and team size. One of the key lessons that organizations have learned with distributed development is the inability of work artifacts alone to solve the distance factor. For example, bringing the team physically together at the start of a project is considered a best practice by many companies. Leading companies invest in telecommunications and travel to deal with distance issues.

It is somewhat ironic that we see a rise in Agile programming practices with an emphasis on little to no distance in the team and at the same time a rise in distributed development (some outsourced) that has a high distance factor. Enter the somewhat oxymoronic “Distributed Agile”. Sure, we can bring collaboration solutions to help lower distance issues, but Agile at a distance requires higher detail in our work products than co-located Agile.

The size of the team has a distance impact. As the team grows, it becomes more and more difficult for each member to share a common set of information. While a team of 100 may be co-located on the same floor of the same building, it is unlikely that they actually share a common understanding of the work at hand. In a sense, the large number of communication links creates distance.

Both Team A and Team B have to deal with the last two critical areas of Appropriateness for the Environment: criticality/uniqueness and legal/regulatory. If there is a law that says you must, well, there you are. Criticality/uniqueness is a bit more subtle. If an item in the work product is critical for success or is different from what we have done before, then it is worth the time to put in more detail. If the detail is left out, even competent people may assume that it is business as usual.

An example comes from some work I had done with a contractor to remodel a bathroom. The contractor, assuming business as usual, had quoted and planned to install a standard 24” towel bar. I had to make it very clear that I wanted three towel hooks (three daughters) instead of a towel bar.

To say a piece of work is “done”, we must consider if the amount of detail is Appropriate for the Environment. To do that, we must look at

  • competency in the technology and the customer needs,
  • distance: physical, time, cultural, and team size,
  • legal/regulatory requirements,
  • and the criticality/uniqueness of the item.

How do you determine the right level of appropriateness? That is far more of an art than a science. Often it is a matter of trial and error until you find the right spot. However, as a thought exercise and a discussion point, it can help you contextually determine when something is “done”.
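
If it helps to make the thought exercise concrete, here is a crude scoring sketch in Python. Every scale and weight in it is invented for illustration; as noted above, this judgment is far more art than science.

    # Hypothetical sketch: a crude "appropriateness" thought exercise.
    # Scales and weights are invented; the result is a discussion seed,
    # not a rule.
    def detail_needed(competency, distance, critical_or_unique, regulated):
        # competency, distance: 1 (low) to 5 (high).
        level = distance - competency  # more distance -> more detail;
                                       # more competency -> less
        if critical_or_unique:
            level += 2   # don't let people assume business as usual
        if regulated:
            level += 2   # the law says you must
        return max(level, 0)

    # Team A: high competency, low distance -> little detail needed.
    print(detail_needed(competency=5, distance=1, critical_or_unique=False, regulated=False))
    # Team B: low competency, high distance -> much more detail needed.
    print(detail_needed(competency=2, distance=4, critical_or_unique=False, regulated=False))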

Like this post.

Sufficiently Complete - A "Done" Criterion

posted: 10 Oct 2008

A helpful way to think about a software project is to see it as a series of decisions about what problem or opportunity the software will solve or implement and about the solution itself. Theoretically, the decisions start out large and general and get refined and detailed as the project progresses. Code becomes the last place to make decisions before turning the work over to the compiler. Of course, in reality, software projects are more likely to see almost random decisions made at multiple levels of detail. At least most projects still tend to wind up at that last decision point—code.

With a serial decision making model of a software project, one criterion in defining “done” is to ask if the work item under consideration is sufficiently complete to make the decision the team or business needs to make at a particular point in time.

Here is an example. Let’s say I am at a point in the project where I need to make a reasonable commitment to the cost and duration of the project. What would need to be “done” in order for me to do that? A simple answer would be the requirements, the design, and some sort of staffing plan. But do I need all the requirements done and all the design done? Probably not. If I had

  • identified all critical features with their key non-functional criteria (perhaps with critical out-of-scope features identified as well)
  • selected an overall architectural approach
  • worked out key design elements
  • secured commitment from critical staffing resources
  • created a list of top risks

I could probably make a reasonable commitment. I do not need all the requirements completed to the last dotted “i”. I do not need every part of the design worked out in advance. I do need the work to go to a certain level of depth and refinement, where the information is sufficient to make the decision at hand.

Sufficiently complete is about creating the information needed to make decisions at different points in the project. Could I have done all the requirements work before making a cost commitment? Sure, but much of it would be nice-to-have rather than truly needed for the decision. Also, given that much of that requirements detail will be out of parity with other work detail (like design), it is highly subject to change and may mislead the decision at hand. Think of the sufficiently complete criterion as creating just-in-time information.

The question you must ask to determine if a work item on a software project is sufficiently complete is, “What decisions do we want to or need to make on this project?” The decision points are often closely related to the software development lifecycle you have chosen. For a sequential project, I might need to decide whether the requirements are sufficiently complete to start design. On a more incremental and iterative project, I might need to decide whether a story would fit within my iteration length. To make either decision, I have to do some requirements work, but not all the work. The requirements work will be sufficiently complete when I can decide that the story will or will not fit the iteration (a little design work will be needed too) or that my risk in starting design on the sequential project is low enough to proceed.

Some common decisions that any project must face are

  • Why put our effort on this project rather than some other project?
  • What problems/opportunities do we want to address with this project (and which ones do we not want to address)?
  • Who should be a part of this project?
  • What technologies/strategies should we bring to bear on this problem/opportunity?
  • Does this project still make sense given the business case and what we know now?

And we don’t make these general decisions once but over and over again at various levels of detail or abstraction. At each questioning, some amount of work needs to be accomplished to get the information to make the decision.

Once you have found your decision points, identify what you must know in order to make that decision. When you have identified what you need to know, you then have the information to judge if your work is sufficiently complete.
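
A minimal sketch of that idea, loosely following the cost-and-duration example above; the entries and names are hypothetical.

    # Hypothetical sketch: decision points mapped to the information needed
    # before the supporting work is "sufficiently complete".
    DECISION_POINTS = {
        "commit to cost and duration": [
            "critical features with key non-functional criteria",
            "overall architectural approach selected",
            "key design elements worked out",
            "commitment from critical staffing resources",
            "list of top risks",
        ],
        "start design (sequential project)": [
            "requirements complete enough that design risk is acceptable",
        ],
        "accept story into iteration": [
            "story estimate fits the iteration length",
            "a little design work done",
        ],
    }

    def sufficiently_complete(decision, items_done):
        # Work is sufficiently complete for a decision when every piece of
        # information that decision needs is available.
        needed = DECISION_POINTS[decision]
        missing = [item for item in needed if item not in items_done]
        return (not missing), missing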

New White Papers

posted: 02 Oct 2008

We've recently posted a few new white papers on our website, along with some existing papers. These are free to members (and membership is free). "10 Keys to Successful Scrum Adoption": Scrum is a project management approach for Agile software development and is the most commonly adopted Agile approach in the industry today. Construx has worked with hundreds of organizations to implement Agile approaches including Scrum. We have helped numerous organizations to adopt the core principles of Scrum...(read more)

In Defense of the Bill Gates / Jerry Seinfeld Ad #

posted: 24 Sep 2008

Say what you like about the new Bill Gates / Jerry Seinfeld ads, I have to approve of Bill's choice of bedtime reading. He's reading from Section 18.2 of Code Complete 2. http://www.youtube.com/watch?v=gBWPf1BWtkw I thought I was the only person who read Code Complete 2 aloud to put their kids to sleep!...(read more)

Software Executive Summit 2008 Rapidly Approaching

posted: 09 Sep 2008

After Labor Day most of my focus goes into our annual Software Executive Summit. We are now in the final registration period -- with a $1000 public seminar voucher bonus for people who register by September 15. I'm very excited about the speaker lineup this year. In addition to me, Martin Fowler, and Ken Schwaber, we have several very interesting industry speakers. Mike Morrissey is VP Infrastructure at RIM, where he's responsible for the software that keeps all the Blackberries running...(read more)

Defining "Done"

posted: 08 Sep 2008

In software development, like many other areas of life, we need to decide when some item of work is done. The decision of "doneness" has wide impacts, as under-done creates defects, downstream rework, and lost opportunity costs, while over-done wastes time and resources and incurs its own lost opportunities.

To be even more critical, in my review of documents from hundreds of clients I find that work items are often under-done in important areas and over-done in trivial ones. That is, the document cover, table of contents, document purpose statement, and sign-off areas have been vetted to precision. However, the requirement, design, test plan, or code contained within has defects both minor and major.

This may be explained by human nature, as the trivial parts can easily be checked and confirmed. Committees or teams chartered with creating common processes and practices occasionally find that the only place where they can garner agreement and claim success is in the trivial. One instructor from my past called these the blah-blah pages; they just seem to go blah, blah, blah and not say anything really important.

What about that other part, the important part? Why can't the committees or teams which gain success on the trivial parts garner the same agreement here? Well, I think the answer lies not so much in human nature as in the nature of the problem. The issue is that the important stuff in software development, as in many parts of life, is contextual. What is going on in the project, the team, and the organization at the moment when the work artifact is completed all affects the decision of done. You can't really spell out in advance what done looks like.

For example, let's look at the requirement written on a story card, "Make it faster". If I were to consult my requirement books, articles, and heck, even the class I teach, all would proclaim "Make it faster" a woefully inadequate and a completely NOT done requirement. Way too much ambiguity. No scale identified. Not tagged adequately. Not testable as it stands. And the list could go on. This requirement is doomed to cause a lot of defects and angst.

However, on my imaginary project where this story card has been written, the small team has been together for six years and through four releases. The story was written shortly after the entire team had witnessed the prototype of the fifth version perform reasonably well but much slower than the fourth release did. Having a well-defined target customer understood by every team member, the entire team knew what it would take to make that customer happy. For this team, the requirement "Make it faster" is in fact done. It is "good enough" to get the team to focus on the right work to the right level. There will be no defects or angst.

So we can't come up with a clear, complete, consistent definition of done for the parts of software development that really matter. Faced with this challenge, our committees and teams often take one of two paths. The first path is to create the "mother of all templates," put in everything and every practice they can think of, and give direction (often in small print, with dire consequences if actually attempted) that the template may be tailored. This offer to tailor is seen as the compromise to the reality of contextuality. Unfortunately, the compromise is rarely exercised, as most implementers of templates know that if they do it all—make it over-done—then the process police will give them their blessing and all will be right in their world.

The other path that committees and teams often take to deal with not being able to define done is to slip into a "father knows best" syndrome. The person with the most experience in a given area (even if that means the recent hire who is the only one who claims to know the new technology) gets to define "done". So the entire team starts to do what the most experienced—or the loudest—person on the team does. Occasionally, like any flock or pack, there is a fight for dominance or pecking order. Most of the time everybody does it the same way, which, by definition, fails the contextuality test.

Given the two paths I have seen, what is a committee or team to do? Contextuality demands that doneness can't be defined ahead of time, but the costs of not being done are so high. The answer, I believe, is not in defining "done" but in defining how to determine "doneness" within a context. I call the process I use my "good enough" criteria. That is, I have four criteria I use to help me decide if the work artifact is done to a level that is good enough for what the project needs.

The four criteria are

  1. Sufficient to Proceed. Is the work at a level where the next person who must take it up has what is needed to do their job?
  2. Appropriate for the Environment. Are the people who take up the work likely to understand it?
  3. Sanity Checks. Has the work committed a classic mistake that can easily be detected by the review of a short checklist of critical attributes?
  4. Feedback from Stakeholders. Do the critical stakeholders tell me that it is OK?

I find that using the combination of these four criteria gives me insight into how done the work artifact is, while remaining fully contextual. Process standardization zealots can take heart in the sanity checks, and experience anarchists can rejoice in the feedback.
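
If it helps to keep the four questions in one place, here is a minimal sketch in Python. It adds nothing to the criteria themselves; each input remains a human judgment call, as argued above, and the names are hypothetical.

    # Hypothetical sketch: the four "good enough" judgments side by side.
    def good_enough(sufficient_to_proceed, appropriate_for_environment,
                    passes_sanity_checks, stakeholder_feedback_ok):
        checks = {
            "Sufficient to Proceed": sufficient_to_proceed,
            "Appropriate for the Environment": appropriate_for_environment,
            "Sanity Checks": passes_sanity_checks,
            "Feedback from Stakeholders": stakeholder_feedback_ok,
        }
        open_questions = [name for name, ok in checks.items() if not ok]
        return (not open_questions), open_questions

    done, still_open = good_enough(True, True, False, True)
    print(done, still_open)   # False, ['Sanity Checks']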

In future entries, I will explore each of these four criteria. Until then, I am anxious to hear how you define "done".

That's it, I'm done... for now.

Lights... Camera... Arrrgg!

posted: 03 Aug 2008

There seems to be a third thing certain in life besides death and taxes: the moment some moron uses the product or software that I have been working on, they are going to do something stupid. My brilliant work of pristine intellectual purity, which functions just the way I want it to, will behave like a brakeless car in the hands of a drunken driver; it is going to crash.

I don't know about you, but when I release a product I have been working on, I want it to work. I want it to work so well that people say things like, "How did the human race survive without this? Why, there have been other attempts at doing this, but this, oh my, it really puts everything else to shame." Of course, I don't tell anyone I want them to say that. I also admit they probably won't say that, because I know there is some moron out there who will do something to make me go "Arrrgg!"

I often try to outwit the morons. You'd think it would be easy. I develop the product. I test it all sorts of ways. I try to think of the things that the morons will do and make it so that they can't. It is much like the amateur play I recently was in. For the two public performances to raise money for a charity, we rehearsed for months. Some of the other cast members didn't remember their lines the same way the playwright wrote them, nor the same way more than once. To deal with that, I came up with witty and clever things to say when they wandered off the script into moron creativity. Then the big night arrived. Lights came on, curtains went up, and Arrrgg! I forgot one of my lines. I became the moron.

It is somewhat similar with my projects. More often than I desire, what I get is something like, "Well that is nice, but did you know that 'X' is not working?" A line somewhere was forgotten. You would think I would be more prepared for an initial Arrrgg! There is usually at least one. A smarter person may even try to get to the Arrrgg! as quickly as possible to get it over with. Maybe that is one of the best things about developing in short increments. You get the Arrrgg!s more quickly and they tend to be smaller. Increment, moron... uh, demo, Arrrgg!; increment, demo, Arrrgg!; increment, demo, Arrrgg! Of course in that situation we don't actually say "Arrrgg!", we say, "Oh, we will put that in the backlog (you moron) and you can prioritize it."

We can't, however, rely on short increments alone to solve the Arrrgg! problem. I know that if my play's cast had not rehearsed, it wouldn't have been just one Arrrgg! but a complete disaster. We had to do some up front work. Even improv groups work together for a long time before they are any good. So some amount of planning, some amount of up front work is required to at least limit the Arrrgg!s. Can I get the Arrrgg!s to zero with enough up front planning? Has there been a moratorium on the making of morons?

The Arrrgg!, therefore, is here to stay. The only question left then is, when do you want to experience your Arrrgg! and how big do you want it to be?

The Existential Pleasures of Flogging

posted: 03 Aug 2008

In my last post I spoke about how some moron is going to cause you to go Arrrgg! by doing something stupid with your product. Unfortunately, that appears to be a fundamental truth. I had been pondering why some moron does the stupid thing when Steve McConnell's inaugural post led me to another conclusion: it is as fun as heck to flog somebody else's product.

That is just it. Whipping and beating your own product is about as enjoyable as a warm tuna sandwich with way too much mayo. You can eat it if you have to, but you'd rather not. However, flogging some other poor soul's attempt at product perfection is a way to bring them down to their existential, earth-bound reality. When I have to flog my own work, I call it testing and I hate it. When I get to flog somebody else's stuff, well, then it is play time! I don't have to check out every part of it, I don't have to calculate coverage or trace back to every requirement, I can pick and choose the spots that seem interesting; the spots that are most likely to cave. I can invoke moron creativity and try clicking on that button while holding down the print screen key just because I can.

I had one of you flog me recently. That person (who will remain nameless but I will call Bruce) attempted to use this site's email system to send an email after he had 1) composed a system email 2) opened up a new tab and 3) logged off the system. So, surprise-surprise, when he tried to send his email, the system said something like, "You need to be logged in, you bozo." It probably didn't say the "bozo" bit but it should have. Then -- as he writes me oh so formally -- the system had the audacity to return him to a blank email form after he logged back in. "It should have populated the system email with the original text," wrote Bruce-who-is-nameless (I guess in some other email system). I personally am not sure how, since there was no database connection to save the text but, hey, this is where the existential pleasure of flogging comes in.

When I flog, it is not about what is best for some undefined customer or world peace or the improvement of humanity. It is doing what I want to do in the way I want to do it. It is a fully self-focused, down-to-earth, gritty reality of my own best interests when I flog. I don't have to limit it just to software or products, I can flog away at any personal injustice that I deem worthy of my attention. I am sure that many governmental rules are the result of flogging. I recall that a once mighty sports stadium in Seattle, the Kingdome, had to change its railings because some drunk climbed up on them and fell off. That is flogging at its finest; morons do something stupid so we all need to change. Think of the overload of politically correct speech and you can see flogging at work.

The obnoxious point is that it would be better if the system populated the email, the Kingdome didn't kill people (even if they are unbelievably stupid), and we all respected each other. A good flog isn't wrong; it isn't even necessarily petty (though it often is). It is, however, the exception rather than the rule. It is what happens to the one rather than what happens to the many.

I believe it was the comedian George Carlin who pointed out that all people driving faster than you on the freeway are maniacs and those driving slower than you are idiots. It is the rationalization that *I* am the special one that brings the true pleasure of flogging. It reflects my experience with the world well.

Only, don't flog my products, kay?

PEZ Development

posted: 03 Aug 2008

I was teaching an Agile seminar recently when the image of a PEZ candy dispenser popped into my head. Why, PEZ candy, I thought, is just like an Agile project. You work things in priority order by taking them off the top of a stack of similar-sized bits of work. We know that they are of similar size since we broke down the bigger ones until they at least fit into the iteration length. Like PEZ, the product owner can rearrange the candy flavors however they like until I put one into my mouth. Then I either spit it out or eat it; there's no changing the flavor.

This metaphor may be the perfect thing for solving the Agile name problem. A few of the "founders" of Agile have suggested that the name "Agile" was not the best choice. I mean, who really is saying their development methodology is "rigid"? Also, there isn't a good name for the opposite of Agile. Boehm uses a term like "plan-driven". Others have used the term "traditional". I have used the idea of "deterministic". But if I rename Agile to "PEZ Development", that opens up all sorts of useful comparisons.

Waterfall can become "Jawbreaker Development". One big, hard lump that, no matter how long you suck on it, never seems to go away. Of course, we all know somebody who said that they actually finished a jawbreaker in their lifetime. Sure they did. I think they just licked at it for a while before it got lost behind the clothes washer. As kids, most of us took out a hammer and broke our jawbreaker. It was much easier to consume as a chunk here and a splinter there. Once we got tired of it, the rest could be thrown away.

Spiral can become "Tootsie Roll Pop Development". You start out with the hard stuff. By slowly spinning it around and around in your mouth, you take care of all the hard things that can break your teeth. Once that is gone, the soft, easily consumed center is quickly dispatched.

All the well-run sequential models (staged delivery, design to schedule, etc.) are "Chocolate Bunny Development". Of course, here I am referring to the large, solid chocolate bunnies given in the spring at Easter. You can substitute your favorite holiday's large chocolate item if you wish. Usually consumed in a series of scheduled sittings (like after dinner or, for me, whenever awake), you keep eating it until it is gone or you run out of time (your Dad takes it and eats the rest). Also a common feature of this type of candy is a decision to eat it in a certain order. First go the ears, then the feet...

The old code-and-fix model can be "M&M Development". Sure, it's fun to eat, but you just don't stop. Not even when the bag is gone; you just go get another one. There is no way to eat just a few. You also never really know when you have had enough.

Evolutionary prototyping is "Salt Water Taffy Development". Pick one up out of the pile of brightly colored taffy and taste it. Most of the time you spit it right back out because it tastes like, well, salt water. But you try again until you find one that is edible. Then you stop, since you know it is not wise to press your luck.

I think that this general renaming of software development can really help us identify the best of each set of practices. It can help us get out of the "my way is good" and "your way is bad" syndrome. By using this naming standard we can see that there is something sweet about each way of approaching software development. It can also remind us that too much software development before dinner can ruin our appetite.

And you can be the Candy Man!

Is Faster Always Faster?

posted: 03 Aug 2008

A reader of one of my books asked this question:

What is the impact of an improvement in response time on increased throughput? I develop many systems, and some have instantaneous response times, some have 10 minute response times, others have 4 or 5 hour response times. What are the thresholds at which response times affect throughput? Clearly going from 30 minutes to 30 seconds would be a big improvement. But would 30 minutes to 20 minutes also be a big improvement? [this has been paraphrased for clarity]

I think the key assumption in this statement is this: "Clearly going from 30 minutes to 30 seconds would be a big improvement." I suspect that sometimes the dynamic is actually the opposite of what the reader implied. With small changes in response time you can probably assume an increase in throughput. If response time improves from 10 seconds to 5 seconds, you can probably assume the users will get more work done. 

But with large changes in response time (in either direction), I believe you will see users adopt offsetting behaviors that can outweigh any differences in response time. For example, years ago when computers were changing from batch processing to interactive processing there were some studies that tried to assess the improvements in productivity attributable to interactive systems. Surprisingly, I don't recall reading any study that found clear evidence of an improvement in productivity in the move from batch processing to interactive processing. Instead, the studies found that programmers had adapted to the long wait times in batch processing environments and filled their wait time with other useful activities.

It's like cooking in a microwave. If I heat up frozen vegetables on the stove, I can just throw them in the pan, turn the stove on low, and go do something else for 10 minutes. If I put them into the microwave for 40 seconds, I might very well stand in front of the microwave and wait for 40 seconds. The food cooks faster with the microwave, but I might actually get more done if I use the stove.
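
As a back-of-the-envelope illustration of that offsetting behavior, here is a tiny calculation; all the numbers are invented for the cooking example above.

    # Back-of-the-envelope illustration of offsetting behavior; the numbers
    # are invented for the cooking example above.
    stove_wait = 10 * 60    # seconds of unattended cooking on the stove
    microwave_wait = 40     # seconds spent standing at the microwave

    # If the stove's wait is filled with other useful work, while the
    # microwave's wait is dead time:
    stove_useful_seconds = stove_wait   # all reclaimed for other tasks
    microwave_useful_seconds = 0        # spent staring at the timer

    # The microwave is 15x faster at cooking...
    print(stove_wait / microwave_wait)                       # 15.0
    # ...but the stove session yields 10 minutes of other work done.
    print(stove_useful_seconds - microwave_useful_seconds)   # 600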

Fred Brooks made a similar point in a keynote address at ICSE '95. He commented that he wasn't sure there had been any real gains in productivity arising from the move from character-based displays to GUIs. He said, "I used to write a draft of a letter and then give it to my secretary to type the final draft. Now I type the draft myself, and then I spend 20 minutes making the fonts look nice!" In other words, more computing power doesn't necessarily mean more productivity.

In the famous IBM Chief Programmer Team project, one programmer wrote 83,000 lines of code in one year. This project took place in 1968. And the code was written in a batch processing environment. And on punch cards. This person had 8 other people arrayed around him in supporting roles, but that still works out to 9,200 lines of code per staff year for a business systems project. At Construx, we see lots of companies writing similar kinds of software that don't achieve 9,200 lines of code per staff year even 40 years later, even in highly interactive environments, even with radically better tool support, even on computers that are millions of times more powerful. Of course we see other companies writing code much faster, though we haven't yet seen any individual programmer who has written 83,000 lines of code in one year, no matter how the team is configured.

Productivity is only partly a function of how fast you go. Highly productive developers need to be aware of the difference between activity and productivity. The fact that you're busy doesn't mean you're getting work done. 10x developers focus on getting the actual work of the project done. They pay close attention to their experience to discern whether the work they're doing actually means more progress -- or just more motion.

What is a traditional Manager?

posted: 03 Aug 2008

I've been studying a lot about agile management, and even completed the Certified Scrum Master training. But I still ponder how they define a traditional manager. Am I traditional?

On one hand, I started managing software teams over 25 years ago. I graduated from the US Air Force Academy and completed too many Professional Military Education programs. I got my Masters in Management over 20 years ago. I've led big teams and small teams. Sounds like I should fit into the traditional camp.

On the other hand, when agile writers talk about traditional managers, they often equate it to the pointy-haired bosses who dictate every action. They talk about total reliance on command and control authority, process for process sake, and rigid conformance to the plan.  Do you picture Michael from "The Office?"  Is that really traditional? If so, I am definitely not traditional.

Perhaps we're looking at it the wrong way.  It's not a matter of traditional vs. agile management. It's really all about good and bad management.  I have never worked for a bad manager - the one agile calls traditional.  I know they exist, but I've been lucky.

I've had some good role models. They taught me years ago, long before the agile movement began, that good management is all about setting directions, getting the resources, removing barriers, and then getting out of the way. 

This was drilled into me from very early on. I still remember quotes I had to memorize when I was a smack (1st year) at the Air Force Academy in 1976.

"Never tell people how to do things. Tell them what to do and they will surprise you with their ingenuity." General George Patton

"A leader is best when people barely know he exists; when his work is done, his aim fulfilled, they will say: we did it ourselves." Lao Tzu

To me these capture the essence of good management.  But perhaps, I ponder too much. What do you think?

Cone of Uncertainty Controversy

posted: 03 Aug 2008

The May/June 2006 issue of IEEE Software published an interesting article that analyzed the estimation results of an extensive set of projects from Landmark Graphics. The author, Todd Little, analyzed the relationships between estimated outcomes and actual outcomes. Based on his data, he concluded that the 80% confident range of estimates did not reduce as the Cone of Uncertainty implies, but that the estimates continued to vary by about a factor of 3-4 for the remaining work on the project -- regardless of when in the project the estimate was created.

There are some interesting takeaways from the article's data; some of its conclusions are supported by the data, whereas others are not. The basic issue with the article's data is that it represents estimation accuracy as estimation commonly occurs in practice, rather than estimation accuracy when estimation is done well.

Figure 5 in Little's article is particularly interesting:

Figure 5 from "Schedule Estimation and Uncertainty Surrounding the Cone of Uncertainty."

Figure 5 shows a scatter plot of estimates created at different points in a project's duration. The scatter plot forms a near perfect cone--but only the half of the Cone that represents underestimation! There is only a tiny scattering of points that represent overestimation (those below the 1.0 line). As a view of estimation in practice, this is consistent with data my company has seen from many of our clients. It supports the conclusion that the software industry doesn't have a neutral estimation problem; it has an underestimation problem. (This is my conclusion, not the article's.)

The article's conclusions about the Cone of Uncertainty are less well supported. With reference to Figure 5, Little makes the observation that it forms a visual Cone, but only because the graph plots "estimated remaining duration" vs. "current position in the schedule." He points out that, since the duration remaining decreases as the project progresses, smaller estimation errors later in a project are not necessarily better. For the improved estimates to be accurate (i.e., for the Cone to be true), the estimates would need to be more accurate on a percentage-remaining basis, not just have a smaller absolute error. That analysis is all correct as far as I am concerned.
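
A tiny worked example, with invented numbers, of why a smaller absolute error late in a project does not by itself vindicate the Cone:

    # Invented numbers illustrating the percentage-remaining point. Late in
    # a project, the remaining duration is small, so even a sloppy estimate
    # has a small absolute error; the Cone is about *relative* error.
    def relative_error(estimated_remaining, actual_remaining):
        return abs(estimated_remaining - actual_remaining) / actual_remaining

    # Early: 300 days actually remain, we estimate 200 -> 100 days off.
    print(relative_error(200, 300))   # ~0.33 relative error
    # Late: 30 days actually remain, we estimate 20 -> only 10 days off,
    # but the relative error is unchanged, so estimation hasn't improved.
    print(relative_error(20, 30))     # ~0.33 relative error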

The article then goes on to point out that the relative error of the Landmark estimates didn't actually decrease, and concludes

"While the data supports some aspects of the cone of uncertainty, it doesn’t support the most common  conclusion that uncertainty significantly decreases as the project progresses. Instead, I found that relative remaining uncertainty was essentially constant over the project’s life."

There are two reasons that this particular conclusion can't be drawn from Landmark's underlying data.

First, the article misstates the "common conclusion" about the Cone. As I’ve emphasized when I’ve written about it, the Cone represents best-case estimation accuracy; it’s easily possible to do worse—as many organizations have demonstrated for decades. Anyone who's ever worked on a project that got to "3 weeks from completion," and then slipped 6 weeks, and then got to "3 weeks from completion" again, and then slipped another 6 weeks, knows that uncertainty doesn't automatically decrease as a project progresses. The Cone is a hope, but not a promise. Little's data simply says that the estimates in the Landmark data set weren't very accurate. It's interesting to have this data put into the public eye, but it doesn't tell us anything we didn't already know. It tells us that software projects are routinely underestimated by a lot, and that projects aren't necessarily estimated any better at the end than they were at the beginning. That's a useful reminder, as long as we don't stretch the conclusions beyond what the underlying data supports.

The second problem with the conclusion the article draws about the Cone is that it doesn't account for the effect of iterative development. Although it isn't stated in the published article, an earlier draft, circulated on the Internet in mid-2003, emphasized that the projects in the data set were using agile practices, and in particular that they emphasized responding to change over performing to plan. In other words, the projects in this data set experienced significant requirements churn. If the projects averaged 329 days as the article says, and if they followed agile practices as Little described in the 2003 version, there could easily be five to ten iterations within each project. But the Cone applies to single iterations of the requirements-design-build-test process. For an analysis of the Cone of Uncertainty to be meaningful in a highly iterative context, the article would need to account for the effect of iteration by looking at each iteration separately -- that is, by looking at 1-2 month iterations rather than at 329-day-long projects. The 329-day projects are essentially sequences of little projects, so the way the Cone of Uncertainty applies in this case is that there isn't one big 329-day Cone; there are 6-12 1-2 month Cones instead. Unfortunately, the article doesn't present the iteration data; it presents only the rolled-up 329-day data, which is meaningless in terms of drawing any conclusions about how the Cone affects estimation accuracy over the course of a project.

The fact that requirements were treated in a highly iterative way also forces a reexamination of Figure 5. While it makes sense initially to treat Figure 5 as evidence of systemic underestimation, that conclusion can't be drawn either, because the requirements changed significantly over the course of the average 329 day project, and so whatever was delivered at the end of the project was not the same thing that was estimated at the beginning of the project, and that makes the early-project estimates and the late-in-the-project estimates an apples-to-oranges comparison, i.e., not meaningful.

Little makes an interesting comment at the end of the article that I think is a good takeaway overall. He points out that some of the variation in estimation accuracy was due to "a corporate culture using targets as estimates." Figure 5 might not provide a meaningful view of estimation accuracy, but it can certainly be interpreted as an indication that projects tend to set aggressive targets and then repeatedly fail to meet those targets. That's something we already knew, too, but it's good to have a reminder, and it's good to see that reminder supported with some data.

Software Compensation 2007--Is it 1999 All Over Again?

posted: 03 Aug 2008

A comment I'm hearing with increasing frequency is "The job market is getting to be like the dot com era all over again. Developer salaries are increasing, and it's getting harder and harder to attract and retain good developers." Our May ECSE Meeting focused on the topic of "Compensation, Recruiting, and Retention," and so I used that as an opportunity to dig into the question of "Is it really 1999 all over again?"

The first question is, Is developer compensation increasing? I think quite clearly it is. The consensus raise for 2006 was about 3.5%-4.0%. The raises being budgeted for 2007 are more variable -- I've heard a low of 3.0% and a high of 6-7%. (These figures are all North American figures. Figures in India, Russia, and Eastern Europe can be very different.) But these are not the unprecedented raises we saw in 1998-1999; they're more incremental. Note too that a "budgeted raise of 5%" doesn't mean everyone will get 5%. People who are top performers will tend to get more than that. People whose compensation has fallen behind the market will tend to get higher raises too.

What is current developer compensation? Most of my data here is from the Seattle area. In the Seattle area, developer comp typically ranges from about $60K to about $120K, with very few people (less than 5% of the most senior people) making more than $120K. Fresh outs are being hired at $50-$60K in our area. East coast salaries tend to be similar, with higher salaries in more expensive areas (e.g., Manhattan). Salaries in less populated areas tend to be somewhat lower.

Bonuses. Most employers report annual bonuses of 5-15% for purely technical positions, with most companies paying closer to 5% than 15%. For very senior technical people and upper-level managers (i.e., Directors and VPs), bonuses can go higher than 15%, and in a few cases quite a bit higher. One company reported going as high as 50%. Most companies give higher-percentage bonuses to more senior people, although some don't differentiate on the basis of seniority.

Standard Benefits. We see a lot of commonality in benefits at this time. Fully-paid health coverage for employees seems to be standard among software employers. Partial coverage of dependent medical premiums seems to be common, with a few companies paying 100%. Starting vacation of 3 weeks is typical, with some companies offering only 2 weeks. Vacation increasing by an additional week after 5 years also seems to be typical. Vacation policies are almost always based on longevity with the company, and most managers have little flexibility in varying vacation policy.

Other Benefits. We discussed signing bonuses, stock options, stock grants, and other more elaborate perqs. Signing bonuses appear to be rare, still very much the exception rather than the rule. Most employers report that prospective hires are showing little interest in stock options. Apparently the memories of the dot com collapse are still fresh enough that many people would still rather have the bird in the hand of cash now rather than the bird in the bush of equity that might be worth a lot more later. Many companies sponsor occasional low-key "morale events" such as tickets to a baseball game, dinner out, pizza and beer at the office, and that kind of thing. Other more exotic and expensive perqs seem not to be reappearing at this time.

Hiring wars. A few companies reported losing key people, and in a few cases to "crazy offers that it just doesn't make sense to try to match." After quite a bit of discussion on this point at the ECSE meetings, the consensus seemed to be that these extreme compensation packages were more the result of a specific overactive recruiter than a symptom of the job market overall. Several companies in our area (Seattle) have reported losing staff to the most actively hiring companies (especially Google and Yahoo), but even in these cases the salaries offered were something like 20% higher, which doesn't seem to be symptomatic of any overheating in the job market. There have also been a few reported cases of very experienced people getting more than one job offer at a time, but again these seem to be the exceptions.

So, is it 1999 all over again? I think it clearly is not 1999 all over again. What we're seeing is healthy competition for top talent, which is really business as usual -- and business as it should be. We aren't seeing elaborate perqs -- no onsite massages, concierge service, nights out in limousines, and so on. We're not seeing hiring wars for average talent -- remember, in 1999 we had hiring wars even for people whose only skill was writing basic HTML. We're not seeing huge equity grants or promises of ridiculous wealth in short time frames. People seem to have already forgotten how crazy 1998 and 1999 were. One ECSE member commented that people aren't currently "expecting to work for five years and then be able to retire." My recollection is that people at that time expected to work for two years and then retire! The market was unbalanced in favor of employees -- to a degree that was unhealthy, because businesses were constantly confronted with unpredictable escalations in salaries, unexpected losses of key staff, and uncontrollably high turnover. There was so much chaos in the job market that businesses had difficulty finding time to actually focus on their business.

In 2001 through 2002 or 2003 (depending on where in the country you were), we saw a job market that was unbalanced in favor of employers. There were so few open positions available, and the software personnel who had good jobs were so reluctant to change jobs, that even some qualified people had trouble finding work. That wasn't healthy either because it can cause talented, qualified people to leave the field.

Job Market 2007. What we are seeing today is that the best employees can command a premium, but they can't be unreasonable. Average employees can find jobs but probably aren't going to get multiple offers. The worst employees are going to struggle to find jobs at all.

That all sounds to me like a healthy, sustainable equilibrium -- a balance of power between employers and employees. I would be happy to see that balance continue for the foreseeable future.

Estimate THIS

posted: 03 Aug 2008

I used to get a common piece of feedback when I taught Construx's Software Estimation seminar. I would show the bright developers how to estimate their software projects several different ways, and they would respond with the whine, "This is all fine and good, but our management won't let us."

I would question them as to why they thought management, who had hired me to come show them how to improve their ability to estimate, would then turn around and tell them that they can't use these techniques. This did not make sense to me at all. I know management can be a little strange at times, but even they can't be that clueless.

The bright developers would explain to me that if they used the good and wise techniques presented in the seminar, they would come up with estimates that far exceeded the desires of the managers who had asked them for an estimate. In a sense, when presented with an estimate that didn't match the commitments already made further up the chain, management has said, “Estimate THIS!” and held up a single finger, not the index.

In reality, management didn't want an estimate at all. They wanted a plan, or more like a miracle, that would allow them to keep the boastful promises made in the heat of a compensation-impacting meeting. Management's ability to get promoted, secure a bonus, shine in front of their peers, or get a raise depended on the developers' ability to spin gold out of straw. The problem is that the developers don't have a midget with a hard-to-pronounce name running around who can do that. Instead, all they can do is mix the straw with mud and hope to make enough bricks to be useful.

I said I "used to" get this feedback; I don't as much anymore. That is because I realized a few things I needed to do when teaching estimation. The first is to make the bright developers understand that management has nothing to do with the estimate. The purpose of the estimate is to give knowledge to the development team, not to the managers. With this knowledge, the development team can make appropriate decisions, including looking for another job. The second is that it is rational for managers to want miracles. Heck, I would like to win the lottery. My odds are pretty darn low, I know, especially since I don't buy lottery tickets, but I would still like to win. Wanting a nice thing is a reasonable act. However, wanting something -- even in a rational manner -- does not make it so. The third is that when the rational targets of the managers exceed the estimate calculated by the developers, there is going to be pain. Maybe not outright torture, but discomfort to be sure. The only choice we have is when we will experience the pain: early pain from staying with what is possible, or late pain from not delivering on the commitments.

So I present the seminar not so much on how to calculate a date (given a feature set) or a set of features (given a date), but on how to come up with enough facts and data to allow the bright developers to have a professional discussion even after a manager says, "Estimate THIS." I think this is the key challenge for estimators, because our "standard" estimation technique of taking our best guess based on our personal memory of having done something sort of like this in the past has as much validity as an Elvis sighting. If we have facts and data that have at least the appearance of empirical consistency, we will be in a position to respond with something like, "I understand you want that straw spun into gold. I would like that too. I must point out that this is not possible given our current development capabilities. I am happy to walk through the data with you, if you desire. Given this reality, I would like to suggest one of these three brick alternatives we could make within the constraints you have presented." It is probably best to leave out the "Your mama" we want to lead with.
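
As one illustration of what those facts and data might look like, here is a minimal sketch of a range estimate derived from historical throughput. Every number and name is invented; the point is only that the estimate is anchored in recorded history rather than personal memory.

    # Hypothetical sketch: an analytical range estimate from historical data.
    # All figures are invented for illustration.
    historical_throughput = [8.2, 9.1, 7.5, 8.8, 9.4]  # features/month, past releases

    def estimate_duration_months(feature_count):
        # Range estimate from observed best and worst historical throughput.
        best, worst = max(historical_throughput), min(historical_throughput)
        return feature_count / best, feature_count / worst

    low, high = estimate_duration_months(60)
    print(f"60 features: roughly {low:.1f} to {high:.1f} months")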

In a nutshell, we estimators have to take our professionalism up a notch when faced with a rational but unachievable target. We need an analytical estimate based on historical data to back us up. Only then will we be able to choose the early pain and, with practice, allow it to be only a minor irritant.

Book Review: The Myths of Innovation

posted: 03 Aug 2008

I just finished reading Scott Berkun's new book, The Myths of Innovation (O'Reilly, 2007).

While taking on the role of myth-buster, Scott provides insights into how innovations really happen and, more importantly, how they gain adoption. As in his first book, The Art of Project Management (O'Reilly, 2005), Scott's witty style makes the book easy and enjoyable to read. There's much in the book that makes you rethink and question the common views of innovation. While each chapter presents good insights, I especially liked chapter 9, "Problems and Solutions." Scott does a great job pointing out that the real key is correctly defining and framing the problem.

Some relevant quotes:

Problem solving is not nearly as important as problem finding.

Problem finding--problem solving's shy, freckled, but confident cousin--is the craft of defining challenges so they are easier to solve.

Discovering problems actually requires just as much creativity as discovering solutions. There are many ways to look at any problem, and realizing a problem is often the first step toward a creative solution.

I guess this brings us back to the Problem Space vs. Solution Space discussion Earl Beede started on Construx's Requirements Forum.

I highly recommend this book. It's a short, easy-to-read book; one that will keep you entertained during a cross-country flight.

Classic Mistakes Updated

posted: 03 Aug 2008

In Rapid Development I wrote that, "Some ineffective development practices have been chosen so often, by so many people, with such predictable, bad results that they deserve to be called 'Classic Mistakes.'" That was in 1996. At that time I was self-employed and most of my experience had come from working with only a handful of companies. New Classic Mistakes: After founding Construx, a decade of work with hundreds of companies has enabled us to identify several new classic mistakes...(read more)

Speaker at SD Conference

posted: 03 Aug 2008

I will be speaking at the SD Best Practices 2007 conference in Boston in September. My presentation, Applying Lean Principles to Plan-Driven Software Projects, will introduce ways to apply lean thinking when circumstances require using a plan-driven approach.

Estimation of Outsourced Projects

posted: 03 Aug 2008

A question we sometimes hear from our clients is, "My company does outsourced software development for other companies. Is there anything special about estimating in that context?" There actually are some distinctive aspects to estimating in the context of preparing a bid or price quote, and I don't discuss them in my book, Software Estimation.


Estimation in a Time & Materials Context


Creating estimates to support time and materials bids (i.e., charging by the hour) is only barely a special case, because the very structure of T&M implies some variability in the outcome, the same as my recommendations for estimating in-house development work. The only real difference, if you can even call it a difference, is that you have to make doubly sure that you're setting expectations clearly: "This is an estimate. We can't know the outcome with 100% certainty. Actual results will depend on exact details of what you end up requiring and how different issues get prioritized throughout the project," and so on.


Estimation in a Fixed-Price Context


In contrast, estimation in a fixed-price context is very much a special case. If your estimate causes you to bid too high, you won't get the work. If it causes you to bid too low, you will lose money. Both of these are undesirable outcomes! In other circumstances I usually find myself recommending that people back away from really elaborate estimation approaches, because there's so much inherent variability in software projects that the accuracy of your estimates is inherently limited, and you reach the point of diminishing returns on estimation accuracy after you've put in even a little bit of effort. But a fixed-price environment, at proposal time, is one of the few circumstances I've encountered in which an elaborate estimation approach is warranted. And so my first recommendation is: if your business depends on creating fixed-price bids, focus on estimation skills as a core competency and treat estimation work as a business-critical function. That means read Software Estimation, take my company's estimation class, and read other people's estimation books.


My second recommendation is similar to my general recommendation that you separate the "estimate" from the "target." In a fixed price bid context, separate estimation from pricing. The estimate informs the price you'll charge, of course, but there isn't any necessary relationship between the two. You can price a bid at the "unlikely" end of the estimation range if it's really important to you to win the work, and you're willing to lose money on it. Or you can price it way above the estimation range if you think you have an approach that allows you to perform the work at low cost to you and that delivers a higher value to the client.


We've seen lots of companies wrap themselves around the axle when the sales staff insists on lowering the "estimate" to get the work, when really what needs to be lowered is the price. This creates confusion throughout the project. Giving everyone permission to keep estimates and prices separate increases accountability on the sales side (they have to own up to the fact that they're pricing something at the low end of the estimation range and get buy-in to do that), and it improves planning on the dev side -- if there's a big gap between the price and the estimate, the project needs to be treated as a higher-risk project than if there isn't. When estimation and pricing are merged into one concept and called "estimation" (even though it isn't really estimation), the project planners lose the important risk information that arises from the relationship between the price and the estimates.
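
As a hedged sketch of keeping the two records separate (the function, field names, and thresholds here are all invented), the gap between the price and the estimate range can be turned directly into the risk information the planners need:

```python
# Hypothetical sketch: record the estimate range and the price as
# separate facts, and derive project risk from the gap between them.

def classify_bid_risk(p10_hours, p50_hours, p90_hours, priced_hours):
    """Classify risk by where the price falls in the estimate range
    (p10 = optimistic effort, p90 = pessimistic effort)."""
    if priced_hours >= p90_hours:
        return "low risk -- priced above the pessimistic estimate"
    if priced_hours >= p50_hours:
        return "normal risk"
    if priced_hours >= p10_hours:
        return "high risk -- plan and track accordingly"
    return "priced below even the optimistic estimate -- expected loss"

# Sales sets the price; estimation owns the range. Both stay visible.
print(classify_bid_risk(p10_hours=800, p50_hours=1100,
                        p90_hours=1600, priced_hours=950))
```

The design point is simply that sales can still choose an aggressive price; what they can no longer do is silently rewrite the estimate.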


Try not to do the "commitment/pricing" estimate until later in the cone of uncertainty. Of course this is the holy grail, but most companies can't do this with any regularity because they feel that the competitive pressures require them to submit bids in the wider part of the cone.


Bid smaller amounts of work when you can, i.e., be more iterative. One of the great benefits of iterative development is the ability to generate project-level data on early iterations that can be used to estimate later iterations with really good accuracy. The companies we've worked with have settled on three iterations as the number needed to calibrate a project team's productivity. Interestingly enough, it doesn't seem to matter whether the iterations are one week or one month or longer -- it still takes three iterations.
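
A minimal sketch of that calibration (the iteration numbers are invented):

```python
# Hypothetical sketch: calibrate a team's productivity from its first
# three iterations, then project the remainder of the work.

completed_points = [21, 18, 24]          # points finished, iterations 1-3
velocity = sum(completed_points) / len(completed_points)

backlog_points = 240                     # work remaining after iteration 3
iterations_left = backlog_points / velocity

print(f"Calibrated velocity: {velocity:.1f} points/iteration")
print(f"Remaining backlog: ~{iterations_left:.0f} more iterations")
```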


Consider creating two-phase bids when you can. You can call the first phase "preliminary work", "exploratory phase," "proof of concept phase," "design phase", "Phase 1", etc. The purpose of this phase is to attack all the sources of high variability feeding into the estimates and ultimately deliver a bid for the second part of the project after the cone has been narrowed considerably. We've seen many companies use this approach successfully, although I can't think of any companies we've seen that have been able to use it for the majority of the work they bid on. Again, competitive pressures seem to lead to their using this approach only selectively.


Two phase bids can be structured either more "waterfallish" or more "agile". The description above assumes a more linear development approach in which you're trying to get most or all of the requirements defined up front and then bid the whole project. In a more agile approach, you can treat "Phase 1" as an actual design-build-deliver cycle, but structure it into 3 iterations so that you can get good project-level calibration data that you can then use as the basis for bidding the remainder of the project.


Collect historical data on your estimates at proposal time vs. the eventual outcomes so that you can build your own cone of uncertainty. The better records you keep about what materials fed into your estimate, the more meaningful your cone will be. For example, you might have really specific requirements for one bid and pretty vague requirements for another. In one sense, if they're both "proposal time" estimates, you might treat them similarly. But if one was supported by significantly more detail in the requirements, that implies you're at a different location in the cone, and you'd want to account for that.
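
One hedged way to keep that record (the data and field names are made up for illustration) is to log every proposal-time estimate alongside its eventual actual and how detailed the inputs were, then look at the spread of actual-to-estimate ratios within each level of detail:

```python
# Hypothetical sketch: build your own cone of uncertainty from
# proposal-time estimates vs. eventual outcomes.
from statistics import median

history = [
    # (requirements_detail, estimated_hours, actual_hours)
    ("vague",    1000, 1900), ("vague",     800, 1300),
    ("vague",    1200, 1550), ("detailed",  900, 1050),
    ("detailed", 1100, 1000), ("detailed",  700,  840),
]

for level in ("vague", "detailed"):
    ratios = sorted(a / e for d, e, a in history if d == level)
    print(f"{level:8s}: actuals ran {ratios[0]:.2f}x to {ratios[-1]:.2f}x "
          f"of the estimate (median {median(ratios):.2f}x)")
```

With real data, the "vague" bucket should show a visibly wider spread than the "detailed" one -- that spread is your cone.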


Non-Estimation Recommendations


On this particular topic, several of the most powerful recommendations aren't specifically about estimation; they're about project control.


Go highly iterative as early as you can, regardless of whether the bid is structured into one or two phases. Even if you're working to a single-stage bid, there's value in getting project-level calibration data sooner rather than later. If you discover 10% of the way into the project that you've underbid it by a factor of two, you can go back to the customer sooner and reset their expectations, you can give the customer options they still have time to act on, you can implement functionality in strict priority order, you can identify the project as a high-risk project and manage it accordingly, etc. But if you don't have the project-level data that tells you your initial estimates were way off, you'll just run the project as "business as usual," which is really the last thing you want to do.
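
As a crude sketch of that early warning (a naive linear extrapolation, not real earned-value math, and all numbers invented):

```python
# Hypothetical sketch: naive early-warning check that a bid is off.

bid_hours = 2000            # hours assumed in the fixed-price bid
fraction_complete = 0.10    # ~10% of scope delivered so far
hours_spent = 410           # actual effort to date

projected_total = hours_spent / fraction_complete
overrun = projected_total / bid_hours

if overrun > 1.2:
    print(f"Projected {projected_total:.0f}h against a {bid_hours}h bid "
          f"({overrun:.1f}x): reset expectations with the customer now.")
```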


Document assumptions at the contract level, spell them out in as much detail as you can, and then *contain* them. If you build a house, your building contractor might let you specify the kitchen cabinets, but there will be a line item in the contract budget for cabinets. If you end up choosing cabinets that are more expensive, you pay the difference. You typically would have line items for all kinds of things: lighting, landscaping, carpet, flooring, countertops, etc. The areas that are more certain (e.g., roofing, siding, foundation, plumbing) are simply specified. In software projects we also typically have areas that we can specify in detail and other areas that we don't know enough about at contract time to specify in detail. So in software contracts you can include clauses like, "The exact work required in the XYZ module has been budgeted at 40 staff hours. If work on XYZ exceeds its budget, the contract price will be increased correspondingly." I'm not an attorney, so I am not recommending this as specific contract language, but hopefully this gives you a general idea of the kind of clause you would ask your attorney to include in a contract.


Manage your set of projects/bids as an investment portfolio, accepting that some will "win" and some will "lose." From a theoretical point of view, if you're estimating early in the cone there just isn't a good answer to improving the accuracy of your estimates on a project-by-project basis. The fact is, your estimates will be off to varying degrees, and when you happen to get one that's pretty accurate it will be a matter of luck, not skill, because of the inherent limits of the Cone. On the other hand, assuming there isn't any bias in the early-in-the-cone estimates (which can be a huge assumption), you can essentially punt on the question of project-by-project profitability and instead focus on portfolio-level profitability. The problem of estimating an individual project accurately in the wide part of the cone isn't even theoretically solvable. But the problem of estimating a collection of projects in the wide part of the cone IS solvable. The key is rooting out any systemic bias in those estimates so that the error tendency is neutral. Then, with that set of neutral estimates, you simply increase each estimate by the amount you'd like your profit margin to be. If you want it to be 10%, you bid 10% higher than your neutral estimate. This will result in your actual project cost coming in higher than some of your estimates and lower than others, but on balance, assuming no systemic bias, you should make a 10% profit on your portfolio of projects.


Of course this requires that you have several projects in your portfolio, and that there aren't just one or two huge projects whose estimation errors could drown out whatever error was contributed by the smaller projects, and that you can afford to take a loss on some percentage of your projects. And those are big assumptions that might not be true in your specific case.
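
A small simulation makes the portfolio arithmetic concrete. This is a sketch under exactly the assumptions above -- many projects, none huge, unbiased error, tolerance for individual losses -- and all of its parameters are invented:

```python
# Hypothetical sketch: with neutral (unbiased) estimation error,
# pricing every bid at estimate * 1.10 yields ~10% portfolio margin.
import random

random.seed(1)
margin = 0.10

total_revenue = total_cost = 0.0
for _ in range(200):                              # many projects, none huge
    estimate = random.uniform(500, 5000)          # neutral estimate, hours
    actual = estimate * random.uniform(0.6, 1.4)  # unbiased, wide error
    total_revenue += estimate * (1 + margin)      # the price we bid
    total_cost += actual

print(f"Portfolio margin: {(total_revenue - total_cost) / total_cost:.1%}")
# Individual projects win and lose; the portfolio lands near +10%.
```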


Bottom Line on Estimating in a Fixed-Price Bidding Context


The bottom line is that it isn't possible to solve this particular problem purely using estimation practices. You have to change when you're estimating (later in the Cone), or what you're estimating (e.g., portfolios vs. individual projects), or how many times you estimate (e.g., two-phase bids). And project-control responses (as opposed to estimation responses) and even contract-level responses will probably turn out to be at least as useful as estimation responses.


Software Engineering Ignorance

posted: 03 Aug 2008

The February 2007 issue of IEEE Computer contained a column titled "Software Development: What Is the Problem?" (pp. 112, 110-111). The column author asserts, "Writing and maintaining software are not engineering activities. So it's not clear why we call software development software engineering." The author then brushes aside any further discussion of software development as engineering and proceeds to base an extended argument on the premise that software development is...(read more)

Context Matters

posted: 03 Aug 2008

So, I was driving along, making a right turn into a driveway like I have done a thousand times before. I did what one always does when making a right turn: I checked carefully for pedestrians and watched the driveway to make sure nobody was coming down it. I then signaled my intentions and proceeded. Unfortunately, this right turn was occurring in England. Did you know that right turns in England cross oncoming traffic before entering a driveway? Well, I and another car added more evidence to the theory that two pieces of matter cannot occupy the same space at the same point in time. In making that right turn, context mattered.

Context matters with requirements and design work. The context actually determines whether something is part of the problem space or the solution space. The exact same statement (e.g., "the menu must drop down") is a solution requirement if the context is that I am to build the user interface, but a problem requirement if the context is that I am to build only the database. In fact, I am starting to believe that any kind of specification without a clear understanding of its context is basically worthless.

Context matters when picking lifecycles. I recently was teaching our Agile seminar and describing the basic Agile approaches. "But," said the students, "we have twenty interdependent technology streams with 300 developers building a core proprietary OS on three continents with at least five 'equally treated' stakeholders who need direct access to the developers to work out interface details to the UI and other hardware components while maintaining backwards binary compatibility." Well, out of the box XP isn't going to cut it here.

Context matters as we determine solutions. In fact, one of the great failings of software development is that we often base our designs more on the function to be performed than on the characteristics (qualities/non-functionals/attributes -- select the name you like) the design must support. A solution that would be functionally fine in a context where maintainability didn't matter may be worthless in a context that has long-term support concerns. It is my observation that this kind of context is rarely described in a project.

Speaking of the project, context matters there, too. Which side of the "iron triangle" -- schedule, resources, or functionality -- is the side that is fixed and which side will adjust? (Clue here for managers: you can only fix one and your project will have a higher chance of success if you don't change your mind five or six times during the project.) I will select all manner of practices for one context and different practices in another.

In fact, context matters with any "best practice". The word "best" doesn't make it immune to the reality of the context. While it may be best somewhere, it could be a "worst practice" for your project. How do you know the difference? You can't until you fully comprehend the context, and that usually means trying the practice out. (A *** Vitale moment: "Plan-Do-Check-Act time, baby")

I hope you take all these ideas in the right context.

Rumors of Software Engineering's Death are Greatly Exaggerated

posted: 03 Aug 2008

A reader of my previous blog post on Software Engineering Ignorance pointed me to Eric Wise's blog post Rejecting Software Engineering . Eric seems like a bright guy, and he's a persuasive writer, but his post is another example of what I was referring to in my earlier post -- that is, people who are uninformed about software engineering spreading misinformation about it. One of Eric's arguments that is representative of other published arguments is that "software isn't like...(read more)

Outsourcing your own job?

posted: 03 Aug 2008

Did you hear the one about the programmer who outsourced his own job? http://www.wired.com/wired/archive/12.07/view.html?pg=2

Hmm -- perhaps I shouldn't ponder this too much on a blog that my boss reads ;-)

Never underestimate the value of beer

posted: 03 Aug 2008

During my seminars I often cite a light-hearted principle of never underestimating the value of beer. Of course, some of the attendees think drinking beer while writing code would be a wonderful practice. But the real principle is to get to know people as people.

It is somewhat amusing when you consider it. The stereotypical software person is the introverted geek. But software development is driven by people and so it can be thought of as a social event. Once you get to know the players as real people, you’ll be surprised how much you can accomplish.

So take the time to go out with your coworkers, your peers, and even that old grinch. You don’t like beer – that’s sad, but ok. Just go for lunch, have a latte or cup of tea, whatever. It is amazing what happens when you share food together. The inevitable conflicts may not disappear, but they will get resolved much quicker.

Perhaps, I ponder, that’s why breaking bread together is part of almost every major religion. But then again, perhaps I ponder too much.

Best Companies to Work For, Part 1

posted: 03 Aug 2008

[Warning, bragging ahead] At the end of June I was very pleased to learn that Construx Software (my company) had been recognized as the Best Small Company to Work For in Washington state. Washington CEO magazine published a list of the 100 Best Companies to work for. Construx topped the "Small Companies" category. With a total score of 148.87 (a total of the employee survey scores and judges' scores), Construx easily topped the winner in the "large company" category, which...(read more)

Incremative

posted: 03 Aug 2008

In our 10x and Agile seminars, I talk about the role and purpose of incremental and iterative (incremative) development practices. On the surface, incremative development is kind of wasteful. I mean, it is like asking me to drive to the grocery store and having me stop on each block, call home, and ask my wife, "I am one block closer to the grocery store. Do you still want me to get the milk?" By the time I get there, buy the milk, and call 10 more times on the way home ("Do you still want me to come home? What do you mean you are having doubts???"), the milk is spoiled and the tea is cold.

There is waste even if my wife were to say to me, "I think I need something at the grocery store, but I will not know until I see it, and I am not sure what store I want to go to. Get in the car and start driving." There are wear, tear, and overhead costs in starting and stopping the car. We will take more time to buy the thing-she-may-end-up-buying-but-won't-know-it-until-we-get-there than if she knew what she wanted in the first place. Heck, I may even drive the wrong way and have to retrace my path, since I need to guess a bit to choose an initial direction. Why even start a trip like this? It doesn't make sense.

I only have that "waste", however, when I truly have a deterministic outcome. If I can know the final destination, the straight line always will be faster. It is when uncertainty creeps in that I need to be incremative. With uncertainty comes the need to gather information to lower the uncertainty. The "waste" of incremative development is a known cost to buy information and avoid the potentially much larger unknown cost of guessing wrong.

Being incremative is all about lowering uncertainty by getting and acting on feedback. The incremental part of being incremative determines how often I get feedback. Since I have high uncertainty about what the other bozos on the road are going to do, my increments are often very small as I constantly scan the traffic. I don't let my eyes leave the view of the road for long (unless, of course, I need to send a text message on my mobile). The iterative part of being incremative determines what parts I do over and can change based on the feedback. As I spot a bozo in a red truck coming into my lane, I change my acceleration, my vehicle's direction, and the amplitude of my horn. Based on what the traffic does around me and my desired route, I will change the vehicle's parameters many times.

But here is the punch line. Both certainty and uncertainty can live side by side on the same trip. I can be certain of my desired destination (I want to go to the store) but have uncertainty about the traffic, road conditions, etc. on the way there. Saying my trip is completely deterministic because I know I want to buy milk is naive. Saying it is completely uncertain because of the traffic is simplistic. It is both and I need to approach it that way.

So the trick in incremative development is to find the right balance, the point where I buy just enough information to lower the risk to an acceptable level. Too much, and I have unnecessary waste; too little, and I am likely to have large undefined costs.

Like when you brought home the non-fat when your spouse really wanted the whole. It's back to the store again! (Note: bad iteration!)

Worst Companies to Work For, Part All

posted: 03 Aug 2008

Steve McConnell (my boss) is bragging about his company since it got voted the best small company to work for in Washington State. He is so proud that he needs to do the bragging in three parts!

I have to admit, it is a pretty nice place to work. Did he mention the free beer? Anyplace that has free beer is a great place to work by definition. Not to mention that I am writing this in my private office while wearing shorts and listening to the blues. Unless, of course, I get too distracted by trying to decide how to spend my many weeks of paid time off. (I am thinking August is no longer good for me.)

Now that I have you all jealous, of course Construx is a great place to work. There are only sixteen of us and we are all contributing professionals. I think you can do things as a small, flexible company that you just can't do other places. What may be more interesting are the common things that make a company a worst company to work for. Not the weird things done by a psychologically disturbed pointy haired boss but the irritating things that happen day in and day out that can make a place a living hell.

Here is my short list:

  • Bodily noises from the people who work around you that are better left to the pages of Mad magazine
  • Food or drink at a work station that has been there since the disco age
  • Print jobs that a) use the last of the paper or b) jam the machine and were launched by a person who just went on a three week vacation
  • Anybody who works around you whose teenage children have more ethical lapses than a presidential administration
  • Any client or manager who begins a request with the phrase, "It is just a ..."
  • The rattle made by the air vent in the ceiling in which you have already stuffed 37 post-it notes
  • Application dialogs that tell you that the system has crashed/needs to be restarted and then asks if it is "OK"
  • Meetings scheduled for the end of the day because that is "the only time everyone is available" -- because we want to go home!

Anything else?

Doing Justice to V&V

posted: 03 Aug 2008

One of my secret passions is to kill the man (or woman) who started to use the terms verification and validation in the software world. I know you are hiding out there and when I find you, I will do justice.

I mean, first of all there is this horrible trick of using two words that sound soooo close in English. We don't use many of those 'V' words in this language on a day-to-day basis, so just starting out with a 'V' pretty much means we ignore the rest of the letters. I think I only know about five 'V' words off the top of my head: Valium, (beach) Volleyball, Vacant (my head), and those two nasty beasties listed above. And they all mean the same thing: "time for beer."

And then there are the software related definitions of those two words. Verification: did I build the thing right; Validation: did I build the right thing. Or is that the other way around? I can never tell. I always need to look it up since it is just switching a word here and there. Not that starting the word with a 'V' helps me out much (see previous paragraph). It is like asking if I drank the beer and was it the right beer. Well, by definition, if I drank the beer it was the right beer!

And in the shorter phrase "V&V", which one comes first? Is there a rule on that? Is that rule as critical as the one about not wearing socks with sandals?

Wouldn't it have been better if we had called them Requirements Confirmation and Design Confirmation? First of all, we would have a nice alignment between the activities that typically produce the needed inputs for the V&V and the V&V itself. Second, we would have words that are completely different and therefore much harder to confuse. The drawback with this solution, of course, is that nobody really gets the difference between requirements and design. (Somebody just said the "what vs. how" thing to me again today and I almost did justice on them!)

There has to be a better way! I will find you V&V instigator! I will vilify you! You will wish you were virtual! I will vanquish the pain you have caused virtuous software developers. Very truly I tell you, vultures will think it vain to feed on the bits left. Victory is ours.

V on!

How to Self-Study for a Computer Programming Job

posted: 03 Aug 2008

Readers will sometimes ask me, "I don't have a college degree in computer science. How can I study for a computer programming job?" Both my company in general and I personally have put a lot of work into answering that particular question over the past 10 years. The specific answer is based on a few questions that each individual must first answer for himself or herself: 1. Do you want to go back to school, or do you want to self study? 2. Are you more interested in doing software development...(read more)

Best Companies to Work For, Part 2

posted: 03 Aug 2008

Construx Employee Perspective As I mentioned in an earlier post , at the end of June I was very pleased to learn that Construx Software (my company) had been recognized as the Best Small Company to Work For in Washington state. Getting the outside validation was gratifying, but what does the inside view look like? What do Construx's employees think makes Construx a good company to work for? We held an all company lunch discussion in July to talk about that question, and here's what people...(read more)