Thursday, December 23, 2010

No room for zealots in Agile project management

I've been brought in to rescue a number of projects in my career as a project manager. If there is one pattern I've noticed among them, it's that the former PM in charge was essentially an Agile zealot. This is someone who has usually done more reading than managing. They fail because they try to apply exactly what they read in the books or blogs without really understanding the underlying principles and goals of what they are trying to accomplish.

The very nature of Agility is to be able to adapt to the circumstances of the project and environment. One form of the zealotry comes when a PM, fresh from reading a pile of blogs or from a conference, decides to come into a new project and implement a whole bunch of nifty practices that they learned. They won't attempt to explain why they are doing what they are doing, and in reality they may not truly know, outside of an intuitive feeling that it would be a good thing to do. Ultimately the practices may not fit in the project or environment that they are working in, but they will continue to stick to their guns, saying things like "this is a best practice that has worked elsewhere". That might very well be true, but are the circumstances the same?

The biggest offense I've seen is the implementation of the self-directed team, or rather the lack of implementation. The PM takes the principle of the self-directed team and treats it as a self-implementing mechanism: they think that by leaving the team to their own devices, the team will figure out the best process for themselves. The problem is that the team will spend a lot of time spinning their wheels, detracting from their true mission of developing valuable software.

The approach I have found to be far more successful is to get the project team up on their feet with a set of agreed-upon principles and known best practices to get started, and then, once they're up and running, let them shift direction as they see fit. I've often described this as "priming the pump", but my colleague Tiffany Lentz uses the term "putting on the training wheels", which I think is a more appropriate analogy. The retrospective is the mechanism that allows the team to make their adjustments, and it should not be discounted. Most people are better at critiquing than creating anyway.

Tuesday, January 20, 2009

Backend stories make for unexciting showcases

Working software as the primary measure of progress is one of the most important aspects of an Agile development project. We conduct showcases at every iteration close meeting to demonstrate that what we have said we have built is actually "done and done". But when working on a project where a lot of the user stories don't have a user interface, a showcase just doesn't seem to pack the punch you had hoped for.

I've done several projects now where this was often the case. The team worked hard and built some really valuable and solid features, but when it came to actually demonstrating them for business people, we were often faced with the prospect of showing a command window where we would execute the program and watch the log messages fly by. The business people would politely smile, and the dev team couldn't help but chuckle.

One way to "spice things up" (if you can call it that) is to create visuals that help illustrate what is going on behind the scenes. Let's say that the backend process consists of ten steps, and by the end of the iteration the team has completed three of them. What I've done on some of my projects is create an overall illustration of the intended backend process and then highlight the steps that were completed during the iteration. At each iteration close you can start by showing what was completed by the end of the previous iteration, followed by what was completed in this one. For the people who don't fully understand the technical aspects, this at least gives them a feeling for the overall progress.
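To make the idea concrete, here's a minimal sketch of that kind of progress illustration rendered as plain text. The step names, the pipeline, and the counts are hypothetical examples, not from any real project:

```python
# Hypothetical backend pipeline steps, purely for illustration.
STEPS = [
    "Receive feed file",
    "Validate records",
    "Transform to canonical format",
    "Load staging tables",
    "Reconcile totals",
]

def render_progress(steps, completed_through, just_finished):
    """Print each step, marking what was done before this iteration,
    what was completed during it, and what remains pending."""
    lines = []
    for i, name in enumerate(steps, start=1):
        if i <= completed_through:
            marker = "[done earlier]"
        elif i <= just_finished:
            marker = "[done this iteration]"
        else:
            marker = "[pending]"
        lines.append(f"{i}. {name:<30} {marker}")
    return "\n".join(lines)

# Two steps were finished in prior iterations; one more this iteration.
print(render_progress(STEPS, completed_through=2, just_finished=3))
```

Even a text rendering like this, shown iteration over iteration, gives non-technical stakeholders the same sense of accumulating progress that a screen demo would.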

This is not to say that you should bag the showcase altogether. A lot of people will still like to see the proof that the feature is working per the story they signed off on, and may be very happy to walk through the log messages, and any resulting text files or database updates. If the end result is viewable on a screen, showing that screen before and after can be useful as well.

I don't doubt that this post brings to mind the difficulty a lot of people face in creating stories for backend systems. That's a fair point, and I'll have to blog about it in a future post.

Thursday, October 16, 2008

Thoughts on adding "stabilization" to the end of your project plan

It's a pretty common practice to add time to the end of your project plan in between the end of the last development iteration and production deployment. There are a number of activities that may have to happen prior to going to production, including:
- Final QA signoff of cards from the final dev iteration
- Any remaining defect fixes as a result of that testing
- Final user acceptance testing
- Preparation of the system for production

This period is often generically referred to as "stabilization". The issue I have is that this term implies that the system is not going to be stable at that point in the project, and this may even become a self-fulfilling prophecy.

The ideal in an agile project is that there are no defects at all at the end of each iteration. While this is not an impossible goal, in reality teams will usually strive to ensure there are no defects in the "critical", "high" or "medium" categories, and more than likely a few "low" defects will escape the iteration. They will catch up on these in subsequent iterations, but following that logic, a few will still remain at the end of the last iteration.

So instead of referring to that final stage as "stabilization", I would probably prefer something like "wrapup" or maybe even "polishing".

Wednesday, October 8, 2008

Agile is not a magic "change" vacuum

One statement about Agile that seems to haunt a number of projects is "Change is expected and welcome" during the project. This is true, but with conditions, and if expectations are not managed properly things can get out of control.

I've seen more than a few cases where customers think this means they can keep changing their mind with no impact to the project. When we explain that it is fine to change your mind, or ask for more changes, but something else has to go, they act as if we just performed some kind of bait-and-switch. The "problem" is especially prevalent because with short iterations we give the customer plenty of opportunities to review the finished features and make comments.

The essence of welcoming change has to do with making adjustments before the story card has been finished. If you are in an iteration and the developers are working on a story card, and in the process of discussing the requirements with the product owner (or bringing up an issue they have discovered) it is determined that things need to change, that is generally going to be OK as long as the spirit of the story does not change dramatically and the estimate for the change is still in line with the original estimate. We want the finished product to be what the product owner really wants and needs.

But in terms of managing change once story cards are completed, it is important to explain the dynamics to your product owner/customer up front. The team will establish a velocity throughout the iterations, and for the purposes of planning, that velocity will guide what goes into each iteration. If you want to put something else into an iteration, then something else has to come out. So if you are in the iteration showcase and the stakeholders make some comments about changing or adding something, you need to decide, first, whether it is worth it and, second, when it should be scheduled and what else is going to move as a result.
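As a rough illustration of that planning arithmetic, here's a minimal sketch of checking a new request against an iteration's velocity budget. The story names, point values, and velocity figure are all made-up examples, not from the post:

```python
# Hypothetical velocity and iteration plan, purely for illustration.
VELOCITY = 20  # story points the team typically completes per iteration

iteration = {"Customer search": 8, "Export to CSV": 5, "Audit logging": 7}

def add_story(iteration, new_story, points, velocity=VELOCITY):
    """Add a story to the iteration plan and return how many points'
    worth of other work must be deferred to stay within velocity."""
    planned = sum(iteration.values()) + points
    iteration[new_story] = points
    return max(planned - velocity, 0)

# Adding a 6-point change to an already-full iteration means
# 6 points of existing work have to move to a later iteration.
deferred = add_story(iteration, "Revised report layout", 6)
print(f"Points to defer to a later iteration: {deferred}")
```

The point is simply that the velocity budget is zero-sum: the question is never whether the change is allowed, but what trades places with it.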

Thursday, September 25, 2008

Retrospectives are great, but...

...they're not worth much if no action is taken afterwards. I haven't met many people who disagree that retrospectives provide a lot of value to the team. But without follow-up that value is often not realized, and if identified issues persist they can actually have a negative impact on the team.

In general, items that require follow-up need to be captured in a manner that will allow them to be followed up on. One method is to turn these items into new stories. I'm not always keen on this approach, since the items don't usually follow the spirit of the user story, which is to provide direct business value. Making the items technical stories often makes sense, as they can be factored into the iterations but don't meet the definition of providing business value.

A lot of the items that get brought up during retrospectives aren't related directly to development, which makes it hard to figure out where to put them. The issues list is a likely candidate. Another approach I've taken on some projects is to create a wall (like a story wall) which shows the retrospective action items and where they are in their lifecycle. Seeing a number of untouched items on the wall is a good indicator that the value is not being realized.

Another technique is to start future retrospectives with a recap of issues that came up in previous retrospectives, and where those issues stand. If the team sees that items are getting addressed, it could encourage them to participate in the retrospectives even more.

Lastly, I just wanted to comment that the best situation is when you can get the team to decide on a resolution and take action right then at the retrospective. That isn't always possible (or advisable in some cases), but it is a great example of realizing the value that retrospectives can bring.

Wednesday, September 17, 2008

Just in Time Tasking vs Upfront Tasking vs No Tasking

This topic seems to come up all the time and the answer seems to change based on your situation. For the uninitiated let me give you some background on what I'm talking about.

If you are following the rules of good user story writing, then your stories should be business focused and represent what needs to be delivered. What they don't cover is "how" the story is going to get implemented. To cover this we usually create task cards which describe the technical tasks required to complete the development of the story. Some examples might be "Create a new database table to store the customer info", "Modify the x screen to include the new fields", etc.

There are essentially three main variations to tasking:
- Just in Time Tasking - when a story is ready to be picked up and worked on, the dev pair that picks it up sits down with the business analyst to review it, and then they brainstorm the list of technical tasks that will be required to finish it. Usually they will write each task down on index cards, but I have also seen cases where they are listed in the online tool which the team uses to manage the stories and iterations.
- Upfront Tasking - at some point in the very beginning of the iteration the entire team gets together and essentially does what the individual pair did above. This often takes place at the end of the planning meeting, and the team goes through all of the story cards for the iteration.
- No Tasking - the developers don't write down tasks at all. They probably will do some upfront thinking on the design, but in some cases they dive right in.

Let me start by saying that I'm no fan of No Tasking. This may work in situations where the story is very simple and the duration is very short, but in most real life situations this is asking for trouble.

The main benefits of the tasking session are doing the upfront design and having some means of documenting it so others can understand the thought process behind it. We encourage frequent pair switching on my teams, and the cards, while not useful by themselves for understanding the design, serve as a good outline for explaining what's going on to a new person. The task cards can also be used by the dev pair and others to judge how complete the story is: as tasks are completed, the task cards are marked completed.

Upfront tasking is often favorable because it enables the entire team to participate and become knowledgeable in the high-level design. If there are going to be issues, the team will know at that point, and there should be enough time in the iteration to either resolve those issues or, if they are insurmountable, swap in other stories instead. The downside in most cases is that it can be a very time-consuming exercise. If your team is large there will probably be a lot of stories to discuss, and some developers may tune out after sitting in the meeting for a long time, thus negating the participation and knowledge-sharing aspect. There's also the possibility that the developers won't fully remember the details when it comes time to work on the story, even if they have the tasks written out.

Just in Time tasking is often a response to the issues mentioned above. After sitting through a couple of very long up front tasking sessions, the developers rebel and demand to switch to Just in Time. The developers are happy that they don't have to sit in a long meeting, and the managers are happy because the work gets started. The downside is of course the opposite of the upsides of upfront - only the pair that picks up the card is involved in the design decisions, when new members join the pair they have to be brought up to speed, etc.

I've tried a couple of variations to find a happy medium to all this. On my last project we did the upfront tasking, but timeboxed it at an hour and a half. Any stories that didn't get addressed in that session reverted to Just in Time tasking. We prioritized the stories based on a feel for how complex they were going to be, so that we usually addressed the most complex stories during this session.

Another variation was used when I had a team of about 20 developers. Because of the team size there were a lot of stories each iteration. Upfront tasking with the entire team was a painful affair, but issues arose as a result of Just in Time. So the solution we settled on involved the team breaking into smaller subteams right after the planning meeting. These subteams took a subset of the cards and did the tasking for those. The team was pretty good at breaking themselves up into these subteams, often switching people around mid-stream if other people were needed.

So, the answer to the question is that it depends on the situation. If you keep the issues above in mind, and are willing to try a variety of approaches, you should arrive at a method that works best for your project situation.

Thursday, September 11, 2008

Resistance is futile

Well, I finally went and created a blog. Never really had much time to spend blogging (and still really don't), but I've given the same advice over and over so many times that this seems like the best way to get it down once and share it with everybody. So now that it is here I hope to be able to capture my insights and experiences with Agile methods on my projects.