We'll see | Matt Zimmerman

a potpourri of mirth and madness

Posts Tagged ‘Retrospectives’

Lucid ruminations

A few months ago, I wrote about changes in our development process for Ubuntu 10.04 LTS in order to meet our goals for this long-term release. So, how has it turned out?

Well, the development teams are still very busy preparing for the upcoming release, so there hasn’t been too much time for retrospection yet. Here are some of my initial thoughts, though.

  • Merge from Debian testing – Martin Pitt has started a discussion on ubuntu-devel about how this went. For my part, I found that Lucid included fewer surprises than Karmic.
  • Add fewer features – This is difficult to evaluate objectively, but my gut feeling is that we kept this largely under control. As usual, a few surprise desktop features were implemented that not everyone is happy about, myself included.
  • Avoid major infrastructure changes – I think we did reasonably well here, though Plymouth is a notable exception. It resulted (unsurprisingly) in some nasty bugs which we’ve had to spend time dealing with.
  • Extend beta testing – This will be difficult to assess, though if 10.04 beta was at least as good as 9.10 or 9.04 beta, then it will have arguably been a success.
  • Freeze with Debian – Although early indications were good, this didn’t work out so well, as Debian’s freeze was delayed.
  • Visualize progress – The feature status page provided a lot of visual progress information, and the system behind it allowed us to keep track of work slippage throughout the cycle, both of which seemed like a firm step in the right direction. I’m looking forward to hearing from development teams how this information helped them (or not).

A more complete set of retrospectives on Lucid should give us some good ideas for how to improve further in Maverick and beyond.

Update: Fixed broken link.


Written by Matt Zimmerman

April 20, 2010 at 09:23

Amplify Your Effectiveness (AYE) Conference: Day 3 (morning)

Today is my last day at the conference, and I’ll be leaving before the welcome dinner in order to catch my flight to Dallas. It has been a pleasure meeting everyone, and an invaluable opportunity to get perspective on my work and other parts of my life.

Morning session: Retrospectives (Esther Derby)

The full title of this session was Looking Back, Moving Forward: Continuous Improvement With Effective Retrospectives. I find retrospectives to be a useful tool for process improvement, and am interested in gaining more experience with them and learning new techniques.

To start, the group (about 12 people) conducted a project together. We were provided with a specification for a 3-dimensional model of an urban development project, and some supplies with which to build it. Over the course of 30 minutes, we organized ourselves, performed the task, and successfully delivered the result to a satisfied customer.

Overall, the project went very well, so I was looking forward to the retrospective. In a retrospective, I often find myself focusing on the problems, so this would be good practice in identifying and repeating successes.

To start, we “set the stage” for the retrospective by reviewing the agenda and “working agreements” (pre-established goals and guidelines for behavior). I always try to do this in retrospectives, but the idea of agreeing the guidelines in advance was new to me. This would both help to build support in the team, and save time in the meeting.

Then, we conducted a quick poll of the group to assess the outcome of the project, using a weather metaphor: was it “partly sunny”, “cloudy”, “rain”, etc. I liked this metaphor because I found it more neutral than a rating: weather is something which just happens and is observable, while numbers, for example, seem to carry more judgement.

Next, we broke into smaller groups to collect data (our observations) on the project we had just completed. I liked the idea of doing this in smaller groups, as it promoted more points of view, and also produced some redundant data (which was useful in the analysis to follow). We wrote these observations on sticky notes.

To analyze the data, everyone came back together and plotted it on a “FRIM” chart (frequency vs. impact), where one axis indicated frequency (how often did it occur during the project?) and the other impact (how much impact did it have, and was it positive or negative?). We looked at this data from near and far, looking for patterns and considering individual notes.

The upper left quadrant (low frequency and positive impact) was dominated by things which happened near the beginning of the project: establishing roles, establishing the workspace, etc. The lower left quadrant (low frequency and negative impact) showed mostly items to do with ambiguity: lack of information, confusion and so on. The upper right quadrant (high frequency and positive impact) included ongoing teamwork and the steady progress which helped coalesce and motivate the team. There was nothing in the lower right quadrant (high frequency and negative impact), which I took as a sign of a healthy project. The fact that some of the observations were redundant helped make the patterns clearer, by showing what had been noticed more.
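The quadrant sorting described above is mechanical enough to sketch in code. This is only an illustration of the FRIM idea, not anything used in the session; the observation data and the low/high frequency encoding are my own assumptions.

```python
# Hypothetical sketch: sorting retrospective observations into the four
# FRIM (frequency vs. impact) quadrants. Observations here are made up
# for illustration; impact is a signed number (positive = helpful).

def frim_quadrant(frequency, impact):
    """Classify one observation by frequency ('low'/'high') and signed impact."""
    horizontal = "right" if frequency == "high" else "left"
    vertical = "upper" if impact > 0 else "lower"
    return f"{vertical} {horizontal}"

observations = [
    ("established roles early", "low", +2),
    ("lack of information", "low", -1),
    ("ongoing teamwork", "high", +3),
]

quadrants = {}
for note, freq, impact in observations:
    quadrants.setdefault(frim_quadrant(freq, impact), []).append(note)

print(quadrants)
```

An empty “lower right” bucket after a run like this would correspond to the healthy-project signal mentioned above: nothing that is both frequent and harmful.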

With these observations in mind, we broke into smaller groups again, this time with the objective of generating ideas for improvement, or for repeating our successes. I thought that splitting up the group would bring more ideas out, but in fact we came up with many of the same suggestions. Still, this helped to reinforce the suggestions because they were independently proposed by different groups.

The team was reformed, and suggestions were posted on a flipchart page with a table for evaluation. The team rated the suggestions according to the amount of effort required, the anticipated impact, and their individual willingness to work on it. Finally, based on this data, Esther asked for a volunteer to implement the most promising suggestion, choosing just one to keep the amount of change manageable. She asked for a second volunteer to back up and support the first. I liked the idea of rating the suggestions together, and also having someone in a supporting role for the followup action.
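The rating-and-selection step can also be sketched as a tiny calculation. The suggestions, their scores, and the weighting formula below are all illustrative assumptions of mine; the session did this on a flipchart, not with a formula.

```python
# Hypothetical sketch of rating suggestions on effort, impact, and
# willingness, then adopting only the single most promising one so the
# amount of change stays manageable. All names and scores are invented.

def score(effort, impact, willingness):
    # One plausible weighting: favor impact and willingness, penalize effort.
    return impact + willingness - effort

suggestions = {
    "timebox up-front planning": (1, 3, 3),   # (effort, impact, willingness)
    "rotate the facilitator":    (2, 2, 2),
    "write specs earlier":       (3, 3, 1),
}

best = max(suggestions, key=lambda name: score(*suggestions[name]))
print(best)
```

Picking a single winner (plus a backup volunteer to support whoever takes it on) mirrors the “choose just one change” discipline described above.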

We then polled the team again, reusing the weather metaphor, to predict the “weather” for the next project, based on what we learned. Most of the group was optimistic, though a couple of people predicted storms. When asked about their concerns, one of them pointed out that our decision to focus on improving one aspect of our process could cause us to waste too much time up front working on that. To mitigate this, we resolved to limit the amount of time we would spend on it, and he seemed satisfied. The other doomsayer was unrepentant, and said he suspected we would fall prey to the “second system” effect. I got the impression that he wanted to be proven right, and wasn’t interested in avoiding such a problem.

We concluded by reviewing the overall structure of the retrospective, and how the stages fit together: setting the stage, gathering data, analyzing it, selecting actions and wrapping up. The wrap-up included a retrospective on the retrospective itself, to promote improvement of that process. I was a bit concerned we would end up in infinite regress, but Esther stopped there, and didn’t do a retrospective of the retrospective of the retrospective. I don’t normally even do one meta-retrospective, but am considering trying that out now.

Written by Matt Zimmerman

November 12, 2009 at 01:01