Posts Tagged ‘Leadership’
Back to the future
In my professional role as Ubuntu CTO, I take on a number of different perspectives, which sometimes compete for my attention, including:
- Inward – supporting the people in my department, alignment with other departments in Canonical and reporting upward
- Outward – connecting with customers, partners and the free software community, including Debian
- Forward – considering the future of the Ubuntu platform and products, based on the needs of their users, our customers and business stakeholders within Canonical
- Outside-in – taking off my Canonical hat and putting on an Ubuntu hat, and looking at what we’re doing from an outside perspective
My recent work, as Canonical has gone through a period of organizational growth and change, has prioritized the inward perspective. I took on a six-month project which was inwardly focused, temporarily handing off many of my day-to-day responsibilities (well done, Robbie!). I’ve grappled with an assortment of growing pains as many new people joined Canonical over the past year.
With that work behind me, it’s time to rebalance myself and focus more outside of Canonical again. It’s good to be back!
In my outward facing capacity, I’ll shortly be attending Web 2.0 Summit in San Francisco. I attend several free software conferences each year, but this is a different crowd. I hope to renew some old ties, form some new ones, and generally derive inspiration from the people and organizations represented there. Being in the San Francisco Bay area will also give me an opportunity to meet with some of Canonical’s partners there, as well as friends and acquaintances from the free software community. With my head down, working hard to make things happen, it’s easy to lose perspective on how that work fits into the outside world. Spending more time with people outside of Canonical and Ubuntu is an important way of balancing that effect.
Looking forward, I’ll be thinking about the longer term direction for the Ubuntu platform. The platform is the layer of Ubuntu which makes everything else possible: it’s how we weave together products like Desktop Edition and Server Edition, and it’s what developers target when they write applications. Behind the user interfaces and applications, there is a rich platform of tools and services which link it all together. It’s in this aspect of Ubuntu that I’ll be investing my time in research, experimentation and imagination. This includes considering how we package and distribute software, how we adapt to technological shifts, and highlighting opportunities to cooperate with other open source projects.
My primary outside-in role is as chair of the Ubuntu Technical Board. In this capacity, I’m accountable to the Ubuntu project, the interests of its members, and the people who use the software we provide. Originally, the TB was closely involved with a range of front-line technical decisions in Ubuntu, but today, there are strong, autonomous teams in place for the most active parts of the project, so we only get involved when there is a problem, or if a technical question comes up which doesn’t “fit” the charter of an established team. It’s something of a catch-all. I’d like to re-establish the TB in a more central role in Ubuntu, looking after concerns which affect the project as a whole, such as transparency and development processes. I’m also re-joining Debian as a non-uploading contributor, to work on stimulating and coordinating cooperation between Debian and Ubuntu. I’m looking forward to working more with Zack on joint projects in this area.
This change will help me to support Canonical and Ubuntu more effectively as they continue to grow and change. I look forward to exercising some mental muscles I haven’t used very much lately, and facing some new challenges as well.
Management and information distortion
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so”
– attributed to Mark Twain
There is a lot that I don’t know about what goes on in my organization. This isn’t only because I can’t observe everything, or because it’s too complex, or because I make mistakes. These are all true, of course, but they’re also obvious. Much more devious is the way the flow of information to me is distorted. It’s distorted by me, and by the people around me, whether any of us are aware of it or not. This is most apparent in considering how people feel about their managers: this is why I have a deeply flawed view of what it’s really like to work for me.
My theory is that power bends information like gravity bends light. The effect is more pronounced with people of greater mass (more authority), and lessened with distance (less direct influence). The more directly you influence someone else’s fate, the more it is in their self-interest to be guarded around you. This means that the people closest to you, who you receive the most information from, may have the most difficulty being open with you, especially if it’s bad news. It also means that the higher your standing in the corporate hierarchy, the more influence you wield, the more people are affected by this.
Pretty scary, right?
Some managers respond to this terrifying reality by trying to collect more information. They’ll quietly cross-check what people are telling them, asking people in different levels of the organization, routing around managers, hoping to get “the real story”. This usually backfires, because it signals distrust to the people involved and makes the distortion worse.
Another common response is to check in constantly, trying to monitor and control the work as closely as possible (so-called micro-management). This is even worse; not only does it signal distrust, but managers who do this become more personally attached to outcomes, and lose perspective on progress and quality due to information overload, self-enhancement bias, and neglect of managerial work. The more it becomes “your” work rather than the team’s, the harder it is to see it objectively.
So what’s a better way to respond to this phenomenon? Here’s what I try to do:
- Accept it – You’ll never have certainty about what’s happening, so get used to it, and don’t let it paralyze you. Learn decision making strategies which cope well with information noise, and allow you to experiment and adapt.
- Admit it – Everyone else knows that you have this distortion field around you. If you pretend it isn’t there, you’ll appear deluded. Acknowledge that you don’t know, don’t understand, and can’t control.
- Trust – The more you trust someone, the more they trust you. The more someone trusts you, the more confident they can be in telling you what they think. Be grateful for bad news, and never shoot the messenger.
- Delegate – Enable people with a less distorted view of the situation to make local decisions. Don’t make people wait for information to propagate through you before acting, unless there is a clear and sufficient benefit to the organization.
I was inspired to think and write about this today after listening to Prof. Robert Sutton’s speech at the California Commonwealth Club, which Lindsay Holmwood shared with me.
The paradox of Steve Jobs
Steve Jobs is a name that comes up a lot when talking to businesspeople, especially in the technology industry. His ideas, his background, his companies, their products, and his personal style are intertwined in the folklore of tech. I have no idea whether most of it is fiction or not, and I write this with apologies to Mr. Jobs for using him as shorthand. I have never met him, and my point here has nothing to do with him personally.
What I want to discuss is the behavior of people who invoke the myth of Steve Jobs. In my (entirely subjective) experience, it seems to me that there is a pattern which comes up again and again: People seem to want to discuss and emulate the worst of his alleged qualities.
Jobs has been characterized as abusive to his employees, dismissive of his business partners, harshly critical of mistakes, punishingly secretive, and otherwise extremely difficult to work with. Somehow, it is these qualities which are put forward as worthy of discussion, inspiration and emulation. Is this a simple case of confusing correlation with causation? Do people believe that Steve Jobs is successful because of these traits? Perhaps it is a way of coping with one’s own character flaws: if Jobs can “get away” with such misbehavior, then perhaps we can be excused from trying to improve ourselves. Or is there something more subtle going on here? Maybe this observation is an effect of my own cognitive biases, as it is only anecdotal.
As with any successful person, Jobs surely has qualities and techniques which are worthy of study, and perhaps even emulation. Although direct comparison can be problematic, luminaries like Jobs can provide valuable inspiration. I’d just like to hear more about what they’re doing right.
Perhaps this is an argument for drawing inspiration from people you know personally, rather than from second-hand or fictitious accounts of someone else’s life. I’ve been fortunate to be able to work with many different people, each with their own strengths, weaknesses and style. I’ve seen those characteristics first-hand, so I also have the context to understand why they were successful in particular situations. If there’s one thing I’ve learned about leadership, it’s that it’s highly context-sensitive: what worked well in one situation can be disastrous in another. Is your company really that much like Apple? Probably not.
DebConf 10: Last day and retrospective
DebConf continued until Saturday, but Friday the 6th was my last day as I left New York that evening. I’m a bit late in getting this summary written up.
Making Debian Rule, Again (Margarita Manterola)
Marga took a bold look at the challenges facing Debian today. She says that Debian is perceived to be less innovative, out of date, difficult to use, and shrinking as a community. She called out Ubuntu as the “elephant in the room”, which is “‘taking away’ from Debian.” She insists that she is not opposed to Ubuntu, but that nonetheless Ubuntu is to some extent displacing Debian as a focal point for newcomers (both users and contributors).
Marga points out that Debian’s work is still meaningful, because many users still prefer Debian, and it is perceived to be of higher quality, as well as being the essential basis for derivatives like Ubuntu.
She conducted a survey (about 40 respondents) to ask what Debian’s problems are, and grouped them into categories like “motivation” and “communication” (tied for the #1 spot), “visibility” (#3, meaning public awareness and perception of Debian) and so on. She went on to make some suggestions about how to address these problems.
On the topic of communication, she proposed changing Debian culture by:
- Spreading positive messages, celebrating success
- Thanking contributors for their work
- Avoiding escalation by staying away from email and IRC when angry
- Treating every contributor with respect, “no matter how wrong they are”
This stimulated a lot of discussion, and most of the remaining time was taken up by comments from the audience. The video has been published, and offers a lot of insight into how Debian developers perceive each other and the project. She also made suggestions for the problems of visibility and motivation. These are crucial issues for Debian devotees to be considering, and I applaud Marga for her fortitude in drawing attention to them. This session was one of the highlights of this DebConf, and catalyzed a lot of discussion of vital issues in Debian.
Following her talk, there was a further discussion in the hallway which included many of the people who commented during the session, mostly about how to deal with problematic behavior in Debian. Although I agreed with much of what was said, I found it a bit painful to watch, because (ironically) this discussion displayed several of the characteristic “people problems” that Debian seems to have:
- Many people had opinions, and although they agreed on many things, agreement was rarely expressed openly. Sometimes it helps a lot to simply say “I agree with you” and leave it at that. Lending support, rather than adding a new voice, helps to build consensus.
- People waited for their turn to talk rather than listening to the person speaking, so the discussion didn’t build momentum toward a conclusion.
- The conversation got louder and more dense over time, making it difficult to enter. It wasn’t argumentative; it was simply loud and fast-paced. This drowned out people who weren’t as vocal or willful.
- Even where agreement was apparent, there was often no clear action agreed. No one had responsibility for changing the situation.
These same patterns have been easy to observe on Debian mailing lists for the past 10+ years. I exhibited them myself when I was active on these lists. This kind of cultural norm, once established, is difficult to intentionally change. It requires a fairly radical approach, which will inevitably mean coping with loss. In the case of a community, this can mean losing volunteer contributors who cannot let go of this norm, and that is an emotionally difficult experience. However, it is nonetheless necessary to move forward, and I think that Debian as a community is capable of moving beyond it.
Juxtaposition
Given my history with both Debian and Ubuntu, I couldn’t help but take a comparative view of some of this. These problems are not new to Debian, and indeed they inspired many of the key decisions we made when founding the Ubuntu project in 2004. We particularly wanted to foster a culture which was supportive, encouraging and welcoming to potential contributors, something Debian has struggled with. Ubuntu has been, quite deliberately, an experiment in finding solutions to problems such as these. We’ve learned a lot from this experiment, and I’ve always hoped that this would help to find solutions for Debian as well.
Unfortunately, I don’t think Debian has benefited from these Ubuntu experiments as much as we might have hoped. A common example of this is the Ubuntu Code of Conduct. The idea of a project code of conduct predates Ubuntu, of course, but we did help to popularize it within the free software community, and this is now a common (and successful) practice used by many free software projects. The idea of behavioral standards for Debian has been raised in various forms for years now, but never seems to get traction. Hearing people talk about it at DebConf, it sometimes seemed almost as if the idea was dismissed out of hand because it was too closely associated with Ubuntu.
I learned from Marga’s talk that Enrico Zini drafted a set of Debian Community Guidelines over four years ago in 2006. It is perhaps a bit long and structured, but is basically excellent. Enrico has done a great job of compiling best practices for participating in an open community project. However, his document seems to be purely informational, without any official standing in the Debian project, and Debian community leaders have hesitated to make it something more.
Perhaps Ubuntu leaders (myself included) could have done more to nurture these ideas in Debian. At least in my experience, though, I found that my affiliation with Ubuntu almost immediately labeled me an “outsider” in Debian, even when I was still active as a developer, and this made it very difficult to make such proposals. Perhaps this is because Debian is proud of its independence, and does not want to be unduly influenced by external forces. Perhaps the initial “growing pains” of the Debian/Ubuntu relationship got in the way. Nonetheless, I think that Debian could be stronger by learning from Ubuntu, just as Ubuntu has learned so much from Debian.
Closing thoughts
I enjoyed this DebConf very much. This was the first DebConf to be hosted in the US, and there were many familiar faces that I hadn’t seen in some time. Columbia University offered an excellent location, and the presentation content was thought-provoking. There seemed to be a positive attitude toward Ubuntu, which was very good to see. Although there is always more work to do, it feels like we’re making progress in improving cooperation between Debian and Ubuntu.
I was a bit sad to leave, but was fortunate enough to meet up with Debian folk during my subsequent stay in the Boston area as well. It felt good to reconnect with this circle of friends again, and I hope to see you again soon.
Looking forward to next year’s DebConf in Bosnia…
Habit forming
I find that habits are best made and broken in sets. If I want to form a new habit, I’ll try to get rid of an old one at the same time. I don’t know why this works, but it seems to. Perhaps I only have room in my head for a certain number of habits, so if I want a new one, then an old one has to go. I’m sure some combinations are better than others.
I’m currently working on changing some habits, including:
- Start exercising, swimming three times per week
- Stop drinking alcohol entirely
- Start a consistent flossing routine
I’m thinking of adding a reading habit to the set, but it’s going well so far and I don’t want to overdo it. I feel good, and am forming a new routine.
The flossing is definitely the hardest of the three. I hate pretty much everything about flossing. It also unbalances the set, so that I have a net gain of one habit. Maybe that’s the real reason, and if I broke another habit, it would get easier.
Does anyone else have this experience? What sort of tricks do you employ to help you change your behavior?
Finishing books
Having invested in some introspection into my reading habits, I made up my mind to dial down my consumption of bite-sized nuggets of online information, and finish a few books. That’s where my bottleneck has been for the past year or so. Not in selecting books, not in acquiring books, and not in starting books either. I identify promising books, I buy them, I start reading them, and at some point, I put them down and never pick them back up again.
Until now. Over the weekend, I finished two books. I started reading both in 2009, and they each required my sustained attention for a period measured in hours in order to finish them.
Taking a tip from Dustin, I decided to try alternating between fiction and non-fiction.
Jitterbug Perfume by Tom Robbins
This was the first book I had read by Tom Robbins, and I am in no hurry to read any more. It certainly wasn’t without merit: its themes were clever and artfully interwoven, and the prose elicited a silent chuckle now and again. It was mainly the characters which failed to earn my devotion. They spoke and behaved in ways I found awkward at best, and problematic at worst. Race, gender, sexuality and culture each endured some abuse on the wrong end of a pervasive white male heteronormative American gaze.
I really wanted to like Priscilla, who showed early promise as a smart, self-reliant individual, whose haplessness was balanced by a strong will and sense of adventure. Unfortunately, by the later chapters, she was revealed as yet another vacant vessel yearning to be filled by a man. She’s even the steward of a symbolic, nearly empty perfume bottle throughout the book. Yes, really.
Managing Humans by Michael Lopp
Of the books I’ve read on management, this one is perhaps the most outrageously reductionist. Many management books are like this, to a degree. They take the impossibly complex problem domain of getting people to work together, break it down into manageable problems with tidy labels, and prescribe methods for solving them (which are hopefully appropriate for at least some of the reader’s circumstances).
Managing Humans takes this approach to a new level, drawing neat boxes around such gestalts as companies, roles, teams and people, and assigning them Proper Nouns. Many of these bear a similarity to concepts which have been defined, used and tested elsewhere, such as psychological types, but the text makes no effort to link them to his own. Despite being a self-described collection of “tales”, it’s structured like a textbook, ostensibly imparting nuggets of managerial wisdom acquired through lessons learned in the Real World (so pay attention!). However, as far as I can tell, the author’s experience is limited to a string of companies of a very specific type: Silicon Valley software startups in the “dot com” era.
Lopp (also known as Rands) does have substantial insight into this problem domain, though, and does an entertaining job of illustrating the patterns which have worked for him. If you can disregard the oracular tone, grit your teeth through the gender stereotyping, and add an implicit preface that this is (sometimes highly) context-sensitive advice, this book can be appreciated for what it actually is: a coherent, witty and thorough exposition of how one particular manager does their job.
I got some good ideas out of this book, and would recommend it to someone working in certain circumstances, but as with Robbins, I’m not planning to track down further work by the same author.
Rethinking the Ubuntu Developer Summit
This is a repost from the ubuntu-devel mailing list, where there is probably some discussion happening by now.
After each UDS, the organizers evaluate the event and consider how it could be further improved in the future. As a result of this process, the format of UDS has evolved considerably, as it has grown from a smallish informal gathering to a highly structured matrix of hundreds of 45-to-60-minute sessions with sophisticated audiovisual facilities.
If you participated in UDS 10.10 (locally or online), you have hopefully already completed the online survey, which is an important part of this evaluation process.
A survey can’t tell the whole story, though, so I would also like to start a more free-form discussion here among Ubuntu developers as well. I have some thoughts I’d like to share, and I’m interested in your perspectives as well.
Purpose
The core purpose of UDS has always been to help Ubuntu developers to explore, refine and share their plans for the subsequent release. It has expanded over the years to include all kinds of contributors, not only developers, but the principle remains the same.
We arrive at UDS with goals, desires and ideas, and leave with a plan of action which guides our work for the rest of the cycle.
The status quo
UDS looks like this:
[screenshot of the UDS schedule grid]
The screenshot is only 1600×1200, so there are another 5 columns off the right edge of the screen, for a total of 18 rooms. With 7 time slots per day over 5 days, there are over 500 blocks in the schedule grid. Nine tracks are scattered over the grid. We produce hundreds of blueprints representing projects we would like to work on.
It is an impressive achievement to pull this event together every six months, and the organizers work very hard at it. We accomplish a great deal at every UDS, and should feel good about that. We must also constantly evaluate how well it is working, and make adjustments to accommodate growth and change in the project.
How did we get here?
(this is all from memory, but it should be sufficiently accurate to have this discussion)
In the beginning, before it was even called UDS, we worked from a rough agenda, adding items as they came up, and ticking them off as we finished talking about them. Ad hoc methods worked pretty well at this scale.
As the event grew, and we held more and more discussions in parallel, it was hard to keep track of who was where, and we started to run into contention. Ubuntu and Launchpad were planning their upcoming work together at the same time. One group would be discussing topic A, and find that they needed the participation of person X, who was already involved in another discussion on topic B. The A group would either block, or go ahead without the benefit of person X, neither of which was seen to be very effective. By the end of the week, everyone was mentally and physically exhausted, and many were ill.
As a result, we decided to adopt a schedule grid, and ensure that nobody was expected to be in two places at once. Our productivity depended on getting precisely the right people face to face to tackle the technical challenges we faced. This meant deciding in advance who should be present in each session, and laying out the schedule to satisfy these constraints. New sessions were being added all the time, so the UDS organizers would stay up late at night during the event, creating the schedule grid for the next day. In the morning, over breakfast, everyone would tell them about errors, and request revisions to the schedule. Revisions to the schedule were painful, because we had to re-check all of the constraints by hand.
So, in the geek spirit, we developed a program which would read in the constraints and generate an error-free schedule. The UDS organizers ran this at the end of each day during the event, checked it over, and posted it. In the morning, over breakfast, everyone would tell them about constraints they hadn’t been aware of, and request revisions to the schedule. Revisions to the schedule were painful, because a single changed constraint would completely rearrange the schedule. People found themselves running all over the place to different rooms throughout the day, as they were scheduled into many different meetings back-to-back.
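As an aside, the heart of that scheduling problem is easy to see in miniature. The toy sketch below (in Python, with invented session data; it is not the actual UDS tool) checks the one constraint described above, that nobody is expected to be in two places at once, and places sessions greedily:

```python
# Toy sketch of the core scheduling constraint: place each session into a
# (day, slot, room) cell so that no required participant is booked twice in
# the same time slot. The session data and greedy strategy are invented for
# illustration; the real UDS tool and its constraint format were different.

from itertools import product

sessions = {                      # session -> people who must attend (hypothetical)
    "desktop-roadmap": {"alice", "bob"},
    "server-cloud":    {"bob", "carol"},
    "kernel-config":   {"dave"},
}
slots = list(product(range(5), range(7)))   # 5 days x 7 time slots
rooms = ["room-1", "room-2"]

schedule = {}                     # (slot, room) -> session name
busy = {}                         # slot -> people already booked in that slot

for name, people in sessions.items():
    for slot, room in product(slots, rooms):
        if (slot, room) in schedule:
            continue              # room already taken in this slot
        if busy.get(slot, set()) & people:
            continue              # someone would be in two places at once
        schedule[(slot, room)] = name
        busy.setdefault(slot, set()).update(people)
        break
    else:
        print(f"cannot place {name} without a conflict")

for (slot, room), name in sorted(schedule.items()):
    print(slot, room, name)
```

A solver like this re-plans from scratch on every run, which hints at why a single changed constraint could completely rearrange the resulting grid.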
At around this point, UDS had become too big, and had too many constraints, to plan on the fly (unconference style). We resolved to plan more in advance, and agree on the scheduling constraints ahead of time. We divided the event into tracks, and placed each track in its own room. Most participants could stay in one place throughout the day, taking part in a series of related meetings except where they were specifically needed in an adjacent track. We created the schedule through a combination of manual and automatic methods, so that scheduling constraints could be checked quickly, but a human could decide how to resolve conflicts. There was time to review the schedule before the start of the event, to identify and fix problems. Revisions to the schedule during the event were fewer and less painful. We added keynote presentations, to provide opportunities to communicate important information to everyone, and ease back into meetings after lunch. Everyone was still exhausted and/or ill, and tiredness took its toll on the quality of discussion, particularly toward the end of the week.
Concerns were raised that people weren’t participating enough, and might stay on in the same room passively when they might be better able to contribute to a different session happening elsewhere. As a result, the schedule was randomly rearranged so that related sessions would not be held in the same room, and everyone would get up and move at the end of each hour.
This brings us roughly to where things stand today.
Problems with the status quo
- UDS is big and complex. Creating and maintaining the schedule is a lot of work in itself, and this large format requires a large venue, which in turn requires more planning and logistical work (not to mention cost). This is only worthwhile if we get proportionally more benefit out of the event itself.
- UDS produces many more blueprints than we need for a cycle. While some of these represent an explicit decision not to pursue a project, most of them are set aside simply because we can’t fit them in. We have the capacity to implement over 100 blueprints per cycle, but we have *thousands* of blueprints registered today. We finished less than half of the blueprints we registered for 10.04. This means that we’re spending a lot of time at UDS talking about things which can’t get done that cycle (and may never get done).
- UDS is (still) exhausting. While we should work hard, and a level of intensity helps to energize us, I think it’s a bit too much. Sessions later in the week are substantially more sluggish than early on, and don’t get the full benefit of the minds we’ve brought together. I believe that such an intense format does not suit the type of work being done at the event, which should be more creative and energetic.
- The format of UDS is optimized for short discussions (as many as we can fit into the grid). This is good for many technical decisions, but does not lend itself as well to generating new ideas, deeply exploring a topic, building broad consensus or tackling “big picture” issues facing the project. These deeper problems sometimes require more time. They also benefit tremendously from face-to-face interaction, so UDS is our best opportunity to work on them, and we should take advantage of it.
- UDS sessions aim for the minimum level of participation necessary, so that we can carry on many sessions in parallel: we ask, “who do we need in order to discuss this topic?” This is appropriate for many meetings. However, some would benefit greatly from broader participation, especially from multiple teams. We don’t always know in advance where a transformative idea will come from, and having more points of view represented would be valuable for many UDS topics.
- UDS only happens once per cycle, but design and planning need to continue throughout the cycle. We can’t design everything up front, and there is a lot of information we don’t have at the beginning. We should aim to use our time at UDS to greatest effect, but also make time to continue this work during the development cycle. “design a little, build a little, test a little, fly a little”
Proposals
- Concentrate on the projects we can complete in the upcoming cycle. If we aren’t going to have time to implement something until the next cycle, the blueprint can usually be deferred to the next cycle as well. By producing only moderately more blueprints than we need, we can reduce the complexity of the event, avoid waste, prepare better, and put most of our energy into the blueprints we intend to use in the near future.
- Group related sessions into clusters, and work on them together, with a similar group of people. By switching context less often, we can more easily stay engaged, get less fatigued, and make meaningful connections between related topics.
- Organize for cross-team participation, rather than dividing teams into tracks. A given session may relate to a Desktop Edition feature, but depends on contributions from more than just the Desktop Team. There is a lot of design going on at UDS outside of the “Design” track. By working together to achieve common goals, we can more easily anticipate problems, benefit from diverse points of view, and help each other more throughout the cycle.
- Build in opportunities to work on deeper problems, during longer blocks of time. As a platform, Ubuntu exists within a complex ecosystem, and we need to spend time together understanding where we are and where we are going. As a project, we have grown rapidly, and need to regularly evaluate how we are working and how we can improve. This means considering more than just blueprints, and sometimes taking more than an hour to cover a topic.
QCon London 2010: Day 1
For the first time in several years, I had the opportunity to attend a software conference in the city where I lived at the time. I’ve benefited from many InfoQ articles in the past couple of years, and watched recordings of some excellent talks from previous QCon events, so I jumped at the opportunity to attend QCon London 2010. It is being held in the Queen Elizabeth II Conference Center, conveniently located a short walk away from Canonical’s London office.
Whenever I attend conferences, I can’t help taking note of which operating systems are in use, and this tells me something about the audience. I was surprised to notice that in addition to the expected Mac and Windows presence, there was a substantial Ubuntu contingent and some Fedora as well.
Today’s tracks included two of particular interest to me at the moment: “Dev and Ops: A single team” and the unfortunately gendered “Software Craftsmanship”.
Jason Gorman: Beyond Masters and Apprentices
A Scalable, Peer-led Model For Building Good Habits In Large & Diverse Development Teams
Jason explained the method he uses to coach software developers.
I got a bad seat on the left side of the auditorium, where it was hard to see the slides because they were blocked by the lectern, so I may have missed a few points.
He began by outlining some of the primary factors which make software more difficult to change over time:
- Readability: developers spend a lot of their time trying to understand code that they (or someone else) have written
- Complexity: as well as making code more difficult to understand, complexity increases the chance of errors. More complex code can fail in more ways.
- Duplication: when code is duplicated, it’s more difficult to change because we need to keep track of the copies and often change them all
- Dependencies and the “ripple effect”: highly interdependent code is more difficult to change, because a change in one place requires corresponding changes elsewhere
- Regression Test Assurance: I didn’t quite follow how this fit into the list, to be honest. Regression tests are supposed to make it easier to change the code, because errors can be caught more easily.
He then outlined the fundamental principles of his method:
- Focus on Learning over Teaching – a motivated learner will find their own way, so focus on enabling them to pull the lesson rather than pushing it to them (“there is a big difference between knowing how to do something and being able to do it”)
- Focus on Ability over Knowledge – learn by doing, and evaluate progress through practice as well (“how do you know when a juggler can juggle?”)
…and went on to outline the process from start to finish:
- Orientation, where peers agree on good habits related to the subject being learned. The goal seemed to be to draw out knowledge from the group, allowing them to define their own school of thought with regard to how the work should be done. In other words, learn to do what they know, rather than trying to inject knowledge.
- Practice programming, trying to exercise these habits and learn “the right way to do it”
- Evaluation through peer review, where team members pair up and observe each other. Over the course of 40-60 hours, they watch each other program and check off where they are observed practicing the habits.
- Assessment, where learners practice a time-boxed programming exercise, which is recorded. The focus is on methodical correctness, not speed of progress. Observers watch the recording (which only displays the code), and note instances where the habit was not practiced. The assessment is passed only if fewer than three errors are noticed.
- Recognition, which comes through a certificate issued by the coach, but also through admission to a networking group on LinkedIn, promoting peer recognition
Jason noted that this method of assessing was good practice in itself, helping learners to practice pairing and observation in a rigorous way.
After the principal coach coaches a pilot group, the pilot group then goes on to coach others while they study the next stage of material.
To conclude, Jason gave us a live demo of the assessment technique, by launching Eclipse and writing a simple class using TDD live on the projector. The audience were provided with worksheets containing a list of the habits to observe, and instructed to note instances where he did not practice them.
Julian Simpson: Siloes are for farmers
Production deployments using all your team
After a brief introduction to the problems targeted by the devops approach, Julian offered some advice on how to do it right.
He began with the people issues, reminding us of Weinberg’s second law, which is “no matter what they tell you, it’s always a people problem”.
His people tips:
- In keeping with a recent trend, he criticized email as a severely flawed communication medium, best avoided.
- Respect everyone
- Have lunch with people on the other side of the wall
- Discuss your problems with other groups (don’t just ask for a specific solution)
- Invite everyone to stand-ups and retrospectives
- Co-locate the sysadmins and developers (Thomas Allen)
Next, a few process suggestions:
- Avoid code ownership generally (or rather, promote joint/collective ownership)
- Pair developers with sysadmins
- It’s done when the code is in production (I would rephrase as: it’s not done until the code is in production)
and then tools:
- Teach your sysadmins to use version control
- Help your developers write performant code
- Help developers with managing their dev environment
- Run your deploy scripts via continuous integration (leading toward continuous deployment)
- Use Puppet or Chef (useful as a form of documentation as well as deployment tools, and on developer workstations as well as servers)
- Integrate monitoring and continuous integration (test monitoring in the development environment)
- Deliver code as OS packages (e.g. RPM, DEB)
- Separate binaries and configuration
- Harden systems immediately and enable logging for tuning security configuration (i.e. configure developer workstations with real security, making the development environment closer to production)
- Give developers access to production logs and data
- Re-create the developer environment often (to clear out accumulated cruft)
I agreed with a lot of what was said, objected to some, and lacked clarity on a few points. I think this kind of material is well suited to a multi-way BOF style discussion rather than a presentation format, and would have liked more opportunity for discussion.
Lars George and Fabrizio Schmidt: Social networks and the Richness of Data
Getting distributed webservices done with NoSQL
Lars and Fabrizio described the general “social network problem”, and how they went about solving it. This problem space involves the processing, aggregation and dissemination of notifications for a very high volume of events, as commonly manifested in social networking websites such as Facebook and Twitter, which connect people to each other to share updates. Apparently simple functionality, such as displaying the most recent updates from one’s “friends”, quickly becomes complex at scale.
As an example of the magnitude of the problem, he explained that they process 18 million events per day, and how in the course of storing and sharing these across the social graph, some operations peak as high as 150,000 per second. Such large and rapidly changing data sets represent a serious scaling challenge.
They originally built a monolithic, synchronous system called Phoenix, based on:
- LAMP frontends: Apache+PHP+APC (500 of them)
- Sharded MySQL multi-master databases (150 of them)
- memcache nodes with 1TB+ (60 of them)
They then added on asynchronous services alongside this, to handle things like Twitter and mobile devices, using Java (Tomcat) and RabbitMQ. The web frontend would send out AMQP messages, which would then be picked up by the asynchronous services, which would (where applicable) communicate back to Phoenix through an HTTP API call.
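To make the shape of that flow concrete, here is a rough sketch in Python (using the pika and requests libraries), with hypothetical queue names, payloads and URLs; their actual stack used PHP on the frontend and Java/Tomcat for the consumers, so this only illustrates the pattern:

```python
import json

import pika       # AMQP client, talks to RabbitMQ
import requests   # plain HTTP client for the callback into Phoenix

PHOENIX_API = "http://phoenix.internal.example/api/events"   # hypothetical endpoint

# --- producer side: roughly what the web frontend does on a user action ---
conn = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = conn.channel()
channel.queue_declare(queue="activity-events", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="activity-events",
    body=json.dumps({"user": 42, "action": "status-update"}),
)

# --- consumer side: roughly what an asynchronous service does ---
def handle(ch, method, properties, body):
    event = json.loads(body)
    # ... fan out to Twitter, mobile devices, etc. ...
    requests.post(PHOENIX_API, json=event)          # report back to Phoenix over HTTP
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="activity-events", on_message_callback=handle)
channel.start_consuming()
```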
When the time came to re-architect their activity system, they identified the following requirements:
- endless scalability
- storage- and cloud-independent
- fast
- flexible and extensible data model
This led them to an architecture based on:
- Nginx + Janitor
- Embedded Jetty + RESTeasy
- NoSQL storage backends (no fewer than three: Redis, Voldemort and Hazelcast)
They described this architecture in depth. The things which stood out for me were:
- They used different update strategies (push vs. pull) depending on the level of fan-out for the node (i.e. number of “friends”)
- They implemented a time-based activity filter which recorded a global timeline, from minutes out to days. Rather than traversing all of the user’s “friends” looking for events, they just scan the most recent events to see if their friends appear there.
- They created a distributed, scalable concurrent ID generator based on Hazelcast, which uses distributed locking to assign ranges to nodes, so that nodes can then quickly (locally) assign individual IDs (see the sketch after this list)
- It’s interesting how many of the off-the-shelf components had native scaling, replication, and sharding features. This sort of thing is effectively standard equipment now.
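The range-allocation idea behind that ID generator is simple enough to sketch. In the toy version below, an ordinary threading.Lock and an in-process counter stand in for Hazelcast’s distributed lock and shared state, so it only illustrates the pattern rather than their implementation: a node claims a block of IDs under the lock, then hands out individual IDs locally with no further coordination.

```python
# Minimal sketch of range-based ID allocation. A real deployment would replace
# _global_lock and _next_block_start with a cluster-wide lock and shared state
# (e.g. in Hazelcast); everything here is a single-process stand-in.

import threading

BLOCK_SIZE = 1000
_global_lock = threading.Lock()   # stand-in for the cluster-wide lock
_next_block_start = 0             # stand-in for cluster-shared state

def _claim_block():
    """Reserve the next block of BLOCK_SIZE IDs for this node."""
    global _next_block_start
    with _global_lock:
        start = _next_block_start
        _next_block_start += BLOCK_SIZE
    return start, start + BLOCK_SIZE

class NodeIdGenerator:
    """Hands out IDs from a locally owned range; refills when exhausted."""
    def __init__(self):
        self._low, self._high = _claim_block()
        self._local_lock = threading.Lock()

    def next_id(self):
        with self._local_lock:
            if self._low >= self._high:          # local range exhausted
                self._low, self._high = _claim_block()
            value = self._low
            self._low += 1
            return value

gen = NodeIdGenerator()
print([gen.next_id() for _ in range(3)])   # e.g. [0, 1, 2]
```

Because coordination only happens once per block, the common case of handing out a single ID never needs to touch the cluster at all.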
Their list of lessons learned:
- Start benchmarking and profiling your app early
- A fast and easy deployment keeps motivation high
- Configure Voldemort carefully (especially on large heap machines)
- Read the mailing lists of the NoSQL system you use
- No solution in docs? – read the sources
- At some point stop discussing and just do it
Andres Kitt: Building Skype
Learnings from almost five years as a Skype Architect
Andres began with an overview of Skype, which serves 800,000 registered users per employee (650 vs. 521 million). Their core team is based in Estonia. Their main functionality is peer-to-peer, but they do need substantial server infrastructure (PHP, C, C++, PostgreSQL) for things like peer-to-peer supporting glue, e-commerce and SIP integration. Skype uses PostgreSQL heavily in some interesting ways, in a complex multi-tiered architecture of databases and proxies.
His first lesson was that technical rules of thumb can lead us astray. It is always tempting to use patterns that have worked for us previously, in a different project, team or company, but they may not be right for another context. They can and should be used as a starting point for discussion, but not presumed to be the solution.
Second, he emphasized the importance of paying attention to functional architecture, not only technical architecture. As an example, he showed how the Skype web store, which sells only 4 products (SkypeIn, SkypeOut, voicemail, and subscription bundles of the previous three), became incredibly complex, because no one was responsible for its functional architecture. Complex functional architecture leads to complex technical architecture, which is undesirable as he noted in his next point.
Keep it simple: minimize functionality, and minimize complexity. He gave an example of how their queuing system’s performance and scalability were greatly enhanced by removing functionality (the guarantee to deliver messages exactly once), which enabled the simplification of the system.
He also shared some organizational learnings, which I appreciated. Maybe my filters are playing tricks on me, but it seems as if more and more discussion of software engineering is focusing on organizing people. I interpret this as a sign of growing maturity in the industry, which (as Andres noted) has its roots in a somewhat asocial culture.
He noted that architecture needs to fit your organization. Designs need to be measured primarily by how well they solve business problems, rather than by their beauty or elegance.
He stressed the importance of communication, a term which I think is becoming so overused and diluted in organizations that it is not very useful. It’s used to refer to everything from roles and responsibilities, to personal relationships, to cultural norming, and more. In the case of Skype, what Andres learned was the importance of organizing and empowering people to facilitate alignment, information flow and understanding between different parts of the business. Skype evolved an architecture team which interfaces between (multiple) business units and (multiple) engineering teams, helping each to understand the other and taking responsibility for the overall system design.
Conclusion
Overall, I thought the day’s talks gave me new insight into how Internet applications are being developed and deployed in the real world today. They affirmed some of what I’ve been wondering about, and gave me some new things to think about as well. I’m looking forward to tomorrow.
Amplify Your Effectiveness (AYE) Conference: Day 3 (afternoon)
Seeing Process: Making Process Visible (Steve Smith)
This session allowed me to practice capturing and visualizing the processes used by a team. In order to create a process to work with, we first conducted a simulation, where we (about 15 participants) organized ourselves to produce a product (sorted decks of cards) from raw materials (many cards from different decks). Volunteers took on the roles of the customer, the customer’s QA manager, the factory owner and the factory production manager, while the rest of us (myself included) acted as factory workers. As a manager in my day job, I wanted to get a different perspective in this exercise, and practice different kinds of leadership.
While the customer and management were sequestered for further instruction, the workers waited outside and talked about what would happen. There was quite a bit of talk of disrupting the activity, making things difficult for management, which I hoped wouldn’t amount to anything once we got started. The point of the exercise was to learn something, and being too disruptive would hinder that.
More interestingly, several of the workers had participated in other, similar simulations, and so were keen to avoid the obvious pitfalls. We collectively resolved to work cooperatively and treat each other well during the simulation: to act as a team. This turned out to be a defining moment in the simulation.
The appointed manager started out by briefing us on the parameters of the simulation, insofar as he was aware of them, and setting some ground rules. He explained that there was a mess inside, preparing us for what to expect. Realizing the group might be too big for the task at hand, he explained that some people might not be needed, and they should find something to do on their own without interfering with the rest of the group. This was also a key action.
The manager also made a point of wielding explicit power, which I found a bit excessive. He said there would be layoffs, and threatened that as punishment for the workers. Particularly given the team dynamic, this threatened to unite the team against him, which I felt was an unnecessary risk.
When we were ready to start work, the manager gave a lot of explicit instruction, providing an algorithm for the first phase of sorting. It wasn’t a very good algorithm, and this was evident to some members of the group, but the workers didn’t seem to feel it was worth the time which would be wasted arguing. That is, all but one, who caused the group to block for a couple of minutes by refusing to cooperate or suggest an alternative.
What was missing at this point was an understanding of the goal. The team was prepared to work on the task, but without knowing what the result was expected to be, they hesitated and were confused. The manager offered a vote on a small change to the algorithm, and it was agreed. Gradually, we established that the result should be piles of cards sorted according to the pattern on the back, and the team quickly self-organized around this goal, largely disregarding the proposed algorithms. This beautifully illustrated one of the lessons of the exercise, by showing how quickly and naturally processes diverge from what is prescribed.
I took quite an active role in learning the parameters of the simulation and helping to facilitate the group’s self-organization. During the initial card-sorting phase, I established a clearinghouse for cards of all types, and invited my colleagues to bring me any type of card to add to my stacks. This simple act made it obvious how to contribute to the goal, and organized the physical materials for later use. Someone else was doing the same thing, and when all of the cards were stacked, we just combined our stacks.
The manager explained that we needed to sort the decks. I asked what ordering should be used, and he pointed to a flipchart where instructions were written. The workers took the stacks of cards off to various parts of the room to sort them. Once I had one complete, I took it to the QA manager, who said it was incorrect. I checked it against the instructions, and it was clear that they were not effective in communicating the correct ordering. I rewrote them on another flipchart, checked them with the QA manager, and took the old instructions down.
Before I went back to sorting cards, I talked to other people sorting, and made sure that they knew about the new instructions. I checked if they had any complete decks, helped them do the final (quick) sorting by suit, and carried their completed decks to QA. This simple action both boosted our immediate output (by pushing work in progress through the process) and accelerated production (by spreading information about the requirements).
The next time I handed in a batch of cards to QA, I was recruited to work at the QA table, because there was a shortage of capacity there. After checking a couple of decks, I had had experience with every stage of the process in the “factory” and thus a strong working knowledge of the overall process. For the rest of the time, I sought out work in progress on the factory floor and helped to finish it off. Wherever I found stacks of cards, I just sorted them and took them to QA.
Throughout this process, I remember hearing the disconcerted voice of the owner, who seemed to be worried about everything. She was frightened when I fixed the instructions on the flipchart, and confused when I changed roles or did anything else unexpected. She noticed when people were not working, and wanted the manager to do something about it. I’m not sure how that role was structured, and what pressures she was responding to. I had no contact at all with the customer.
Once the simulation was over, the customer checked several of the delivered products, rejecting one which had an error, but still ending up with enough to meet the requirements. All told, the simulation ran very smoothly, and I was keen to analyze why.
Next, each of us drew a diagram of the process from our point of view. I often struggle with creating diagrams by hand, because I tend to work out the structure as I go. This time, I stopped for a moment and sketched the major phases of the process in my notebook before I started drawing. This resulted in a much clearer and neater diagram than I normally produce.
We discussed the diagrams as a group, which revealed a lot about how things had really worked, more than anyone had been able to see individually. It was surprising how much information was disseminated in such a short amount of time through drawing and discussing the diagrams. It helped that the entire exercise was very fresh in our minds, but I got the impression that this would be a useful way to gather and share information from a group under normal circumstances as well.
I had to leave a bit early in order to catch my flight, so I missed the end of the discussion.
Amplify Your Effectiveness (AYE) Conference: Day 3 (morning)
Today is my last day at the conference, and I’ll be leaving before the welcome dinner in order to catch my flight to Dallas. It has been a pleasure meeting everyone, and an invaluable opportunity to get perspective on my work and other parts of my life.
Morning session: Retrospectives (Esther Derby)
The full title of this session was Looking Back, Moving Forward: Continuous Improvement With Effective Retrospectives. I find retrospectives to be a useful tool for process improvement, and am interested in gaining more experience with them and learning new techniques.
To start, the group (about 12 people) conducted a project together. We were provided with a specification for a 3-dimensional model of an urban development project, and some supplies with which to build it. Over the course of 30 minutes, we organized ourselves, performed the task, and successfully delivered the result to a satisfied customer.
Overall, the project went very well, so I was looking forward to the retrospective. In a retrospective, I often find myself focusing on the problems, so this would be good practice in identifying and repeating successes.
To start, we “set the stage” for the retrospective by reviewing the agenda and “working agreements” (pre-established goals and guidelines for behavior). I always try to do this in retrospectives, but the idea of agreeing the guidelines in advance was new to me. This would both help to build support in the team, and save time in the meeting.
Then, we conducted a quick poll of the group to assess the outcome of the project, using a weather metaphor: was it “partly sunny”, “cloudy”, “rain”, etc. I liked this metaphor because I found it more neutral than a rating: weather is something which just happens and is observable, while numbers, for example, seem to carry more judgement.
Next, we broke into smaller groups to collect data (our observations) on the project we had just completed. I liked the idea of doing this in smaller groups, as it promoted more points of view, and also produced some redundant data (which was useful in the analysis to follow). We wrote these observations on sticky notes.
To analyze the data, everyone came back together and plotted it on a “FRIM” chart (frequency vs. impact), where one axis indicated frequency (how often it occurred during the project?) and the other impact (how much impact did it have, and was it positive or negative?). We looked at this data from near and far, looking for patterns and considering individual notes.
The upper left quadrant (low frequency and positive impact) was dominated by things which happened near the beginning of the project: establishing roles, establishing the workspace, etc. The lower left quadrant (low frequency and negative impact) showed mostly items to do with ambiguity: lack of information, confusion and so on. The upper right quadrant (high frequency and positive impact) included ongoing teamwork and the steady progress which helped coalesce and motivate the team. There was nothing in the lower right quadrant (high frequency and negative impact), which I took as a sign of a healthy project. The fact that some of the observations were redundant helped make the patterns clearer, by showing what had been noticed more.
With these observations in mind, we broke into smaller groups again, this time with the objective of generating ideas for improvement, or for repeating our successes. I thought that splitting up the group would bring more ideas out, but in fact we came up with many of the same suggestions. Still, this helped to reinforce the suggestions because they were independently proposed by different groups.
The group then re-formed, and suggestions were posted on a flipchart page with a table for evaluation. The team rated the suggestions according to the amount of effort required, the anticipated impact, and their individual willingness to work on each one. Finally, based on this data, Esther asked for a volunteer to implement the most promising suggestion, choosing just one to keep the amount of change manageable. She asked for a second volunteer to back up and support the first. I liked the idea of rating the suggestions together, and also having someone in a supporting role for the followup action.
We then polled the team again, reusing the weather metaphor, to predict the “weather” for the next project, based on what we learned. Most of the group was optimistic, though a couple of people predicted storms. When asked about their concerns, one of them pointed out that our decision to focus on improving one aspect of our process could cause us to waste too much time up front working on that. To mitigate this, we resolved to limit the amount of time we would spend on it, and he seemed satisfied. The other doomsayer was unrepentant, and said he suspected we would fall prey to the “second system” effect. I got the impression that he wanted to be proven right, and wasn’t interested in avoiding such a problem.
We concluded by reviewing the overall structure of the retrospective, and how the stages fit together: setting the stage, gathering data, analyzing it, selecting actions and wrapping up. The wrap-up included a retrospective on the retrospective itself, to promote improvement of that process. I was a bit concerned we would end up in infinite regress, but Esther stopped there, and didn’t do a retrospective of the retrospective of the retrospective. I don’t normally even do one meta-retrospective, but am considering trying that out now.