We'll see | Matt Zimmerman

a potpourri of mirth and madness

Posts Tagged ‘Conferences’

Looking forward to UDS for Ubuntu 11.04 (Natty)

For some time now, we’ve been gearing up to begin development on Ubuntu 11.04. While some folks have been putting the finishing touches on the 10.10 release, and bootstrapping the infrastructure for 11.04, others have been meeting with Canonical stakeholders, coordinating community brainstorm sessions, and otherwise collecting information about what our priorities should be in the next cycle.

We’re using what we’ve learned to plan the Ubuntu Developer Summit next week in Orlando, where we’ll refine these ideas into a plan for the cycle. We’re organizing UDS a little bit differently this time, with the main program divided into the following tracks to reflect the key considerations for Ubuntu today:

  • Application Developers – Making it faster, easier, and more enjoyable to develop and distribute new applications on (and for) Ubuntu
  • Cloud – Delivering the best experience of cloud computing, whether hosting in a public cloud or building your own private cloud
  • Hardware Compatibility – Measuring and improving compatibility with a wide range of laptops, netbooks, servers and desktops
  • Multimedia – Formulating the best software stacks for graphics, audio and video in Ubuntu
  • Package Selection and System Defaults – Choosing the right components to keep Ubuntu lean, flexible and ready-to-run, while ensuring that the pieces fit and work together cleanly
  • Performance – Squeezing the best performance out of today’s free software stack, from the Linux kernel and GNU toolchain through user interfaces
  • Ubuntu the Project – Continuously improving the way we work together to produce Ubuntu, both within the project and with our upstream and downstream partners

You can click on the links above for a preview of the schedule for the week, with links to more detailed blueprints which will develop during and following UDS. If you’ll be joining us in person, then I’ll see you there! If not, be sure to review Laura’s guide on how to participate remotely.

Written by Matt Zimmerman

October 21, 2010 at 12:29

DebConf 10: Last day and retrospective

DebConf continued until Saturday, but Friday the 6th was my last day as I left New York that evening. I’m a bit late in getting this summary written up.

Making Debian Rule, Again (Margarita Manterola)

Marga took a bold look at the challenges facing Debian today. She says that Debian is perceived to be less innovative, out of date, difficult to use, and shrinking as a community. She called out Ubuntu as the “elephant in the room”, which is “‘taking away’ from Debian.” She insists that she is not opposed to Ubuntu, but that nonetheless Ubuntu is to some extent displacing Debian as a focal point for newcomers (both users and contributors).

Marga points out that Debian’s work is still meaningful, because many users still prefer Debian, and it is perceived to be of higher quality, as well as being the essential basis for derivatives like Ubuntu.

She conducted a survey (about 40 respondents) to ask what Debian’s problems are, and grouped them into categories like “motivation” and “communication” (tied for the #1 spot), “visibility” (#3, meaning public awareness and perception of Debian) and so on. She went on to make some suggestions about how to address these problems.

On the topic of communication, she proposed changing Debian culture by:

  • Spreading positive messages, celebrating success
  • Thanking contributors for their work
  • Avoiding escalation by staying away from email and IRC when angry
  • Treating every contributor with respect, “no matter how wrong they are”

This stimulated a lot of discussion, and most of the remaining time was taken up by comments from the audience. The video has been published, and offers a lot of insight into how Debian developers perceive each other and the project. She also made suggestions for the problems of visibility and motivation. These are crucial issues for Debian devotees to be considering, and I applaud Marga for her fortitude in drawing attention to them. This session was one of the highlights of this DebConf, and catalyzed a lot of discussion of vital issues in Debian.

Following her talk, there was a further discussion in the hallway which included many of the people who commented during the session, mostly about how to deal with problematic behavior in Debian. Although I agreed with much of what was said, I found it a bit painful to watch, because (ironically) this discussion displayed several of the characteristic “people problems” that Debian seems to have:

  • Many people had opinions, and although they agreed on many things, agreement was rarely expressed openly. Sometimes it helps a lot to simply say “I agree with you” and leave it at that. Lending support, rather than adding a new voice, helps to build consensus.
  • People waited for their turn to talk rather than listening to the person speaking, so the discussion didn’t build momentum toward a conclusion.
  • The conversation got louder and more dense over time, making it difficult to enter. It wasn’t argumentative; it was simply loud and fast-paced. This drowned out people who weren’t as vocal or willful.
  • Even where agreement was apparent, there was often no clear action agreed. No one had responsibility for changing the situation.

These same patterns have been easy to observe on Debian mailing lists for the past 10+ years. I exhibited them myself when I was active on these lists. This kind of cultural norm, once established, is difficult to intentionally change. It requires a fairly radical approach, which will inevitably mean coping with loss. In the case of a community, this can mean losing volunteer contributors who cannot let go of this norm, and that is an emotionally difficult experience. However, it is nonetheless necessary to move forward, and I think that Debian as a community is capable of moving beyond it.

Juxtaposition

Given my history with both Debian and Ubuntu, I couldn’t help but take a comparative view of some of this. These problems are not new to Debian, and indeed they inspired many of the key decisions we made when founding the Ubuntu project in 2004. We particularly wanted to foster a culture which was supportive, encouraging and welcoming to potential contributors, something Debian has struggled with. Ubuntu has been, quite deliberately, an experiment in finding solutions to problems such as these. We’ve learned a lot from this experiment, and I’ve always hoped that this would help to find solutions for Debian as well.

Unfortunately, I don’t think Debian has benefited from these Ubuntu experiments as much as we might have hoped. A common example of this is the Ubuntu Code of Conduct. The idea of a project code of conduct predates Ubuntu, of course, but we did help to popularize it within the free software community, and this is now a common (and successful) practice used by many free software projects. The idea of behavioral standards for Debian has been raised in various forms for years now, but never seems to get traction. Hearing people talk about it at DebConf, it sometimes seemed almost as if the idea was dismissed out of hand because it was too closely associated with Ubuntu.

I learned from Marga’s talk that Enrico Zini drafted a set of Debian Community Guidelines over four years ago, in 2006. It is perhaps a bit long and structured, but it is basically excellent. Enrico has done a great job of compiling best practices for participating in an open community project. However, his document seems to be purely informational, without any official standing in the Debian project, and Debian community leaders have hesitated to make it something more.

Perhaps Ubuntu leaders (myself included) could have done more to nurture these ideas in Debian. At least in my experience, though, I found that my affiliation with Ubuntu almost immediately labeled me an “outsider” in Debian, even when I was still active as a developer, and this made it very difficult to make such proposals. Perhaps this is because Debian is proud of its independence, and does not want to be unduly influenced by external forces. Perhaps the initial “growing pains” of the Debian/Ubuntu relationship got in the way. Nonetheless, I think that Debian could be stronger by learning from Ubuntu, just as Ubuntu has learned so much from Debian.

Closing thoughts

I enjoyed this DebConf very much. This was the first DebConf to be hosted in the US, and there were many familiar faces that I hadn’t seen in some time. Columbia University offered an excellent location, and the presentation content was thought-provoking. There seemed to be a positive attitude toward Ubuntu, which was very good to see. Although there is always more work to do, it feels like we’re making progress in improving cooperation between Debian and Ubuntu.

I was a bit sad to leave, but was fortunate enough to meet up with Debian folk during my subsequent stay in the Boston area as well. It felt good to reconnect with this circle of friends again, and I hope to see you again soon.

Looking forward to next year’s DebConf in Bosnia

Written by Matt Zimmerman

August 25, 2010 at 16:57

DebConf 10: Day 3

How We Can Be the Silver Lining of the Cloud (Eben Moglen)

Eben’s talk was on the same topic as his Internet Society talk in February, which I had downloaded and watched some time ago. He challenges the free software community to develop the software to power the “freedom box”, a small, efficient and inexpensive personal server.

Such a system would put users more in control of their online lives, give them better protection for it under the law, and provide a platform for many new federated services.

It sounds like a very interesting project, which I’d like to write more about.

Statistical Machine Learning Analysis of Debian Mailing Lists (Hanna Wallach)

Hanna is bringing together her interests in machine learning and free software by using machine learning techniques to analyze publicly available data from free software communities. In doing so, she hopes to develop tools for studying the patterns of collaboration, innovation and other behavior in these communities.

Her methodology uses statistical topic models, which infer the topic of a document based on the occurrence of topical words, to group Debian mailing list posts by topic. Her example analyzed posts from the debian-project and debian-women mailing lists, inferring a set of topics and categorizing all of the posts according to which topic(s) were represented in them.

Using this data, she could plot the frequency of discussion of each topic over time, which revealed interesting patterns. The audience quickly zeroed in on practical applications for things like flamewar and troll detection.
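For readers unfamiliar with topic models, here is a minimal sketch of this style of analysis in Python, using scikit-learn’s LDA implementation. The library choice and the sample posts are my own illustration, not Hanna’s actual tooling, which wasn’t covered in this level of detail.

```python
# Sketch: infer topics from mailing list posts via Latent Dirichlet Allocation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "the release team discussed the freeze schedule for squeeze",
    "welcoming new contributors to the debian-women project",
    "a long thread about init systems on debian-project",
    # ... one string per mailing list post
]

# Convert posts to word-count vectors, dropping very common English words
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(posts)

# Infer a small number of topics from word co-occurrence patterns
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows: posts, columns: topic weights

# Show the most probable words for each inferred topic
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```

Plotting each post’s topic weights against its date would then give the frequency-over-time view she presented.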

Debian Derivatives BoF (Matt Zimmerman)

I organized this discussion session to share perspectives on Debian derivatives, in particular how we can improve cooperation between derivatives and Debian itself. The room was a bit hard to find, so attendance was relatively small, but this turned out to be a plus. With a smaller group, we were able to get acquainted with each other, and everyone participated.

Unsurprisingly, there were many more representatives from Ubuntu than other derivatives, and I was concerned that Ubuntu would dominate the discussion. It did, but I tried to draw out perspectives from other derivatives where possible.

On the whole, the tone was positive and constructive. This may be due in part to people self-selecting for the BoF, but I think there is a lot of genuine goodwill between Debian and Ubuntu.

Stefano Zacchiroli took notes in Gobby during the session, which I expect he will post somewhere public when he has a chance.

Written by Matt Zimmerman

August 5, 2010 at 15:10

DebConf 10: Day 2

Today was the first day of DebConf proper, where all of the sessions were aimed at project participants.

Bits from the DPL (Stefano Zacchiroli)

Stefano delivered an excellent address to the Debian project. As Project Leader, he offered a perspective on how far Debian has come, raised some of the key questions facing Debian today, and challenged the project to move forward and improve in several important ways.

He asked the audience: Is Debian better than other distributions? Is Debian still relevant? Why/how?

Having asked this question on identi.ca and Twitter recently, he presented a summary. There was a fairly standard list of technical concerns, but also:

  • A focus on quality, as defined by Debian’s highly modular approach. Each package maintainer is an expert on the software they package, and Debian as a whole offers a superior repository of packages.
  • The principles of software freedom, as embodied in Debian’s Social Contract. The Debian community’s current interpretation is a purist one, and Stefano cited the elimination of non-free firmware as a milestone in the upcoming Squeeze release. I wonder, though, how many of the audience, tapping away on WiFi-connected laptops, were able to do so without such firmware.
  • The project’s independent status, supported by donations and volunteers, which empowers it to make its own decisions, free of external impositions.
  • Debian’s ability to make decisions, as embodied in the constitution. This happens mostly through do-ocracy (individuals are empowered to decide questions concerning their own work), though larger scope issues are decided democratically. This one evoked a bit of a chuckle, as decision making in Debian is not always perceived as fully effective.

He pointed out some areas which he would like to see improve, including:

  • Developers accepting shared responsibility for the release as a whole. Making one’s own packages ready for release is necessary, but not sufficient. He cited evidence that the culture around NMUs is changing: historically, due to the do-ocratic system mentioned above, Debian developers have been somewhat territorial about their packages, and non-maintainer uploads were seen as stepping on their toes. However, recent experiments have indicated that this may no longer be the case, and Stefano encouraged more developers to help each other through NMUs.
  • When making decisions, we should seek consensus, not unanimity. In a project with thousands of contributors, whose operations are open to the public, there will never be unanimous support for a proposal, and seeking unanimity leads to stalled decisions.
  • In order to gain more contributors, Debian needs to welcome new and inexperienced contributors, as well as users (who can grow into contributors). He suggested reaching out to derivatives to find more of both. He decried the conventional wisdom that a “thick skin” should be a prerequisite for joining the project, pointing out that this attitude simply leads to fewer contributors. This point was met with applause by the DebConf audience.

All in all, I thought this was an accurate, timely and inspirational message for the project, and the talk is worth watching for any current or prospective contributor to Debian.

Debian Policy BoF (Russ Allbery)

Russ facilitated a discussion about the Debian policy document itself and the process for managing it. He has recently put in a lot of time working on the backlog (down from 160+ to 120), but this is not sustainable for him, and help is needed.

There was a wide-ranging discussion of possible improvements including:

  • Editing the policy manual so that it reads well from start to finish as a document, rather than only serving as a reference
  • Creating a closer linkage between lintian and the policy manual, so that best practices from lintian get documented, and policy changes are accompanied by new checks
  • Separating the normative and informative parts of the policy manual

There was also some discussion in passing of the long-standing confusion (presumably among people new to the project) with regard to how policy is established. In Debian, best practices are first implemented in packages, then documented in policy (not the reverse). Sometimes, improvements are suggested at the policy level, when they need to start elsewhere. I’m not very familiar with how the policy manual is maintained at present, but listening to the discussion, it sounded like it might help to extend the process to include the implementation stage. This would allow standards improvements to be tracked all the way through from concept, to implementation, to documentation.

The Java Packaging Nightmare (Torsten Werner)

Torsten described the current state of Java packaging in Debian and the general problems involved, including licensing issues, build system challenges (e.g. maven) and dependency management. His slides were information-dense, so I didn’t take a lot of notes.

His presentation inspired a lively discussion about why upstream developers of Java applications and libraries often do not engage with Debian. Suggested reasons included:

  • They are not interested in Linux as a target platform
  • Although their code is released under a free license, they are not interested in meeting Debian standards for freedom and license correctness
  • They use Java because it is cross-platform, and so do not want to concern themselves with platform-specific issues
  • Because Java applications are easy to download and run manually, they perceive relatively little value in the Debian packaging system

Collaboration between Ubuntu and Debian (Jorge Castro)

Jorge talked about the connections between Debian and Ubuntu, how people in the projects perceive each other, and how to foster good relationships between developers.

He talked about past efforts to quantify collaboration between the projects, but the focus is now on building personal relationships. There were many good questions and comments afterward, and I’m looking forward to the Debian derivatives BoF session tomorrow to get into more detail.

Tonight is the traditional wine and cheese party. When this tradition started, I was one of just a handful of people in a room with some cheese and paper plates, but it’s now a large social gathering with contributions of cheese and wine from around the world. I’m looking forward to it.

Written by Matt Zimmerman

August 2, 2010 at 23:23

Rethinking the Ubuntu Developer Summit

This is a repost from the ubuntu-devel mailing list, where there is probably some discussion happening by now.

After each UDS, the organizers evaluate the event and consider how it could be further improved in the future. As a result of this process, the format of UDS has evolved considerably, as it has grown from a smallish informal gathering to a highly structured matrix of hundreds of 45-to-60-minute sessions with sophisticated audiovisual facilities.

If you participated in UDS 10.10 (locally or online), you have hopefully already completed the online survey, which is an important part of this evaluation process.

A survey can’t tell the whole story, though, so I would also like to start a more free-form discussion here among Ubuntu developers as well. I have some thoughts I’d like to share, and I’m interested in your perspectives as well.

Purpose

The core purpose of UDS has always been to help Ubuntu developers to explore, refine and share their plans for the subsequent release. It has expanded over the years to include all kinds of contributors, not only developers, but the principle remains the same.

We arrive at UDS with goals, desires and ideas, and leave with a plan of action which guides our work for the rest of the cycle.

The status quo

UDS looks like this:

[screenshot of the UDS schedule grid]

This screenshot is only 1600×1200, so there are another 5 columns off the right edge of the screen for a total of 18 rooms. With 7 time slots per day over 5 days, there are over 500 blocks in the schedule grid. 9 tracks are scattered over the grid. We produce hundreds of blueprints representing projects we would like to work on.

It is an impressive achievement to pull this event together every six months, and the organizers work very hard at it. We accomplish a great deal at every UDS, and should feel good about that. We must also constantly evaluate how well it is working, and make adjustments to accommodate growth and change in the project.

How did we get here?

(this is all from memory, but it should be sufficiently accurate to have this discussion)

In the beginning, before it was even called UDS, we worked from a rough agenda, adding items as they came up, and ticking them off as we finished talking about them. Ad hoc methods worked pretty well at this scale.

As the event grew, and we held more and more discussions in parallel, it was hard to keep track of who was where, and we started to run into contention. Ubuntu and Launchpad were planning their upcoming work together at the same time. One group would be discussing topic A, and find that they needed the participation of person X, who was already involved in another discussion on topic B. The A group would either block, or go ahead without the benefit of person X, neither of which was seen to be very effective. By the end of the week, everyone was mentally and physically exhausted, and many were ill.

As a result, we decided to adopt a schedule grid, and ensure that nobody was expected to be in two places at once. Our productivity depended on getting precisely the right people face to face to tackle the technical challenges we faced. This meant deciding in advance who should be present in each session, and laying out the schedule to satisfy these constraints. New sessions were being added all the time, so the UDS organizers would stay up late at night during the event, creating the schedule grid for the next day. In the morning, over breakfast, everyone would tell them about errors, and request revisions to the schedule. Revisions to the schedule were painful, because we had to re-check all of the constraints by hand.

So, in the geek spirit, we developed a program which would read in the constraints and generate an error-free schedule. The UDS organizers ran this at the end of each day during the event, checked it over, and posted it. In the morning, over breakfast, everyone would tell them about constraints they hadn’t been aware of, and request revisions to the schedule. Revisions to the schedule were painful, because a single changed constraint would completely rearrange the schedule. People found themselves running all over the place to different rooms throughout the day, as they were scheduled into many different meetings back-to-back.
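For the curious, the heart of such a tool is a small backtracking search over scheduling constraints. Below is a toy sketch in Python; the sessions, participants, slots and rooms are all invented for illustration, and the real program handled far more, but it shows both the double-booking check and why a single changed constraint could rearrange everything: the search is free to return a completely different solution.

```python
# Toy UDS-style scheduler: place sessions into (slot, room) pairs so that no
# required participant is double-booked. Purely illustrative data and names.
from itertools import product

sessions = {          # session -> people who must attend (hypothetical)
    "topic-a": {"alice", "xavier"},
    "topic-b": {"bob", "xavier"},
    "topic-c": {"alice", "bob"},
}
slots = ["mon-1", "mon-2", "mon-3"]
rooms = ["room-1", "room-2"]

def conflicts(assignment, session, slot):
    """True if someone needed in `session` is already booked in `slot`."""
    for other, (other_slot, _room) in assignment.items():
        if other_slot == slot and sessions[other] & sessions[session]:
            return True
    return False

def schedule(pending, assignment=None):
    """Backtracking search for a conflict-free schedule, or None."""
    assignment = assignment or {}
    if not pending:
        return assignment
    session = pending[0]
    for slot, room in product(slots, rooms):
        if (slot, room) in assignment.values():
            continue  # room already taken in this slot
        if conflicts(assignment, session, slot):
            continue  # a required participant would be double-booked
        result = schedule(pending[1:], {**assignment, session: (slot, room)})
        if result is not None:
            return result
    return None  # unsatisfiable; some constraint must be relaxed

print(schedule(list(sessions)))
```

With real UDS-sized inputs (hundreds of sessions across 18 rooms), a single changed constraint routinely pushes a search like this onto a different branch, which is exactly the wholesale rescheduling described above.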

At around this point, UDS had become too big, and had too many constraints, to plan on the fly (unconference style). We resolved to plan more in advance, and agree on the scheduling constraints ahead of time. We divided the event into tracks, and placed each track in its own room. Most participants could stay in one place throughout the day, taking part in a series of related meetings except where they were specifically needed in an adjacent track. We created the schedule through a combination of manual and automatic methods, so that scheduling constraints could be checked quickly, but a human could decide how to resolve conflicts. There was time to review the schedule before the start of the event, to identify and fix problems. Revisions to the schedule during the event were fewer and less painful. We added keynote presentations, to provide opportunities to communicate important information to everyone, and ease back into meetings after lunch. Everyone was still exhausted and/or ill, and tiredness took its toll on the quality of discussion, particularly toward the end of the week.

Concerns were raised that people weren’t participating enough, and might stay on in the same room passively when they might be better able to contribute to a different session happening elsewhere. As a result, the schedule was randomly rearranged so that related sessions would not be held in the same room, and everyone would get up and move at the end of each hour.

This brings us roughly to where things stand today.

Problems with the status quo

  1. UDS is big and complex. Creating and maintaining the schedule is a lot of work in itself, and this large format requires a large venue, which in turn requires more planning and logistical work (not to mention cost). This is only worthwhile if we get proportionally more benefit out of the event itself.
  2. UDS produces many more blueprints than we need for a cycle. While some of these represent an explicit decision not to pursue a project, most of them are set aside simply because we can’t fit them in. We have the capacity to implement over 100 blueprints per cycle, but we have *thousands* of blueprints registered today. We finished less than half of the blueprints we registered for 10.04. This means that we’re spending a lot of time at UDS talking about things which can’t get done that cycle (and may never get done).
  3. UDS is (still) exhausting. While we should work hard, and a level of intensity helps to energize us, I think it’s a bit too much. Sessions later in the week are substantially more sluggish than early on, and don’t get the full benefit of the minds we’ve brought together. I believe that such an intense format does not suit the type of work being done at the event, which should be more creative and energetic.
  4. The format of UDS is optimized for short discussions (as many as we can fit into the grid). This is good for many technical decisions, but does not lend itself as well to generating new ideas, deeply exploring a topic, building broad consensus or tackling “big picture” issues facing the project. These deeper problems sometimes require more time. They also benefit tremendously from face-to-face interaction, so UDS is our best opportunity to work on them, and we should take advantage of it.
  5. UDS sessions aim for the minimum level of participation necessary, so that we can carry on many sessions in parallel: we ask, “who do we need in order to discuss this topic?” This is appropriate for many meetings. However, some would benefit greatly from broader participation, especially from multiple teams. We don’t always know in advance where a transformative idea will come from, and having more points of view represented would be valuable for many UDS topics.
  6. UDS only happens once per cycle, but design and planning need to continue throughout the cycle. We can’t design everything up front, and there is a lot of information we don’t have at the beginning. We should aim to use our time at UDS to greatest effect, but also make time to continue this work during the development cycle. “design a little, build a little, test a little, fly a little”

Proposals

  1. Concentrate on the projects we can complete in the upcoming cycle. If we aren’t going to have time to implement something until the next cycle, the blueprint can usually be deferred to the next cycle as well. By producing only moderately more blueprints than we need, we can reduce the complexity of the event, avoid waste, prepare better, and put most of our energy into the blueprints we intend to use in the near future.
  2. Group related sessions into clusters, and work on them together, with a similar group of people. By switching context less often, we can more easily stay engaged, get less fatigued, and make meaningful connections between related topics.
  3. Organize for cross-team participation, rather than dividing teams into tracks. A given session may relate to a Desktop Edition feature, but depends on contributions from more than just the Desktop Team. There is a lot of design going on at UDS outside of the “Design” track. By working together to achieve common goals, we can more easily anticipate problems, benefit from diverse points of view, and help each other more throughout the cycle.
  4. Build in opportunities to work on deeper problems, during longer blocks of time. As a platform, Ubuntu exists within a complex ecosystem, and we need to spend time together understanding where we are and where we are going. As a project, we have grown rapidly, and need to regularly evaluate how we are working and how we can improve. This means considering more than just blueprints, and sometimes taking more than an hour to cover a topic.

Written by Matt Zimmerman

May 27, 2010 at 16:15

Ubuntu 10.10 (Maverick) Developer Summit

I spent last week at the Ubuntu Developer Summit in Belgium, where we kicked off the 10.10 development cycle.

Due to our time-boxed release cycle, not everything discussed here will necessarily appear in Ubuntu 10.10, but this should provide a reasonable overview of the direction we’re taking.

Presentations

While most of our time at UDS is spent in small group meetings to discuss specific topics, there are also a handful of presentations to convey key information and stimulate creative thinking.

A few of the more memorable ones for me were:

  • Mark Shuttleworth talked about the desktop, in particular the introduction of the new Unity shell for Ubuntu Netbook Edition
  • Fanny Chevalier presented Diffamation, a tool for visualizing and navigating the history of a document in a very flexible and intuitive way
  • Rick Spencer talked about the development process for 10.10 and some key changes in it, including a greater focus on meeting deadlines for freezes (and making fewer exceptions)
  • Stefano Zacchiroli, the current Debian project leader, gave an overview of how Ubuntu and Debian developers are working together today, and how this could be improved. He has posted a summary on the debian-project mailing list.

The talks were all recorded, though they may not all be online yet.

Foundations

The Foundations team provides essential infrastructure, tools, packages and processes which are central to the development of all Ubuntu products. They make it possible for the desktop and server teams to focus on their areas of expertise, building on a common base system and development procedures.

Highlights from their track:

Desktop

The desktop team manages both Desktop Edition and Netbook Edition, on a mission to provide a top-notch experience to users across a range of client computing devices.

Highlights from their track:

Server/Cloud

The server team is charging ahead with making Ubuntu the premier server OS for cloud computing environments.

Highlights from their track:

ARM

Kiko Reis gave a talk introducing ARM and the corresponding opportunity for Ubuntu. The ARM team ran a full track during the week on all aspects of their work, from the technical details of the kernel and toolchain, to the assembly of a complete port of Netbook Edition 10.10 for several ARM platforms.

Kernel

The kernel team provided essential input and support for the above efforts, and also held their own track where they selected 2.6.35 as their target version, agreed on a variety of changes to the Ubuntu kernel configuration, and created a plan for providing backports of newer kernels to LTS releases to support deployment on newer hardware.

Security

Like the kernel team, the security team provided valuable input into the technical plans being developed by other teams, and also organized a security track to tackle some key security topics such as clarifying the duration of maintenance for various collections of packages, and the ongoing development of AppArmor and Ubuntu’s AppArmor profiles.

QA

The QA team focuses on testing, test automation and bug management throughout the project. While quality is everyone’s responsibility, the QA team helps to coordinate these activities across different teams, establish common standards, and maintain shared infrastructure and tools.

Highlights from their track include:

Design

The design team organized a track at UDS for the first time this cycle, and team manager Ivanka Majic gave a presentation to help explain its purpose and scope.

Toward the end of the week, I joined in a round table discussion about some of the challenges faced by the team in engaging with the Ubuntu community and building support for their work. This is a landmark effort in mating professional design with a free software community, and there is still much to learn about how to do this well.

Community

The community track discussed the usual line-up of events, outreach and advocacy programs, organizational tools, and governance housekeeping for the 10.10 cycle, as well as goals for improving the translation of Ubuntu and related resources into many languages.

One notable project is an initiative to aggressively review patches submitted to the bug tracker, to show our appreciation for these contributions by processing them more quickly and completely.

Written by Matt Zimmerman

May 17, 2010 at 12:01

Ten TED talks I took in today

About a year ago, I started following the release of videos from TED events. If one looked interesting, I would download the video to watch later. In this way, I accumulated a substantial collection of talks which I never managed to watch.

I spent a Saturday evening working my way through the list. These are my favorites out of this batch.

Written by Matt Zimmerman

April 25, 2010 at 01:25


Ubuntu Inside Out – Free Software and Linux Days 2010 in Istanbul

In early April, I visited Istanbul to give a keynote at the Free Software and Linux Days event. It was an interesting challenge: this was my first visit to Turkey, and my first experience presenting with simultaneous translation.

In my new talk, Ubuntu Inside Out, I spoke about:

  • What Ubuntu is about, and where it came from
  • Some of the challenges we face as a growing project with a large community
  • Some ways in which we’re addressing those challenges
  • How to get involved in Ubuntu and help
  • What’s coming next in Ubuntu

The organizers have made a video available if you’d like to watch it (WordPress.com won’t let me embed it here).

[photo: my first game of backgammon]

Afterward, Calyx and I wandered around Istanbul, with the help of our student guide, Oğuzhan. We don’t speak any Turkish, apart from a few vocabulary words I learned on the way to Turkey, so we were glad to have his help as we visited restaurants, cafes and shops, and wandered through various neighborhoods. We enjoyed a variety of delicious food, and the unexpected company of many friendly stray cats.

It was only a brief visit, but I was grateful for the opportunity to meet the local free software community and to see some of the city.

QCon London 2010: Day 3

The tracks which interested me today were “How do you test that?”, which dealt with scenarios where testing (especially automation) is particularly challenging, and “Browser as a Platform”, which is self-explanatory.

Joe Walker: Introduction to Bespin, Mozilla’s Web Based Code Editor

I didn’t make it to this talk, but Bespin looks very interesting. It’s “a Mozilla Labs Experiment to build a code editor in a web browser that Open Web and Open Source developers could love”.

I experimented briefly with the Mozilla hosted instance of Bespin. It seems mostly oriented for web application development, and still isn’t nearly as nice as desktop editors. However, I think something like this, combined with Bazaar and Launchpad, could make small code changes in Ubuntu very fast and easy to do, like editing a wiki.

Doron Reuveni: The Mobile Testing Challenge

Why Mobile Apps Need Real-World Testing Coverage and How Crowdsourcing Can Help

Doron explained how the unique testing requirements of mobile handset applications are well suited to a crowdsourcing approach. As the founder of uTest, he described their approach to connecting their customers (application vendors) with a global community of testers with a variety of mobile devices. Customers evaluate the quality of the testers’ work, and this data is used to grade them and select testers for future testing efforts in a similar domain. The testers earn money for their efforts, based on test case coverage (starting at about $20 each), bug reports (starting at about $5 each), and so on. Their highest performers earn thousands per month.
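As a rough sanity check on those figures, here is a back-of-the-envelope sketch; only the approximate starting rates come from the talk, and the workload numbers are invented.

```python
# Back-of-the-envelope check of the crowdsourced-testing payouts cited above.
TEST_CASE_RATE = 20  # dollars per covered test case (approximate starting rate)
BUG_REPORT_RATE = 5  # dollars per bug report (approximate starting rate)

def monthly_earnings(test_cases: int, bug_reports: int) -> int:
    """Earnings for one month of testing work at the starting rates."""
    return test_cases * TEST_CASE_RATE + bug_reports * BUG_REPORT_RATE

# A busy tester covering 100 test cases and filing 50 bugs in a month:
print(monthly_earnings(100, 50))  # -> 2250, consistent with "thousands per month"
```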

uTest also has a system, uTest Remote Access, which allows developers to “borrow” access to testers’ devices temporarily, for the purpose of reproducing bugs and verifying fixes. Doron gave us a live demo of the system, which (after verifying a code out of band through Skype) displayed a mockup of a BlackBerry device with the appropriate hardware buttons and a screenshot of what was displayed on the user’s screen. The updates were not quite real-time, but were sufficient for basic operation. He demonstrated taking a picture with the phone’s camera and seeing the photo within a few seconds.

Dylan Schiemann: Now What?

Dylan did a great job of extrapolating a future for web development based on the trend of the past 15 years. He began with a review of the origin of web technologies, which were focused on presentation and layout concerns, then on to JavaScript, CSS and DHTML. At this point, there was clear potential for rich applications, though there were many roadblocks: browser implementations were slow, buggy or nonexistent, security models were weak or missing, and rich web applications were generally difficult to engineer.

Things got better as more browsers came on the scene, with better implementations of CSS, DOM, XML, DHTML and so on. However, we’re still supporting an ancient implementation in IE. This is a recurring refrain among web developers, for whom IE seems to be the bane of their work. Dylan added something I hadn’t heard before, though, which was that Microsoft states that anti-trust restrictions were a major factor which prevented this problem from being fixed.

Next, there was an explosion of innovation around Ajax and related toolkits, faster JavaScript implementations, infrastructure as a service, and rich web applications like GMail, Google Maps, Facebook, etc.

Dylan believes that web applications are what users and developers really want, and that desktop and mobile applications will fall by the wayside. App stores, he says, are a short term anomaly to avoid the complexities of paying many different parties for software and services. I’m not sure I agree on this point, but there are massive advantages to the web as an application platform for both parties. Web applications are:

  • fast, easy and cheap to deploy to many users
  • relatively affordable to build
  • relatively easy to link together in useful ways
  • increasingly remix-able via APIs and code reuse

There are tradeoffs, though. I have an article brewing on this topic which I hope to write up sometime in the next few weeks.

Dylan pointed out that different layers of the stack exhibit different rates of change: browsers are slowest, then plugins (such as Flex and SilverLight), then toolkits like Dojo, and finally applications which can update very quickly. Automatically updating browsers are accelerating this, and Chrome in particular values frequent updates. This is good news for web developers, as this seems to be one of the key constraints for rolling out new web technologies today.

Dylan feels that technological monocultures are unhealthy, and prefers to see a set of competing implementations converging on standards. He acknowledged that this is less true where the monoculture is based on free software, though this can still inhibit innovation somewhat if it leads to everyone working from the same point of view (by virtue of sharing a code base and design). He mentioned that de facto standardization can move fairly quickly; if 2-3 browsers implement something, it can start to be adopted by application developers.

Comparing the different economics associated with browsers, he pointed out that Mozilla is dominated by search through the chrome (with less incentive to improve the rendering engine), Apple is driven by hardware sales, and Google by advertising delivered through the browser. It’s a bit of a mystery why Microsoft continues to develop Internet Explorer.

Dylan summarized the key platform considerations for developers:

  • choice and control
  • taste (e.g. language preferences, what makes them most productive)
  • performance and scalability
  • security

and surmised that the best way to deliver these is through open web technologies, such as HTML 5, which now offers rich media functionality including audio, video, vector graphics and animations. He closed with a few flashy demos of HTML 5 applications showing what could be done.

Written by Matt Zimmerman

March 12, 2010 at 17:14

QCon London 2010: Day 2

I was talk-hopping today, so none of these are complete summaries, just enough to capture my impressions from the time I was there. I may go back and watch the video for the ones which turned out to be most interesting.

Yesterday, I noticed a couple of practices employed by the QCon organizers which I wanted to record here, and consider trying out with Canonical and Ubuntu events:

  • As participants leave each talk, they pass a basket with a red, a yellow and a green square attached to it. Next to the basket are three small stacks of colored paper, also red, yellow and green. There are no instructions, indeed no words at all, but the intent seemed clear enough: drop a card in the basket to give feedback.
  • The talks were spread across multiple floors in the conference center, which I find is usually awkward. They mitigated this somewhat by posting a directory of the rooms inside each lift.

Chris Read: The Cloud Silver Bullet

Which calibre is right for me?

Chris offered some familiar warnings about cloud technologies: that they won’t solve all problems, that effort must be invested to reap the benefits, and that no one tool or provider will meet all needs. He then classified various tools and services according to their suitability for long or short processing cycles, and high or low “data sensitivity”.

Simon Wardley: Situation Normal, Everything Must Change

I actually missed Simon’s talk this time, but I’ve seen him speak before and talk with him every week about cloud topics as a colleague at Canonical. I highly recommend his talks to anyone trying to make sense of cloud technology and decide how to respond to it.

In some of the talks yesterday, there was a murmur of anti-cloud sentiment, with speakers asserting that it was not meaningful, that they didn’t know what it was, or that it was nothing new. Simon’s material is the perfect antidote to this attitude, as he makes it very clear that there is a genuinely important and disruptive trend in progress, and explains what it is.

Jesper Boeg: Kanban

Crossing the line, pushing the limit or rediscovering the agile vision?

Jesper shared experiences and lessons learned with Kanban, and some of the problems it addresses which are present in other methodologies. His material was well balanced and insightful, and I’d like to go back and watch the full video when it becomes available.

Here again was a clear and pragmatic focus on matching tools and processes to the specific needs of the team, business and situation.

Ümit Yalcinalp: Development Model for the Cloud

Paradigm Shift or the Same Old Same Old?

Ümit focused on the PaaS (platform as a service) layer, and the experience offered to developers who build applications for these platforms. An evangelist from Salesforce.com, she framed the discussion as a comparison between force.com, Google App Engine and Microsoft Azure.

Eric Evans: Folding Design into an Agile Process

Eric tackled the question of how to approach the problem of design within the agile framework. As an outspoken advocate of domain-driven design, he presented his view in terms of this school and its terminology.

He emphasized the importance of modeling “when the critical complexity of the project is in understanding and communicating about the domain”. The “expected” approach to modeling is to incorporate an up-front analysis phase, but Eric argues that this is misguided. Because “models are distilled knowledge”, and teams are relatively ignorant at the start of a project, modeling in this way captures that ignorance and makes it persist.

Instead, he says, we should employ a “pull” approach (in the Lean sense), and decide to work on modeling when:

  • communication with stakeholders deteriorates
  • solutions are more complex than the problems
  • velocity slows (because completed work becomes a burden)

Eric illustrated his points in part by showing video clips of engineers and business people engaged in dialog (here again, the focus on people rather than tools and process). He used this material as the basis for showing how models underlie these interactions, but are usually implicit. These dialogs were full of hints that the people involved were working from different models, and the software model needed to be revised. An explicit model can be a very powerful communication tool on software projects.

He outlined the process he uses for modeling, which was highly iterative and involves identifying business scenarios, using them to develop and evaluate abstract models, and testing those models by experimenting with code (“code probes”). Along the way, he emphasized the importance of making mistakes, not only as a learning tool but as a way to encourage creative thinking, which is essential to modeling work. In order to encourage the team to “think outside the box” and improve their conceptual model, he goes as far as to require that several “bad ideas” are proposed along the way, as a precondition for completing the process.

Eric is working on a white paper describing this process. A first draft is available on his website, and he is looking for feedback on it.

Modeling work, he suggested, can be incorporated into:

  • a stand up meeting
  • a spike
  • an iteration zero
  • release planning

He pointed out that not all parts of a system are created equal, and some of them should be prioritized for modeling work:

  • areas of the system which seem to require frequent change across projects/features/etc.
  • strategically important development efforts
  • user experiences which are losing coherence

This was a very compelling talk, whose concepts were clearly applicable beyond the specific problem domain of agile development.

Written by Matt Zimmerman

March 11, 2010 at 17:43