We'll see | Matt Zimmerman

a potpourri of mirth and madness

Posts Tagged ‘Ubuntu’

Ubuntu and Qt

with 141 comments

I like to think that in the Ubuntu project, we’re pragmatic about technology. This means keeping an open mind, considering alternatives, and evaluating them objectively. It means bearing in mind the needs of the user, and measuring ourselves based on how well we solve their problems (not merely our own).

It is in this spirit that I have been thinking about Qt recently. We want to make it fast, easy and painless to develop applications for Ubuntu, and Qt is an option worth exploring for application developers. In thinking about this, I’ve realized that there is quite a bit of commonality between the strengths of Qt and some of the new directions in Ubuntu:

  • Qt has a long history of use on ARM as well as x86, by virtue of being popular on embedded devices. Consumer products have been built using Qt on ARM for over 10 years. We’ve been making Ubuntu products available for ARM for nearly two years now, and 10.10 supports more ARM boards than ever, including reference boards from Freescale, Marvell and TI. Qt is adding ARMv7 optimizations to benefit the latest ARM chips. We do this in order to offer OEMs a choice of hardware, without sacrificing software choice. Qt preserves this same choice for application developers.
  • Qt is a cross-platform application framework, with official ports for Windows, MacOS and more, and experimental community ports to Android, the iPhone and WebOS. Strong cross-platform support was one of the original principles of Qt, and it shows in the maturity of the official ports. With Ubuntu Light being installed on computers with Windows, and Ubuntu One landing on Android and the iPhone, we need interoperability with other platforms. There is also a large population of developers who already know how to target Windows, who can reach Ubuntu users as well by choosing Qt.
  • Qt has a fairly mature touch input system, which now has support for multi-touch and gestures (including QML), though it’s only complete on Windows 7 and Mac OS X 10.6. Meanwhile, Canonical has been working with the community to develop a low-level multi-touch framework for Linux and X11, for the benefit of Qt and other toolkits. These efforts will eventually meet in the middle.

Overall, I think Qt has a lot to offer people who want to develop applications for (and on) Ubuntu, particularly now. It already powers popular cross-platform applications like VLC, not to mention the entire Kubuntu distribution. I missed it when this happened last year, but Qt is now available under either the LGPL 2.1 or the GPL 3.0, which should make it suitable for virtually any Ubuntu application. It has strong commercial backing as well as a large developer community. No single solution will meet all developers’ needs, of course, and Ubuntu supports multiple toolkits and frameworks for this reason, but Qt seems like a great tool to have in our toolbox for the road ahead.
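
For application developers who want to try this out, the Qt toolchain is already packaged in Ubuntu. As a minimal sketch of how to get started (the package names below are an assumption based on the Ubuntu 10.10 era archive, so check what is current):

# Install the Qt 4 development libraries, tools and the Qt Creator IDE
# (package names assumed from the Ubuntu 10.10 era archive)
sudo apt-get install libqt4-dev qt4-dev-tools qtcreator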

Written by Matt Zimmerman

October 20, 2010 at 10:08

Posted in Uncategorized

DebConf 10: Last day and retrospective

with 4 comments

DebConf continued until Saturday, but Friday the 6th was my last day as I left New York that evening. I’m a bit late in getting this summary written up.

Making Debian Rule, Again (Margarita Manterola)

Marga took a bold look at the challenges facing Debian today. She says that Debian is perceived to be less innovative, out of date, difficult to use, and shrinking as a community. She called out Ubuntu as the “elephant in the room”, which is “‘taking away’ from Debian.” She insists that she is not opposed to Ubuntu, but that nonetheless Ubuntu is to some extent displacing Debian as a focal point for newcomers (both users and contributors).

Marga points out that Debian’s work is still meaningful, because many users still prefer Debian, and it is perceived to be of higher quality, as well as being the essential basis for derivatives like Ubuntu.

She conducted a survey (about 40 respondents) to ask what Debian’s problems are, and grouped them into categories like “motivation” and “communication” (tied for the #1 spot), “visibility” (#3, meaning public awareness and perception of Debian) and so on. She went on to make some suggestions about how to address these problems.

On the topic of communication, she proposed changing Debian culture by:

  • Spreading positive messages, celebrating success
  • Thanking contributors for their work
  • Avoiding escalation by staying away from email and IRC when angry
  • Treating every contributor with respect, “no matter how wrong they are”

This stimulated a lot of discussion, and most of the remaining time was taken up by comments from the audience. The video has been published, and offers a lot of insight into how Debian developers perceive each other and the project. She also made suggestions for the problems of visibility and motivation. These are crucial issues for Debian devotees to be considering, and I applaud Marga for her fortitude in drawing attention to them. This session was one of the highlights of this DebConf, and catalyzed a lot of discussion of vital issues in Debian.

Following her talk, there was a further discussion in the hallway which included many of the people who commented during the session, mostly about how to deal with problematic behavior in Debian. Although I agreed with much of what was said, I found it a bit painful to watch, because (ironically) this discussion displayed several of the characteristic “people problems” that Debian seems to have:

  • Many people had opinions, and although they agreed on many things, agreement was rarely expressed openly. Sometimes it helps a lot to simply say “I agree with you” and leave it at that. Lending support, rather than adding a new voice, helps to build consensus.
  • People waited for their turn to talk rather than listening to the person speaking, so the discussion didn’t build momentum toward a conclusion.
  • The conversation got louder and more dense over time, making it difficult to enter. It wasn’t argumentative; it was simply loud and fast-paced. This drowned out people who weren’t as vocal or willful.
  • Even where agreement was apparent, there was often no clear action agreed. No one had responsibility for changing the situation.

These same patterns have been easy to observe on Debian mailing lists for the past 10+ years. I exhibited them myself when I was active on these lists. This kind of cultural norm, once established, is difficult to intentionally change. It requires a fairly radical approach, which will inevitably mean coping with loss. In the case of a community, this can mean losing volunteer contributors who cannot let go of this norm, and that is an emotionally difficult experience. However, it is nonetheless necessary to move forward, and I think that Debian as a community is capable of moving beyond it.

Juxtaposition

Given my history with both Debian and Ubuntu, I couldn’t help but take a comparative view of some of this. These problems are not new to Debian, and indeed they inspired many of the key decisions we made when founding the Ubuntu project in 2004. We particularly wanted to foster a culture which was supportive, encouraging and welcoming to potential contributors, something Debian has struggled with. Ubuntu has been, quite deliberately, an experiment in finding solutions to problems such as these. We’ve learned a lot from this experiment, and I’ve always hoped that this would help to find solutions for Debian as well.

Unfortunately, I don’t think Debian has benefited from these Ubuntu experiments as much as we might have hoped. A common example of this is the Ubuntu Code of Conduct. The idea of a project code of conduct predates Ubuntu, of course, but we did help to popularize it within the free software community, and this is now a common (and successful) practice used by many free software projects. The idea of behavioral standards for Debian has been raised in various forms for years now, but never seems to get traction. Hearing people talk about it at DebConf, it sometimes seemed almost as if the idea was dismissed out of hand because it was too closely associated with Ubuntu.

I learned from Marga’s talk that Enrico Zini drafted a set of Debian Community Guidelines back in 2006, over four years ago. It is perhaps a bit long and structured, but is basically excellent. Enrico has done a great job of compiling best practices for participating in an open community project. However, his document seems to be purely informational, without any official standing in the Debian project, and Debian community leaders have hesitated to make it something more.

Perhaps Ubuntu leaders (myself included) could have done more to nurture these ideas in Debian. At least in my experience, though, I found that my affiliation with Ubuntu almost immediately labeled me an “outsider” in Debian, even when I was still active as a developer, and this made it very difficult to make such proposals. Perhaps this is because Debian is proud of its independence, and does not want to be unduly influenced by external forces. Perhaps the initial “growing pains” of the Debian/Ubuntu relationship got in the way. Nonetheless, I think that Debian could be stronger by learning from Ubuntu, just as Ubuntu has learned so much from Debian.

Closing thoughts

I enjoyed this DebConf very much. This was the first DebConf to be hosted in the US, and there were many familiar faces that I hadn’t seen in some time. Columbia University offered an excellent location, and the presentation content was thought-provoking. There seemed to be a positive attitude toward Ubuntu, which was very good to see. Although there is always more work to do, it feels like we’re making progress in improving cooperation between Debian and Ubuntu.

I was a bit sad to leave, but was fortunate enough to meet up with Debian folk during my subsequent stay in the Boston area as well. It felt good to reconnect with this circle of friends again, and I hope to see you again soon.

Looking forward to next year’s DebConf in Bosnia!

Written by Matt Zimmerman

August 25, 2010 at 16:57

Embracing the Web

with 7 comments

The web offers a compelling platform for developing modern applications. How can free software benefit more from web technology, and at the same time promote more software freedom on the web? What would the world be like if FLOSS web applications were as plentiful and successful as traditional FLOSS applications are today?

Web architecture

The web, as a collection of interlinked hypertext documents available on the Internet, has been well established for over a decade. However, the web as an application architecture is only just hitting its stride. With modern tools and frameworks, it’s relatively straightforward to build rich applications with browser-oriented frontends and HTTP-accessible backends.

This architecture has its limitations, of course: browser compatibility nightmares, limited offline capabilities, network latency, performance challenges, server-side scalability, complicated multimedia story, and so on. Most of these are slowly but surely being addressed or ameliorated as web technology improves.

However, for a large class of applications, these limitations are easily outweighed by the advantages: cross-platform support, instantaneous upgrades, global availability, etc. The web enables developers to reach the largest audience of users with the most compelling functionality, and simplifies users’ lives by giving them immediate access to their digital lives from anywhere.

Some web advocates would go so far as to say that if an application can be built for the web, it should be built for the web because it will be more successful. It’s no surprise that new web applications are being developed at a staggering rate, and I expect this trend to continue.

So what?

This trend represents a significant threat, and a corresponding opportunity, to free software. Relatively few web applications are free software, and relatively few free software applications are built for the web. Therefore, the momentum which is leading developers and users to the web is also leading them (further) away from free software.

Traditionally, pragmatists have adopted free software applications because they offered immediate gratification: it’s much faster and easier to install a free software application than to buy a proprietary one. The SaaS model of web applications offers the same (and better) immediacy, so free software has lost some of its appeal among pragmatists, who instead turn to proprietary web applications. Why install and run a heavyweight client application when you can just click a link?

Many web applications—perhaps even a majority—are built using free software, but are not themselves free. A new generation of developers share an appreciation for free software tools and frameworks, but see little value in sharing their own software. To these developers, free software is something you use, not something you make.

Free software cannot afford to ignore the web. Instead, we should embrace the web more completely, more powerfully, and more effectively than proprietary systems do.

What would that look like?

In my view, a FLOSS client platform which fully embraced the web would:

  • treat web applications as first-class citizens. The web would not be just another application, represented by a browser, but more like a native application runtime. Web applications could feel much more “native” while still preserving the advantages of a web-style user experience. There would be no web browser: that’s a tool for legacy systems to run web applications within a compatibility environment.
  • provide a seamless experience for developers to build web applications. It would be as fast and easy to develop a trivial client/server web application as it is to write “Hello, world!” in PyGTK using Quickly (see the sketch after this list). For bonus points, it would be easy to develop and run web applications locally, and then deploy directly to a PaaS or IaaS cloud.
  • empower the user to manage their applications and data regardless of where they are hosted. Traditional operating systems act as a connecting fabric for local applications, providing a shared namespace, file store and IPC mechanisms, but web applications are lacking this. The web’s security model requires that applications are thoroughly sandboxed from each other, but a mediating operating system could connect them in meaningful ways, just as web browsers store cookies and passwords for various websites while protecting them from each other.
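
What might that developer experience look like? Here is a purely hypothetical sketch, in the spirit of Quickly: none of these commands exist, and the tool name “webdev” is invented solely for illustration.

# Hypothetical workflow -- no such tool exists today
webdev create myapp       # scaffold a trivial client/server web application
cd myapp
webdev run                # run the client and server locally for testing
webdev deploy mycloud     # push the same application to a PaaS or IaaS target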

Imagine a world where free web applications are as plentiful and malleable as free native applications are today. Developers would be able to branch, test and submit patches to them.

What about Chrome OS?

Chrome OS is a step in the right direction, but doesn’t yet realize this vision. It’s a traditional operating system which is stripped down and focused on running one application (a web browser) very, very well. In some ways, it elevates web applications to first-class status, though its paradigm is still fundamentally that of a web browser.

It is not designed for development, but for consuming the web. Developers who want to create and deploy web applications must use a more traditional operating system to do so.

It does not put the end user in control. On the contrary, the user is almost entirely dependent on SaaS applications for all of their needs.

Although it is constructed using free software, it does not seem to deliver the principles or benefits of software freedom to the web itself.

How?

Just as free software was bootstrapped on proprietary UNIX, the present-day web is fertile ground for the development of free web applications. The web is based on open standards. There are already excellent web development tools, web application frameworks and server software which are FLOSS. Leading-edge web browsers like Firefox and Chrome/Chromium, where much web innovation is happening today, are already open source.

This is a huge head start toward a free web. I think what’s missing is a client platform which catalyzes the development and use of FLOSS web applications.

Written by Matt Zimmerman

July 26, 2010 at 10:43

We’ve packaged all of the free software…what now?

with 61 comments

Today, virtually all of the free software available can be found in packaged form in distributions like Debian and Ubuntu. Users of these distributions have access to a library of thousands of applications, ranging from trivial to highly sophisticated software systems. Developers can find a vast array of programming languages, tools and libraries for constructing new applications.

This is possible because we have a mature system for turning free software components into standardized modules (packages). Some software is more difficult to package and maintain, and I’m occasionally surprised to find something very useful which isn’t packaged yet, but in general, the software I want is packaged and ready before I realize I need it. Even the “long tail” of niche software is generally packaged very effectively.

Thanks to coherent standards, sophisticated management tools, and the principles of software freedom, these packages can be mixed and matched to create complete software stacks for a wide range of devices, from netbooks to supercomputing clusters. These stacks are tightly integrated, and can be tested, released, maintained and upgraded as a unit. The Debian system is unparalleled for this purpose, which is why Ubuntu is based on it. The vision of a free software operating system which is highly modular and customizable has been achieved.

Rough edges

This is a momentous achievement, and the Debian packaging system fulfills its intended purpose very well. However, there are a number of areas where it introduces friction, because the package model doesn’t quite fit some new problems. Most of these are becoming more common over time as technology evolves and changes shape.

  • Embedded systems need to be pared down to the essentials to minimize storage, distribution, computation and maintenance costs. Standardized packaging introduces excessive code, data and interdependency which make the system larger than necessary. Tight integration makes it difficult to bootstrap the system from scratch for custom hardware. Projects like Embedded Debian aim to adapt the Debian system to be more suitable for use in these environments, to varying degrees of success. Meanwhile, smart phones will soon become the most common type of computer globally.
  • Data, in contrast to software, has simple requirements. It just needs to be up to date and accessible to programs. Packaging and distributing it through the standardized packaging process is awkward, doesn’t offer tangible benefits, and introduces overhead. There have been extensive debates in Debian about how to handle large data sets. Meanwhile, this problem is becoming increasingly important as data science catalyzes a new wave of applications.
  • Client/server and other types of distributed applications are notoriously tricky to package. The packaging system works within the context of a single OS instance, and so relationships which span multiple OS instances (e.g. a server application which depends on a database running on another server) are not straightforward. Meanwhile, the web has become a first-class application development platform, and this kind of interdependency is extremely common on both clients and servers.
  • Cross-platform applications such as Firefox, Chromium and OpenOffice.org have long struggled with packaging. In order to be portable, they tend to bundle the components they depend on, such as libraries. Packagers strive for normalization, and want these applications to use the packaged versions of those libraries instead. As a result, application developers build, test and ship one set of dependencies, but their users receive a different stack when they use the packaged version of the application. Developers on both sides are in constant tension: each expects their own configuration to be the canonical one, and wants it to be tightly integrated. Cross-platform application developers want to provide their own, application-specific cross-platform update mechanism, while distributions want to use the same mechanism for all their components.
  • Virtual appliances aim to combine application and operating system into a portable bundle. While a modular OS is definitely called for, appliances face some of the same problems as embedded systems, since they need to be minimized. Furthermore, the appliance becomes a component in itself, and requires metadata, distribution mechanisms and so on. If someone wants to “install” a virtual appliance, how should that work? Packaging them up as .debs doesn’t make much sense, for the same reasons that apply to large data sets. I haven’t seen virtual appliances really taking off, but I expect cloud computing to change that.
  • Runtime libraries for languages such as Perl, Python and Ruby provide their own packaging systems, which manage dependencies and other metadata, installation, upgrades and removal in a standardized way. Because these operate independently of the OS package manager, all sorts of problems arise. Projects such as GoboLinux have attempted to tie them together, to varying degrees of success. Meanwhile, each new programming language we invent comes with a different, incompatible package manager, and distribution developers need to spend time repackaging them into their preferred format (the example after this list illustrates the duplication).
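
To make the duplication concrete, here is what the two parallel installation paths look like for a single Python library on an Ubuntu system today (simplejson is just an arbitrary example, and this assumes pip is installed):

# Install via the OS package manager: integrated with the system, but often lags upstream
sudo apt-get install python-simplejson

# Install via the language's own package manager: current, but invisible to dpkg and apt
sudo pip install simplejson

The two copies live in different locations, are upgraded by different tools, and neither system knows the other exists.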

Why are we stuck?

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
– Abraham Maslow

The packaging ecosystem is very strong. Not only do we have powerful tools for working with packages, we also benefit from packages being a well-understood concept, and having established processes for developing, exchanging and talking about them. Once something is packaged, we know what it is and how to work with it, and it “fits” into everything else. So, it is tempting to package everything in sight, as we already know how to make sense of packages. However, this may not always be the right tool for the job.

Various attempts have been made to extend the packaging concept to make it more general, for example:

  • Portage, of Gentoo fame, offers impressive flexibility by building packages with a custom configuration, tailored for the needs of the target system.
  • Conary, from rPath, offers finer-grained dependencies, powerful revision control and object-oriented build recipes.
  • Nix provides a consistent build and runtime environment, ensuring that programs are run with the same dependencies used to build them, by keeping the relevant versions installed. I don’t know much about it, but it sounds like all dependencies implicitly refer to an exact version.

Other package managers aim to solve a specific problem, such as providing lightweight package management for embedded systems, or lazy dependency installation, or fixing the filesystem hierarchy. There is a long list of package managers, operating at various levels, which solve different problems.

Most of these systems suffer from an important fundamental tradeoff: they are designed to manage the entire system, from the kernel through applications, and so they must be used wholesale in order to reap their full benefit. In other words, in their world, everything is a package, and anything which is not a package is out of scope. Therefore, each of these systems requires a separate collection of packages, and each time we invent a new one, its adherents set about packaging everything in the new format. It takes a very long time to do this, and most of them lose momentum before a mature ecosystem can form around them.

This lock-in effect makes it difficult for new packaging technologies to succeed.

Divide and Conquer

No single package management framework is flexible enough to accommodate all of the needs we have today. Even more importantly, a generic solution won’t account for the needs we will have tomorrow. I propose that in order to move forward, we must make it possible to solve packaging problems separately, rather than attempting to solve them all within a single system.

  • Decouple applications from the platform. Debian packaging is an excellent solution for managing the network of highly interdependent components which make up the core of a modern Linux distribution. It falls short, however, for managing the needs of modern applications: fast-moving, cross-platform and client/server (especially web). Let’s stop trying to fit these square pegs into round holes, and adopt a different solution for this space, preferably one which is comprehensible and useful to application developers so that they can do most of the work.
  • Treat data as a service. It’s no longer useful to package up documentation in order to provide local copies of it on every Linux system. The web is a much, much richer and more effective solution to that problem. The same principle is increasingly applicable to structured data. From documents and contacts to anti-virus signatures and PCI IDs, there’s much better data to be had “out there” on the web than “down here” on the local filesystem.
  • Simplify integration between packaging systems in order to enable a heterogeneous model. When we break the assumption that everything is a package, we will need new tools to manage the interfaces between different types of components. Applications will need to introspect their dependency chain, and system management tools will need to be able to interrogate applications (see the sketch after this list). We’ll need thoughtfully designed interfaces which provide an appropriate level of abstraction while offering sufficient flexibility to solve many different packaging problems. There is unarguably a cost to this heterogeneity, but I believe it is easily outweighed by the shortcomings of our current model.
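
As a rough sketch of how fragmented this introspection is today, compare how we interrogate OS packages and language-level packages for the same kind of information (the commands are real, the package name is an arbitrary example):

# Ask dpkg about an OS package's declared dependencies
dpkg-query -W -f='${Depends}\n' python-simplejson

# Ask the Python tooling which distributions are installed
pip freeze

Each tool answers only for its own world; nothing ties the two views together.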

But I like things how they are!

We don’t have a choice. The world is changing around us, and distributions need to evolve with it. If we don’t adapt, we will eventually give way to systems which do solve these problems.

Take, for example, modern web browsers like Firefox and Chromium. Arguably the most vital application for users, the browser is coming under increasing pressure to keep up with the breakneck pace of innovation on the web. The next wave of real-time collaboration and multimedia applications relies on the rapid development of new capabilities in web browsers. Browser makers are responding by accelerating deployment in the field: both aggressively push new releases to their users. A report from Google found that Chrome upgrades 97% of their users within 21 days of a new release, and Firefox 85% (both impressive numbers). Mozilla recently changed their maintenance policies, discontinuing maintenance of stable releases and forcing Ubuntu to ship new upstream releases to users.

These applications are just the leading edge of the curve, and the pressure will only increase. Equally powerful trends are pressing server applications, embedded systems, and data to adapt as well. The ideas I’ve presented here are only one possible way forward, and I’m sure there are more and better ideas brewing in distribution communities. I’m sure that I’m not the only one thinking about these problems.

Whatever it looks like in the end, I have no doubt that change is ahead.

Written by Matt Zimmerman

July 6, 2010 at 15:31

Navigating the PolicyKit maze

with 12 comments

I’ve written a simple application which will automatically extract media from CDs and DVDs when they are inserted into the drive attached to my server. This makes it easy for me to compile all of my media in one place and access it anytime I like. The application uses the modern udisks API, formerly known as DeviceKit-disks, and I wrote it in part to get some experience working with udisks (which, it turns out, is rather nice indeed).

Naturally, I wanted to grant this application the privileges necessary to mount, unmount and eject removable media. The server is headless, and the application runs as a daemon, so this would require explicit configuration. udisks uses PolicyKit for authorization, so I expected this to be very simple to do. In fact, it is very simple, but finding out exactly how to do it wasn’t quite so easy.

The Internet is full of web pages which recommend editing /etc/PolicyKit/PolicyKit.conf. As far as I can tell, nothing pays attention to this file anymore, and all of these instructions have been rendered meaningless. My system was also full of tools like polkit-auth, from the apparently-obsolete policykit package, which kept their configuration in some other ignored place, i.e. /var/lib/PolicyKit. It seems the configuration system has been through a revolution or two recently.

In Ubuntu 10.04, the right place to configure these things seems to be /var/lib/polkit-1/localauthority, and this is documented in pklocalauthority(8). Authorization can be tested using pkcheck(1), and the default policy can be examined using pkaction(1).
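
For example, to see the default policy for one of the udisks actions, and then test whether a given process would be authorized for it, something like the following should work (the action ID comes from the udisks policy; the process ID is just illustrative):

# Show the registered policy and defaults for the udisks mount action
pkaction --action-id org.freedesktop.udisks.filesystem-mount --verbose

# Check whether process 12345 would be authorized to perform that action
pkcheck --action-id org.freedesktop.udisks.filesystem-mount --process 12345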

I solved my problem by creating a file with a .pkla extension in /var/lib/polkit-1/localauthority/50-local.d, with the following contents:

[Access to removable media for the media group]
Identity=unix-group:media
Action=org.freedesktop.udisks.drive-eject;org.freedesktop.udisks.filesystem-mount
ResultAny=yes

This took effect immediately and did exactly what I needed. I lost quite some time trying to figure out why the other methods weren’t working, so perhaps this post will save the next person a bit of time. It may also inspire some gratitude for the infrastructure which makes all of this work automatically for more typical usage scenarios, so that most people don’t need to worry about any of this.

Along the way, I whipped up a patch to add a --eject option to the handy udisks(1) tool, which made it easier for me to test along the way.

Written by Matt Zimmerman

June 27, 2010 at 14:38

Summary of development plans for Ubuntu 10.10

with 3 comments

With the 10.10 developer summit behind us, several teams have published engineering plans for the 10.10 release cycle, including:

  • The Desktop team plan, via Rick Spencer, includes both Desktop Edition and Netbook Edition, and covers Software Center, GNOME, application and browser selection (including Chromium), X11 and social networking
  • The Server team plan, via Jos Boumans, covers migrating services to Upstart, improvements to the mail stack, cloud and cluster storage systems, cloud deployment and load balancing, Ubuntu Enterprise Cloud, and Java
  • The Foundations team plan, via Duncan McGreggor, covers system initialization, btrfs, cleaning house, improvements to the installers, Software Center (also) and Upstart (itself), along with a batch of miscellaneous projects, including compiling Ubuntu for the i686 instruction set
  • The Kernel team plan, via Leann Ogasawara, covers the choice of kernel 2.6.35, the current Ubuntu patch set, the configuration of our various kernel flavours, bug management, and the availability of backports of newer kernels to Ubuntu LTS releases

Written by Matt Zimmerman

May 31, 2010 at 10:41

Rethinking the Ubuntu Developer Summit

with 16 comments

This is a repost from the ubuntu-devel mailing list, where there is probably some discussion happening by now.

After each UDS, the organizers evaluate the event and consider how it could be further improved in the future. As a result of this process, the format of UDS has evolved considerably, as it has grown from a smallish informal gathering to a highly structured matrix of hundreds of 45-to-60-minute sessions with sophisticated audiovisual facilities.

If you participated in UDS 10.10 (locally or online), you have hopefully already completed the online survey, which is an important part of this evaluation process.

A survey can’t tell the whole story, though, so I would also like to start a more free-form discussion here among Ubuntu developers as well. I have some thoughts I’d like to share, and I’m interested in your perspectives as well.

Purpose

The core purpose of UDS has always been to help Ubuntu developers to explore, refine and share their plans for the subsequent release. It has expanded over the years to include all kinds of contributors, not only developers, but the principle remains the same.

We arrive at UDS with goals, desires and ideas, and leave with a plan of action which guides our work for the rest of the cycle.

The status quo

UDS now revolves around a huge schedule grid. Even on a 1600×1200 display, a screenshot of the grid leaves another 5 columns off the right edge of the screen, for a total of 18 rooms. With 7 time slots per day over 5 days, there are over 500 blocks in the schedule grid. 9 tracks are scattered over the grid. We produce hundreds of blueprints representing projects we would like to work on.

It is an impressive achievement to pull this event together every six months, and the organizers work very hard at it. We accomplish a great deal at every UDS, and should feel good about that. We must also constantly evaluate how well it is working, and make adjustments to accommodate growth and change in the project.

How did we get here?

(this is all from memory, but it should be sufficiently accurate to have this discussion)

In the beginning, before it was even called UDS, we worked from a rough agenda, adding items as they came up, and ticking them off as we finished talking about them. Ad hoc methods worked pretty well at this scale.

As the event grew, and we held more and more discussions in parallel, it was hard to keep track of who was where, and we started to run into contention. Ubuntu and Launchpad were planning their upcoming work together at the same time. One group would be discussing topic A, and find that they needed the participation of person X, who was already involved in another discussion on topic B. The A group would either block, or go ahead without the benefit of person X, neither of which was seen to be very effective. By the end of the week, everyone was mentally and physically exhausted, and many were ill.

As a result, we decided to adopt a schedule grid, and ensure that nobody was expected to be in two places at once. Our productivity depended on getting precisely the right people face to face to tackle the technical challenges we faced. This meant deciding in advance who should be present in each session, and laying out the schedule to satisfy these constraints. New sessions were being added all the time, so the UDS organizers would stay up late at night during the event, creating the schedule grid for the next day. In the morning, over breakfast, everyone would tell them about errors, and request revisions to the schedule. Revisions to the schedule were painful, because we had to re-check all of the constraints by hand.

So, in the geek spirit, we developed a program which would read in the constraints and generate an error-free schedule. The UDS organizers ran this at the end of each day during the event, checked it over, and posted it. In the morning, over breakfast, everyone would tell them about constraints they hadn’t been aware of, and request revisions to the schedule. Revisions to the schedule were painful, because a single changed constraint would completely rearrange the schedule. People found themselves running all over the place to different rooms throughout the day, as they were scheduled into many different meetings back-to-back.

At around this point, UDS had become too big, and had too many constraints, to plan on the fly (unconference style). We resolved to plan more in advance, and agree on the scheduling constraints ahead of time. We divided the event into tracks, and placed each track in its own room. Most participants could stay in one place throughout the day, taking part in a series of related meetings except where they were specifically needed in an adjacent track. We created the schedule through a combination of manual and automatic methods, so that scheduling constraints could be checked quickly, but a human could decide how to resolve conflicts. There was time to review the schedule before the start of the event, to identify and fix problems. Revisions to the schedule during the event were fewer and less painful. We added keynote presentations, to provide opportunities to communicate important information to everyone, and ease back into meetings after lunch. Everyone was still exhausted and/or ill, and tiredness took its toll on the quality of discussion, particularly toward the end of the week.

Concerns were raised that people weren’t participating enough, and might stay on in the same room passively when they might be better able to contribute to a different session happening elsewhere. As a result, the schedule was randomly rearranged so that related sessions would not be held in the same room, and everyone would get up and move at the end of each hour.

This brings us roughly to where things stand today.

Problems with the status quo

  1. UDS is big and complex. Creating and maintaining the schedule is a lot of work in itself, and this large format requires a large venue, which in turn requires more planning and logistical work (not to mention cost). This is only worthwhile if we get proportionally more benefit out of the event itself.
  2. UDS produces many more blueprints than we need for a cycle. While some of these represent an explicit decision not to pursue a project, most of them are set aside simply because we can’t fit them in. We have the capacity to implement over 100 blueprints per cycle, but we have *thousands* of blueprints registered today. We finished less than half of the blueprints we registered for 10.04. This means that we’re spending a lot of time at UDS talking about things which can’t get done that cycle (and may never get done).
  3. UDS is (still) exhausting. While we should work hard, and a level of intensity helps to energize us, I think it’s a bit too much. Sessions later in the week are substantially more sluggish than early on, and don’t get the full benefit of the minds we’ve brought together. I believe that such an intense format does not suit the type of work being done at the event, which should be more creative and energetic.
  4. The format of UDS is optimized for short discussions (as many as we can fit into the grid). This is good for many technical decisions, but does not lend itself as well to generating new ideas, deeply exploring a topic, building broad consensus or tackling “big picture” issues facing the project. These deeper problems sometimes require more time. They also benefit tremendously from face-to-face interaction, so UDS is our best opportunity to work on them, and we should take advantage of it.
  5. UDS sessions aim for the minimum level of participation necessary, so that we can carry on many sessions in parallel: we ask, “who do we need in order to discuss this topic?” This is appropriate for many meetings. However, some would benefit greatly from broader participation, especially from multiple teams. We don’t always know in advance where a transformative idea will come from, and having more points of view represented would be valuable for many UDS topics.
  6. UDS only happens once per cycle, but design and planning need to continue throughout the cycle. We can’t design everything up front, and there is a lot of information we don’t have at the beginning. We should aim to use our time at UDS to greatest effect, but also make time to continue this work during the development cycle. “design a little, build a little, test a little, fly a little”

Proposals

  1. Concentrate on the projects we can complete in the upcoming cycle. If we aren’t going to have time to implement something until the next cycle, the blueprint can usually be deferred to the next cycle as well. By producing only moderately more blueprints than we need, we can reduce the complexity of the event, avoid waste, prepare better, and put most of our energy into the blueprints we intend to use in the near future.
  2. Group related sessions into clusters, and work on them together, with a similar group of people. By switching context less often, we can more easily stay engaged, get less fatigued, and make meaningful connections between related topics.
  3. Organize for cross-team participation, rather than dividing teams into tracks. A given session may relate to a Desktop Edition feature, but depends on contributions from more than just the Desktop Team. There is a lot of design going on at UDS outside of the “Design” track. By working together to achieve common goals, we can more easily anticipate problems, benefit from diverse points of view, and help each other more throughout the cycle.
  4. Build in opportunities to work on deeper problems, during longer blocks of time. As a platform, Ubuntu exists within a complex ecosystem, and we need to spend time together understanding where we are and where we are going. As a project, we have grown rapidly, and need to regularly evaluate how we are working and how we can improve. This means considering more than just blueprints, and sometimes taking more than an hour to cover a topic.

Written by Matt Zimmerman

May 27, 2010 at 16:15

Using Mumble with a bluetooth headset

with 14 comments

At Canonical, we’ve started experimenting with Mumble as an alternative to telephone calls for real-time conversations. The operating model is very much like IRC, based on channels within which everyone can hear everyone else as they speak.

Mumble works best with a headset, which offers better audio recording quality due to the proximity of the microphone, and avoids problems with echo and feedback. I like to pace around while I talk, and so I’ve already invested in a Plantronics Calisto Pro, which includes a DECT handset, a Bluetooth headset and a nice charging base. My laptop has bluetooth onboard, so I set about trying to get Mumble set up to use the headset via bluetooth.

The first thing I tried was to click on the bluetooth icon on the panel, and select Set up new device.... After setting the headset to pairing mode, I waited quite some time for it to show up in the list, but it never did. After opening the preferences dialog, I discovered that this was (presumably) because I had already paired it, ages ago.

So, I went about trying to get PulseAudio to talk to it. After some hunting, I tried:


pactl load-module module-bluetooth-device address=00:23:xx:xx:xx:xx

This created a new Card in PulseAudio, which I could see in the Hardware tab of the Sound Preferences dialog and in pactl list, but it was inactive:


Card #1
Name: bluez_card.00_23_XX_XX_XX_XX
Driver: module-bluetooth-device.c
Owner Module: 17
Properties:
device.description = "Calisto PLT"
device.string = "00:23:XX:XX:XX:XX"
device.api = "bluez"
device.class = "sound"
device.bus = "bluetooth"
device.form_factor = "headset"
bluez.path = "/org/bluez/13899/hci0/dev_00_23_XX_XX_XX_XX"
bluez.class = "0x200404"
bluez.name = "Calisto PLT"
device.icon_name = "audio-headset-bluetooth"
Profiles:
hsp: Telephony Duplex (HSP/HFP) (sinks: 1, sources: 1, priority. 20)
off: Off (sinks: 0, sources: 0, priority. 0)
Active Profile: off

The Hardware tab confirmed that the device was Disabled, and using the “Off” profile. I could manually select the “Telephony Duplex (HSP/HFP)” profile, but this had no apparent effect. There were no Sources or Sinks to send or receive audio data to or from the headset (and thus nothing new in the Input or Output tabs of the preferences dialog). syslog hinted:


pulseaudio[15239]: module-bluetooth-device.c: Default profile not connected, selecting off profile

At this point, I recalled that when I suspend this laptop, the bluetooth driver gets unhappy. I don’t commonly use bluetooth on this laptop, so I hypothesized that the driver was in a weird state, and I decided to try unloading and reloading the btusb module. Once I did so, the device showed up in the panel menu, with a “Connect” menu item. Aha! The manual module loading above may turn out to be unnecessary if the device shows up in the menu initially.
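
In case it saves someone else the head-scratching, reloading the driver looked roughly like this (btusb is the module on my laptop; a different Bluetooth adapter may use another driver):

# Unload and reload the Bluetooth USB driver to reset its state
sudo modprobe -r btusb
sudo modprobe btusb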

I selected the Connect menu item, and a bunch of magic happened, with the result that I heard the headset’s tone to indicate it was activated. Sound Preferences now showed it under Hardware as active using the Telephony Duplex profile, and it appeared under Input and Output as well. pactl list showed its sources and sinks. Mumble offered it as a choice for the input and output device. Progress!

Some experimentation (thanks, Colin) revealed that other people could hear me through the headset just fine.

However (Problem #1), I couldn’t hear them clearly through the headset. If I switch Mumble’s output to my speakers, they sound fine, so it’s not Mumble. So I tried:


paplay -d bluez_sink.00_23_XX_XX_XX_XX /usr/share/sounds/alsa/Front_Center.wav

…which also sounds awful. There is a whining noise, which gets louder when the audio signal is louder, which makes it very difficult to hear. I don’t know if the problem is with PulseAudio, bluez, the kernel, or the device, but using speakers for output is a workaround. This does seem to cause some echo, though, so I’ll need to track this down eventually.

Problem #2 is that Mumble seems to prevent PulseAudio from suspending the headset’s source. Even if I set it to “Push to Talk” mode, the headset stays active all the time, which will drain the battery. PulseAudio seems to do the right thing, and kill the radio link, if the source is left idle, and brings it up when there is activity, so this looks to be Mumble’s fault. I’ll need to fix this as well.
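
Until that is fixed, one possible stopgap might be to suspend the source by hand between calls, which should drop the radio link. The source name below is an assumption based on the sink name shown earlier, so check pactl list for the real one:

# Manually suspend the headset's source to save battery; pass 0 instead of 1 to resume
pactl suspend-source bluez_source.00_23_XX_XX_XX_XX 1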

Written by Matt Zimmerman

May 26, 2010 at 12:58

Ubuntu 10.10 (Maverick) Developer Summit

with 14 comments

I spent last week at the Ubuntu Developer Summit in Belgium, where we kicked off the 10.10 development cycle.

Due to our time-boxed release cycle, not everything discussed here will necessarily appear in Ubuntu 10.10, but this should provide a reasonable overview of the direction we’re taking.

Presentations

While most of our time at UDS is spent in small group meetings to discuss specific topics, there are also a handful of presentations to convey key information and stimulate creative thinking.

A few of the more memorable ones for me were:

  • Mark Shuttleworth talked about the desktop, in particular the introduction of the new Unity shell for Ubuntu Netbook Edition
  • Fanny Chevalier presented Diffamation, a tool for visualizing and navigating the history of a document in a very flexible and intuitive way
  • Rick Spencer talked about the development process for 10.10 and some key changes in it, including a greater focus on meeting deadlines for freezes (and making fewer exceptions)
  • Stefano Zacchiroli, the current Debian project leader, gave an overview of how Ubuntu and Debian developers are working together today, and how this could be improved. He has posted a summary on the debian-project mailing list.

The talks were all recorded, though they may not all be online yet.

Foundations

The Foundations team provides essential infrastructure, tools, packages and processes which are central to the development of all Ubuntu products. They make it possible for the desktop and server teams to focus on their areas of expertise, building on a common base system and development procedures.

Highlights from their track:

Desktop

The desktop team manages both Desktop Edition and Netbook Edition, on a mission to provide a top-notch experience to users across a range of client computing devices.

Highlights from their track:

Server/Cloud

The server team is charging ahead with making Ubuntu the premier server OS for cloud computing environments.

Highlights from their track:

ARM

Kiko Reis gave a talk introducing ARM and the corresponding opportunity for Ubuntu. The ARM team ran a full track during the week on all aspects of their work, from the technical details of the kernel and toolchain, to the assembly of a complete port of Netbook Edition 10.10 for several ARM platforms.

Kernel

The kernel team provided essential input and support for the above efforts, and also held their own track where they selected 2.6.35 as their target version, agreed on a variety of changes to the Ubuntu kernel configuration, and created a plan for providing backports of newer kernels to LTS releases to support deployment on newer hardware.

Security

Like the kernel team, the security team provided valuable input into the technical plans being developed by other teams, and also organized a security track to tackle some key security topics such as clarifying the duration of maintenance for various collections of packages, and the ongoing development of AppArmor and Ubuntu’s AppArmor profiles.

QA

The QA team focuses on testing, test automation and bug management throughout the project. While quality is everyone’s responsibility, the QA team helps to coordinate these activities across different teams, establish common standards, and maintain shared infrastructure and tools.

Highlights from their track include:

Design

The design team organized a track at UDS for the first time this cycle, and team manager Ivanka Majic gave a presentation to help explain its purpose and scope.

Toward the end of the week, I joined in a round table discussion about some of the challenges faced by the team in engaging with the Ubuntu community and building support for their work. This is a landmark effort in mating professional design with a free software community, and there is still much to learn about how to do this well.

Community

The community track discussed the usual line-up of events, outreach and advocacy programs, organizational tools, and governance housekeeping for the 10.10 cycle, as well as goals for improving the translation of Ubuntu and related resources into many languages.

One notable project is an initiative to aggressively review patches submitted to the bug tracker, to show our appreciation for these contributions by processing them more quickly and completely.

Written by Matt Zimmerman

May 17, 2010 at 12:01

Lucid ruminations

with 8 comments

A few months ago, I wrote about changes in our development process for Ubuntu 10.04 LTS in order to meet our goals for this long-term release. So, how has it turned out?

Well, the development teams are still very busy preparing for the upcoming release, so there hasn’t been too much time for retrospection yet. Here are some of my initial thoughts, though.

  • Merge from Debian testing – Martin Pitt has started a discussion on ubuntu-devel about how this went. For my part, I found that Lucid included fewer surprises than Karmic.
  • Add fewer features – This is difficult to evaluate objectively, but my gut feeling is that we kept this largely under control. As usual, a few surprise desktop features were implemented that not everyone is happy about, myself included.
  • Avoid major infrastructure changes – I think we did reasonably well here, though Plymouth is a notable exception. It resulted (unsurprisingly) in some nasty bugs which we’ve had to spend time dealing with.
  • Extend beta testing – This will be difficult to assess, though if 10.04 beta was at least as good as 9.10 or 9.04 beta, then it will have arguably been a success.
  • Freeze with Debian – Although early indications were good, this didn’t work out so well, as Debian’s freeze was delayed.
  • Visualize progress – The feature status page provided a lot of visual progress information, and the system behind it allowed us to keep track of work slippage throughout the cycle, both of which seemed like a firm step in the right direction. I’m looking forward to hearing from development teams how this information helped them (or not).

A more complete set of retrospectives on Lucid should give us some good ideas for how to improve further in Maverick and beyond.

Update: Fixed broken link.

Written by Matt Zimmerman

April 20, 2010 at 09:23
