We'll see | Matt Zimmerman

a potpourri of mirth and madness


DebConf 10: Day 2

Today was the first day of DebConf proper, where all of the sessions were aimed at project participants.

Bits from the DPL (Stefano Zacchiroli)

Stefano delivered an excellent address to the Debian project. As Project Leader, he offered a perspective on how far Debian has come, raised some of the key questions facing Debian today, and challenged the project to move forward and improve in several important ways.

He asked the audience: Is Debian better than other distributions? Is Debian still relevant? Why/how?

Having asked these questions on identi.ca and Twitter recently, he presented a summary of the responses. There was a fairly standard list of technical concerns, but also:

  • A focus on quality, as defined by Debian’s highly modular approach. Each package maintainer is an expert on the software they package, and Debian as a whole offers a superior repository of packages.
  • The principles of software freedom, as embodied in Debian’s Social Contract. The Debian community’s current interpretation is a purist one, and Stefano cited the elimination of non-free firmware as a milestone in the upcoming Squeeze release. I wonder, though, how many of the audience, tapping away on WiFi-connected laptops, were able to do so without such firmware.
  • The project’s independent status, supported by donations and volunteers, which empowers it to make its own decisions, free of external impositions.
  • Debian’s ability to make decisions, as embodied in the constitution. This happens mostly through do-ocracy (individuals are empowered to decide questions concerning their own work), though larger-scope issues are decided democratically. This one evoked a bit of a chuckle, as decision-making in Debian is not always perceived as fully effective.

He pointed out some areas he would like to see improve, including:

  • Developers accepting shared responsibility for the release as a whole. Making one’s own packages ready for release is necessary, but not sufficient. He cited evidence that the culture around NMUs is changing: historically, due to the do-ocratic system mentioned above, Debian developers have been somewhat territorial about their packages, and non-maintainer uploads were seen as stepping on their toes. However, recent experiments have indicated that this may no longer be the case, and Stefano encouraged more developers to help each other through NMUs.
  • When making decisions, we should seek consensus, not unanimity. In a project with thousands of contributors, whose operations are open to the public, there will never be unanimous support for a proposal, and seeking unanimity leads to stalled decisions.
  • In order to gain more contributors, Debian needs to welcome new and inexperienced contributors, as well as users (who can grow into contributors). He suggested reaching out to derivatives to find more of both. He decried the conventional wisdom that a “thick skin” should be a prerequisite for joining the project, pointing out that this attitude simply leads to fewer contributors. This point was met with applause by the DebConf audience.

All in all, I thought this was an accurate, timely and inspirational message for the project, and the talk is worth watching for any current or prospective contributor to Debian.

Debian Policy BoF (Russ Allbery)

Russ facilitated a discussion about the Debian policy document itself and the process for managing it. He has recently put in a lot of time working on the backlog (down from 160+ to 120), but this is not sustainable for him, and help is needed.

There was a wide-ranging discussion of possible improvements including:

  • Editing the policy manual so that it is more readable start to finish as a document, rather than a reference
  • Creating a closer linkage between lintian and the policy manual, so that best practices from lintian get documented, and policy changes are accompanied by new checks
  • Separating the normative and informative parts of the policy manual

There was also some discussion in passing of the long-standing confusion (presumably among people new to the project) with regard to how policy is established. In Debian, best practices are first implemented in packages, then documented in policy (not the reverse). Sometimes, improvements are suggested at the policy level, when they need to start elsewhere. I’m not very familiar with how the policy manual is maintained at present, but listening to the discussion, it sounded like it might help to extend the process to include the implementation stage. This would allow standards improvements to be tracked all the way through from concept, to implementation, to documentation.

The Java Packaging Nightmare (Torsten Werner)

Torsten described the current state of Java packaging in Debian and the general problems involved, including licensing issues, build system challenges (e.g. Maven) and dependency management. His slides were information-dense, so I didn’t take a lot of notes.

His presentation inspired a lively discussion about why upstream developers of Java applications and libraries often do not engage with Debian. Suggested reasons included:

  • They are not interested in Linux as a target platform
  • Although their code is released under a free license, they are not interested in meeting Debian standards for freedom and license correctness
  • They use Java because it is cross-platform, and so do not want to concern themselves with platform-specific issues
  • Because Java applications are easy to download and run manually, they perceive relatively little value in the Debian packaging system

Collaboration between Ubuntu and Debian (Jorge Castro)

Jorge talked about the connections between Debian and Ubuntu, how people in the projects perceive each other, and how to foster good relationships between developers.

He talked about past efforts to quantify collaboration between the projects, but the focus is now on building personal relationships. There were many good questions and comments afterward, and I’m looking forward to the Debian derivatives BoF session tomorrow to get into more detail.

Tonight is the traditional wine and cheese party. When this tradition started, I was one of just a handful of people in a room with some cheese and paper plates, but it’s now a large social gathering with contributions of cheese and wine from around the world. I’m looking forward to it.


Written by Matt Zimmerman

August 2, 2010 at 23:23

DebConf 10: Day 1

This week, I am attending DebConf 10 at Columbia University in New York.

The first day of DebConf is known as Debian Day. While most of DebConf is for the benefit of people involved in Debian itself, Debian Day is aimed at a wider audience, and invites the public to learn about, and interact with, the Debian project.

These are the talks I attended.

Debian Day Opening Plenary (Gabriella Coleman, Hans-Christoph Steiner)

Hans-Christoph discussed Debian and free software from a big picture perspective: why software freedom matters, challenging the producer/consumer dichotomy, how the Debian ecosystem hangs together, and so on.

Steps to adopting F/OSS in government (Andy Oram)

Andy discussed FLOSS adoption in governments, drawing on examples from Peru, the city of Munich, and the state of Massachusetts. He covered the reasons why this is valuable, the relationship between government transparency and software freedom, and practical advice for successful adoption and deployment.

Pedagogical Freedom (panel, Jonah Bossewitch et al)

The panelists discussed the use of technology in education, especially free software, some of the parallels between free software and education, and what these communities could learn from each other. This is a promising topic, though the perspectives seemed to be mostly from the education realm. There is much to be learned on both sides.

Google Summer of Code 2010 at Debian (Obey Arthur Liu)

This talk covered the student projects for this year’s Summer of Code. Most of the students were in attendance, and presented their own work. They ranged from more specialized projects like the Hurd installer, to core infrastructure improvements like multi-arch in APT.

Beyond Sharing: Open Source Design (Mushon Zer-Aviv)

Mushon gave an excellent talk on open design. This is a subject I’ve thought quite a bit about, and he validated many of my conclusions from a different angle. I’ve added a new post to my todo list to go into more detail on this subject.

Some points from his talk which resonated with me:

  • When collaborating on code, everyone must reason with one collaborator: the computer. This forces a level playing field and a common encoding.
  • Collaborating on other types of creative work is more difficult, in part because of differences in how individuals encode and decode information
  • Making this easier for design work requires improving motivational factors and language as well as tools and processes
  • Many design decisions are actually rational, and are compatible with group consensus. Too often, I hear “too many cooks in the kitchen” analogies claiming that design can’t be done collaboratively, but I have never believed it.
  • Mushon’s own project, shiftspace.org, seems to be a browser-plugin-based system for collaboratively remixing web applications. I haven’t looked at it yet.
  • Leadership and openness are not mutually exclusive. This is another pet peeve of mine, and there are so many examples of open leadership in the free software community that I don’t see how anyone can think otherwise.
  • Mushon’s presentation is available in revision control so that it can be freely used and improved

How Government can Foster Freedom in Technology (Hon. Gale Brewer)

Councillor Brewer paid a visit to DebConf to tell us about the work she is doing on the city council to promote better government through technology.

Brewer seems to be a strong advocate of open data, saying essentially that all government data should be public. She summarized a bill to mandate that New York City government data be public, shared in raw form using open standards, and kept up to date. It sounded like a very strong move which would encourage third party innovation around the data.

She also discussed the need for greater access to computers and Internet connectivity, particularly in educational settings, and a desire to have all public hearings and meetings shared online.

Why is GNU/Linux Like a Player Piano? (Jon Anderson Hall, Esq.)

Jon is a very engaging speaker. He drew parallels between the development of player pianos, reproducing pianos, reed organs, pipe organs…and free software. He even tied in Hedy Lamarr’s work which led to spread spectrum wireless technology. To be quite honest, I did not find that these analogies taught me much about either free software or player pianos, but nonetheless, I couldn’t help but take an interest in what he was saying and how he presented it.

DebConf Opening Plenary (Gabriella Coleman)

Biella and company explained all the ins and outs of the event: where to go, what to do (and not do), and most importantly, whom to thank for all of it. Now in its 11th year, DebConf is an impressively well-run conference.

I’m looking forward to the rest of the week!

Written by Matt Zimmerman

August 2, 2010 at 00:50

We’ve packaged all of the free software…what now?

Today, virtually all of the free software available can be found in packaged form in distributions like Debian and Ubuntu. Users of these distributions have access to a library of thousands of applications, ranging from trivial to highly sophisticated software systems. Developers can find a vast array of programming languages, tools and libraries for constructing new applications.

This is possible because we have a mature system for turning free software components into standardized modules (packages). Some software is more difficult to package and maintain, and I’m occasionally surprised to find something very useful which isn’t packaged yet, but in general, the software I want is packaged and ready before I realize I need it. Even the “long tail” of niche software is generally packaged very effectively.

Thanks to coherent standards, sophisticated management tools, and the principles of software freedom, these packages can be mixed and matched to create complete software stacks for a wide range of devices, from netbooks to supercomputing clusters. These stacks are tightly integrated, and can be tested, released, maintained and upgraded as a unit. The Debian system is unparalleled for this purpose, which is why Ubuntu is based on it. The vision of a free software operating system which is highly modular and customizable has been achieved.

Rough edges

This is a momentous achievement, and the Debian packaging system fulfills its intended purpose very well. However, there are a number of areas where it introduces friction, because the package model doesn’t quite fit some new problems. Most of these are becoming more common over time as technology evolves and changes shape.

  • Embedded systems need to be pared down to the essentials to minimize storage, distribution, computation and maintenance costs. Standardized packaging introduces excessive code, data and interdependency which make the system larger than necessary. Tight integration makes it difficult to bootstrap the system from scratch for custom hardware. Projects like Embedded Debian aim to adapt the Debian system to be more suitable for use in these environments, with varying degrees of success. Meanwhile, smartphones will soon become the most common type of computer globally.
  • Data, in contrast to software, has simple requirements. It just needs to be up to date and accessible to programs. Packaging and distributing it through the standardized packaging process is awkward, doesn’t offer tangible benefits, and introduces overhead. There have been extensive debates in Debian about how to handle large data sets. Meanwhile, this problem is becoming increasingly important as data science catalyzes a new wave of applications.
  • Client/server and other types of distributed applications are notoriously tricky to package. The packaging system works within the context of a single OS instance, and so relationships which span multiple OS instances (e.g. a server application which depends on a database running on another server) are not straightforward. Meanwhile, the web has become a first-class application development platform, and this kind of interdependency is extremely common on both clients and servers.
  • Cross-platform applications such as Firefox, Chromium and OpenOffice.org have long struggled with packaging. In order to be portable, they tend to bundle the components they depend on, such as libraries. Packagers strive for normalization, and want these applications to use the packaged versions of these libraries instead. Application developers build, test and ship one set of dependencies, but their users receive a different stack when they use the packaged version of the application. Developers on both sides are in constant tension as they expect their configuration to be the canonical one, and want it to be tightly integrated. Cross-platform application developers want to provide their own, application-specific cross-platform update mechanism, while distributions want to use the same mechanism for all their components.
  • Virtual appliances aim to combine application and operating system into a portable bundle. While a modular OS is definitely called for, appliances face some of the same problems as embedded systems as they need to be minimized. Furthermore, the appliance becomes a component in itself, and requires metadata, distribution mechanisms and so on. If someone wants to “install” a virtual appliance, how should that work? Packaging them up as .debs doesn’t make much sense for the same reasons that apply to large data sets. I haven’t seen virtual appliances really taking off, but I expect cloud to change that.
  • Runtime libraries for languages such as Perl, Python and Ruby provide their own packaging systems, which manage dependencies and other metadata, installation, upgrades and removal in a standardized way. Because these operate independently of the OS package manager, all sorts of problems arise. Projects such as GoboLinux have attempted to tie them together, with varying degrees of success. Meanwhile, each new programming language we invent comes with a different, incompatible package manager, and distribution developers need to spend time repackaging them into their preferred format. (A short sketch of this split, as it appears on a Debian system, follows this list.)

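To make that last point concrete, here is a minimal sketch of what the split looks like in practice. This is an illustration I wrote for this post, not an existing tool; it assumes a Debian-style system with dpkg, and a modern Python whose importlib.metadata can enumerate installed distributions:

    # A minimal sketch of the "two package managers" problem: Python's own
    # metadata sees every installed distribution, but dpkg only knows about
    # the ones it installed itself. Anything pip installed is invisible to
    # the OS package manager.
    import subprocess
    from importlib.metadata import distributions

    for dist in distributions():
        files = dist.files or []
        if not files:
            continue
        # Pick any file belonging to this distribution and ask dpkg about it.
        path = str(files[0].locate())
        # 'dpkg -S' reports which .deb owns a file; nonzero exit means none does.
        result = subprocess.run(["dpkg", "-S", path],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        owner = "dpkg" if result.returncode == 0 else "language package manager only"
        print(f"{dist.metadata['Name']} {dist.version}: {owner}")

On a system where both managers have been used, this prints two disjoint inventories, and neither manager can see, upgrade or remove the other’s components.
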
Why are we stuck?

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
– Abraham Maslow

The packaging ecosystem is very strong. Not only do we have powerful tools for working with packages, we also benefit from packages being a well-understood concept, and having established processes for developing, exchanging and talking about them. Once something is packaged, we know what it is and how to work with it, and it “fits” into everything else. So, it is tempting to package everything in sight, as we already know how to make sense of packages. However, this may not always be the right tool for the job.

Various attempts have been made to extend the packaging concept to make it more general, for example:

  • Portage, of Gentoo fame, offers impressive flexibility by building packages with a custom configuration, tailored for the needs of the target system.
  • Conary, from rPath, offers finer-grained dependencies, powerful revision control and object-oriented build recipes.
  • Nix provides a consistent build and runtime environment, ensuring that programs are run with the same dependencies used to build them, by keeping the relevant versions installed. I don’t know much about it, but it sounds like all dependencies implicitly refer to an exact version (a rough illustration of that idea follows this list).

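My rough mental model of the Nix idea looks something like the following. This is a simplification of mine for illustration, not Nix’s actual mechanism: a component’s identity incorporates the exact versions of its dependencies, so builds against different dependencies live side by side rather than conflicting.

    import hashlib

    def store_path(name: str, version: str, dep_paths: list) -> str:
        # Hash the exact dependency set into the install path, so a
        # library built against openssl 0.9.8 and one built against
        # openssl 1.0.0 coexist instead of colliding. (Illustrative only.)
        key = "\n".join([name, version, *sorted(dep_paths)])
        digest = hashlib.sha256(key.encode()).hexdigest()[:12]
        return f"/store/{digest}-{name}-{version}"

    # Two builds of the "same" library differ as soon as a dependency does:
    print(store_path("libfoo", "1.2", ["/store/abc-openssl-0.9.8"]))
    print(store_path("libfoo", "1.2", ["/store/def-openssl-1.0.0"]))
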
Other package managers aim to solve a specific problem, such as providing lightweight package management for embedded systems, or lazy dependency installation, or fixing the filesystem hierarchy. There is a long list of package managers operating at different levels, each solving different problems.

Most of these systems suffer from an important fundamental tradeoff: they are designed to manage the entire system, from the kernel through applications, and so they must be used wholesale in order to reap their full benefit. In other words, in their world, everything is a package, and anything which is not a package is out of scope. Therefore, each of these systems requires a separate collection of packages, and each time we invent a new one, its adherents set about packaging everything in the new format. It takes a very long time to do this, and most of them lose momentum before a mature ecosystem can form around them.

This lock-in effect makes it difficult for new packaging technologies to succeed.

Divide and Conquer

No single package management framework is flexible enough to accommodate all of the needs we have today. Even more importantly, a generic solution won’t account for the needs we will have tomorrow. I propose that in order to move forward, we must make it possible to solve packaging problems separately, rather than attempting to solve them all within a single system.

  • Decouple applications from the platform. Debian packaging is an excellent solution for managing the network of highly interdependent components which make up the core of a modern Linux distribution. It falls short, however, for managing the needs of modern applications: fast-moving, cross-platform and client/server (especially web). Let’s stop trying to fit these square pegs into round holes, and adopt a different solution for this space, preferably one which is comprehensible and useful to application developers so that they can do most of the work.
  • Treat data as a service. It’s no longer useful to package up documentation in order to provide local copies of it on every Linux system. The web is a much, much richer and more effective solution to that problem. The same principle is increasingly applicable to structured data. From documents and contacts to anti-virus signatures and PCI IDs, there’s much better data to be had “out there” on the web than “down here” on the local filesystem.
  • Simplify integration between packaging systems in order to enable a heterogeneous model. When we break the assumption that everything is a package, we will need new tools to manage the interfaces between different types of components. Applications will need to introspect their dependency chain, and system management tools will need to be able to interrogate applications. We’ll need thoughtfully designed interfaces which provide an appropriate level of abstraction while offering sufficient flexibility to solve many different packaging problems. There is unarguably a cost to this heterogeneity, but I believe the benefits would easily outweigh it. (A hypothetical sketch of such an interface follows this list.)

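To make the third point concrete, here is a hypothetical sketch of the kind of common interface I mean. Every name in it is invented for illustration; nothing like this exists today:

    from dataclasses import dataclass
    from typing import Iterable, Protocol

    @dataclass
    class Component:
        name: str
        version: str
        provider: str  # which system manages it: "dpkg", "pip", "gem", ...

    class PackageProvider(Protocol):
        # Any packaging system (OS packages, language runtimes, bundled
        # application dependencies) could expose itself through this.
        def installed(self) -> Iterable[Component]:
            """Enumerate the components this system manages."""
            ...

        def dependencies(self, component: Component) -> Iterable[Component]:
            """Resolve a component's dependencies, possibly across providers."""
            ...

    def inventory(providers: Iterable[PackageProvider]) -> list:
        # A management tool walks every provider through one interface,
        # instead of special-casing each packaging format.
        return [c for p in providers for c in p.installed()]

The point is not these particular names, but that a single, narrow interface would let system tools interrogate dpkg, pip, gem and application bundles alike, without forcing everything into one package format.
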
But I like things how they are!

We don’t have a choice. The world is changing around us, and distributions need to evolve with it. If we don’t adapt, we will eventually give way to systems which do solve these problems.

Take, for example, modern web browsers like Firefox and Chromium. Arguably the most vital application for users, the browser is coming under increasing pressure to keep up with the breakneck pace of innovation on the web. The next wave of real-time collaboration and multimedia applications relies on the rapid development of new capabilities in web browsers. Browser makers are responding by accelerating deployment in the field: both aggressively push new releases to their users. A report from Google found that Chrome upgrades 97% of their users within 21 days of a new release, and Firefox 85% (both impressive numbers). Mozilla recently changed their maintenance policies, discontinuing maintenance of stable releases and forcing Ubuntu to ship new upstream releases to users.

These applications are just the leading edge of the curve, and the pressure will only increase. Equally powerful trends are pressing server applications, embedded systems, and data to adapt as well. The ideas I’ve presented here are only one possible way forward; there are surely more and better ideas brewing in distribution communities, and I’m certainly not the only one thinking about these problems.

Whatever it looks like in the end, I have no doubt that change is ahead.

Written by Matt Zimmerman

July 6, 2010 at 15:31

Ubuntu 10.10 (Maverick) Developer Summit

I spent last week at the Ubuntu Developer Summit in Belgium, where we kicked off the 10.10 development cycle.

Due to our time-boxed release cycle, not everything discussed here will necessarily appear in Ubuntu 10.10, but this should provide a reasonable overview of the direction we’re taking.

Presentations

While most of our time at UDS is spent in small group meetings to discuss specific topics, there are also a handful of presentations to convey key information and stimulate creative thinking.

A few of the more memorable ones for me were:

  • Mark Shuttleworth talked about the desktop, in particular the introduction of the new Unity shell for Ubuntu Netbook Edition
  • Fanny Chevalier presented Diffamation, a tool for visualizing and navigating the history of a document in a very flexible and intuitive way
  • Rick Spencer talked about the development process for 10.10 and some key changes in it, including a greater focus on meeting deadlines for freezes (and making fewer exceptions)
  • Stefano Zacchiroli, the current Debian project leader, gave an overview of how Ubuntu and Debian developers are working together today, and how this could be improved. He has posted a summary on the debian-project mailing list.

The talks were all recorded, though they may not all be online yet.

Foundations

The Foundations team provides essential infrastructure, tools, packages and processes which are central to the development of all Ubuntu products. They make it possible for the desktop and server teams to focus on their areas of expertise, building on a common base system and development procedures.

Highlights from their track:

Desktop

The desktop team manages both Desktop Edition and Netbook Edition, on a mission to provide a top-notch experience to users across a range of client computing devices.

Highlights from their track:

Server/Cloud

The server team is charging ahead with making Ubuntu the premier server OS for cloud computing environments.

Highlights from their track:

ARM

Kiko Reis gave a talk introducing ARM and the corresponding opportunity for Ubuntu. The ARM team ran a full track during the week on all aspects of their work, from the technical details of the kernel and toolchain, to the assembly of a complete port of Netbook Edition 10.10 for several ARM platforms.

Kernel

The kernel team provided essential input and support for the above efforts, and also held their own track where they selected 2.6.35 as their target version, agreed on a variety of changes to the Ubuntu kernel configuration, and created a plan for providing backports of newer kernels to LTS releases to support deployment on newer hardware.

Security

Like the kernel team, the security team provided valuable input into the technical plans being developed by other teams, and also organized a security track to tackle some key security topics such as clarifying the duration of maintenance for various collections of packages, and the ongoing development of AppArmor and Ubuntu’s AppArmor profiles.

QA

The QA team focuses on testing, test automation and bug management throughout the project. While quality is everyone’s responsibility, the QA team helps to coordinate these activities across different teams, establish common standards, and maintain shared infrastructure and tools.

Highlights from their track include:

Design

The design team organized a track at UDS for the first time this cycle, and team manager Ivanka Majic gave a presentation to help explain its purpose and scope.

Toward the end of the week, I joined in a round table discussion about some of the challenges faced by the team in engaging with the Ubuntu community and building support for their work. This is a landmark effort in mating professional design with a free software community, and there is still much to learn about how to do this well.

Community

The community track discussed the usual line-up of events, outreach and advocacy programs, organizational tools, and governance housekeeping for the 10.10 cycle, as well as goals for improving the translation of Ubuntu and related resources into many languages.

One notable project is an initiative to aggressively review patches submitted to the bug tracker, to show our appreciation for these contributions by processing them more quickly and completely.

Written by Matt Zimmerman

May 17, 2010 at 12:01

Lucid ruminations

A few months ago, I wrote about changes in our development process for Ubuntu 10.04 LTS in order to meet our goals for this long-term release. So, how has it turned out?

Well, the development teams are still very busy preparing for the upcoming release, so there hasn’t been too much time for retrospection yet. Here are some of my initial thoughts, though.

  • Merge from Debian testing – Martin Pitt has started a discussion on ubuntu-devel about how this went. For my part, I found that Lucid included fewer surprises than Karmic.
  • Add fewer features – This is difficult to evaluate objectively, but my gut feeling is that we kept this largely under control. As usual, a few surprise desktop features were implemented that not everyone is happy about, myself included.
  • Avoid major infrastructure changes – I think we did reasonably well here, though Plymouth is a notable exception. It resulted (unsurprisingly) in some nasty bugs which we’ve had to spend time dealing with.
  • Extend beta testing – This will be difficult to assess, though if 10.04 beta was at least as good as 9.10 or 9.04 beta, then it will have arguably been a success.
  • Freeze with Debian – Although early indications were good, this didn’t work out so well, as Debian’s freeze was delayed.
  • Visualize progress – The feature status page provided a lot of visual progress information, and the system behind it allowed us to keep track of work slippage throughout the cycle, both of which seemed like a firm step in the right direction. I’m looking forward to hearing from development teams how this information helped them (or not).

A more complete set of retrospectives on Lucid should give us some good ideas for how to improve further in Maverick and beyond.

Update: Fixed broken link.

Written by Matt Zimmerman

April 20, 2010 at 09:23

Interviewed by Ubuntu Turkey

As part of my recent Istanbul visit, I was interviewed by Ubuntu Turkey. They’ve now published the interview in Turkish. With their permission, the original English interview has been published on Ubuntu User by Amber Graner.

Written by Matt Zimmerman

April 19, 2010 at 15:15

Ubuntu Inside Out – Free Software and Linux Days 2010 in Istanbul

In early April, I visited Istanbul to give a keynote at the Free Software and Linux Days event. This was an interesting challenge: it was my first visit to Turkey, and my first experience presenting with simultaneous translation.

In my new talk, Ubuntu Inside Out, I spoke about:

  • What Ubuntu is about, and where it came from
  • Some of the challenges we face as a growing project with a large community
  • Some ways in which we’re addressing those challenges
  • How to get involved in Ubuntu and help
  • What’s coming next in Ubuntu

The organizers have made a video available if you’d like to watch it (WordPress.com won’t let me embed it here).

My first game of backgammon

Afterward, Calyx and I wandered around Istanbul, with the help of our student guide, Oğuzhan. We don’t speak any Turkish, apart from a few vocabulary words I learned on the way to Turkey, so we were glad to have his help as we visited restaurants, cafes and shops, and wandered through various neighborhoods. We enjoyed a variety of delicious food, and the unexpected company of many friendly stray cats.

It was only a brief visit, but I was grateful for the opportunity to meet the local free software community and to see some of the city.