We'll see | Matt Zimmerman

a potpourri of mirth and madness

Posts Tagged ‘Ubuntu’

Back to the future

In my professional role as Ubuntu CTO, I take on a number of different perspectives, which sometimes compete for my attention, including:

  • Inward – supporting the people in my department, alignment with other departments in Canonical and reporting upward
  • Outward – connecting with customers, partners and the free software community, including Debian
  • Forward – considering the future of the Ubuntu platform and products, based on the needs of their users, our customers and business stakeholders within Canonical
  • Outside-in – taking off my Canonical hat and putting on an Ubuntu hat, and looking at what we’re doing from an outside perspective

My recent work, as Canonical has gone through a period of organizational growth and change, has prioritized the inward perspective. I took on a six-month project which was inwardly focused, temporarily handing off many of my day-to-day responsibilities (well done, Robbie!). I’ve grappled with an assortment of growing pains as many new people joined Canonical over the past year.

With that work behind me, it’s time to rebalance myself and focus more outside of Canonical again. It’s good to be back!

In my outward facing capacity, I’ll shortly be attending Web 2.0 Summit in San Francisco. I attend several free software conferences each year, but this is a different crowd. I hope to renew some old ties, form some new ones, and generally derive inspiration from the people and organizations represented there. Being in the San Francisco Bay area will also give me an opportunity to meet with some of Canonical’s partners there, as well as friends and acquaintances from the free software community. With my head down, working hard to make things happen, it’s easy to lose perspective on how that work fits into the outside world. Spending more time with people outside of Canonical and Ubuntu is an important way of balancing that effect.

Looking forward, I’ll be thinking about the longer term direction for the Ubuntu platform. The platform is the layer of Ubuntu which makes everything else possible: it’s how we weave together products like Desktop Edition and Server Edition, and it’s what developers target when they write applications. Behind the user interfaces and applications, there is a rich platform of tools and services which link it all together. It’s in this aspect of Ubuntu that I’ll be investing my time in research, experimentation and imagination. This includes considering how we package and distribute software, how we adapt to technological shifts, and highlighting opportunities to cooperate with other open source projects.

My primary outside-in role is as chair of the Ubuntu Technical Board. In this capacity, I’m accountable to the Ubuntu project, the interests of its members, and the people who use the software we provide. Originally, the TB was closely involved with a range of front-line technical decisions in Ubuntu, but today, there are strong, autonomous teams in place for the most active parts of the project, so we only get involved when there is a problem, or if a technical question comes up which doesn’t “fit” the charter of an established team. It’s something of a catch-all. I’d like to re-establish the TB in a more central role in Ubuntu, looking after concerns which affect the project as a whole, such as transparency and development processes. I’m also re-joining Debian as a non-uploading contributor, to work on stimulating and coordinating cooperation between Debian and Ubuntu. I’m looking forward to working more with Zack on joint projects in this area.

This change will help me to support Canonical and Ubuntu more effectively as they continue to grow and change. I look forward to exercising some mental muscles I haven’t used very much lately, and facing some new challenges as well.

Written by Matt Zimmerman

November 11, 2010 at 15:42

Weathering the Ubuntu brainstorm

In our first few years, Ubuntu experienced explosive growth, from zero to millions of users. Because Ubuntu is an open project, these people don’t just use Ubuntu, but can see what’s happening next and influence it through suggestions and contributions. The volume of suggestions quickly became unmanageable through ad hoc discussion, because the feedback overwhelmed the relatively few people who were actively developing Ubuntu.

Ubuntu Brainstorm logo

In order to better manage user feedback at this scale, Ubuntu Brainstorm was created in 2008. It’s a collaborative filtering engine which allows anyone to contribute an idea, and have it voted on by others. Since then, it’s been available to Ubuntu developers and leaders as an information source, and has been used in various ways. The top ideas are printed in the Ubuntu Weekly Newsletter each week. We experimented with producing a report each release cycle and sharing it with the developer community. People have been encouraged to take these suggestions to the Ubuntu Developer Summits. We continue to look for new and better ways to process the feedback provided by the user community.

Most recently, I asked my colleagues on the Ubuntu Technical Board in a meeting whether we should take responsibility for responding to the feedback available in Ubuntu Brainstorm. They agreed that this was worth exploring, and I put forward a proposal for how it might work. The proposal was unanimously accepted at a later meeting, and I’m working on the first feedback cycle now.

In short, the Technical Board will ensure that, every three months, the highest voted topics on Ubuntu Brainstorm receive an official response from the Ubuntu project. The Technical Board won’t respond to all of them personally, but will identify subject matter experts within the project, ask them to write a short response, and compile these responses for publication.

My hope is that this approach will bring more visibility to common user concerns, help users understand what we’re doing with their feedback, and generally improve transparency in Ubuntu. We’ve already selected the topics for the first iteration based on the most popular items of the past six months, and are organizing responses now. Please visit brainstorm.ubuntu.com and cast your votes for next time!

Written by Matt Zimmerman

November 3, 2010 at 11:55

Looking forward to UDS for Ubuntu 11.04 (Natty)

For some time now, we’ve been gearing up to begin development on Ubuntu 11.04. While some folks have been putting the finishing touches on the 10.10 release, and bootstrapping the infrastructure for 11.04, others have been meeting with Canonical stakeholders, coordinating community brainstorm sessions, and otherwise collecting information about what our priorities should be in the next cycle.

We’re using what we’ve learned to plan the Ubuntu Developer Summit next week in Orlando, where we’ll refine these ideas into a plan for the cycle. We’re organizing UDS a little bit differently this time, with the main program divided into the following tracks to reflect the key considerations for Ubuntu today:

  • Application Developers – Making it faster, easier, and more enjoyable to develop and distribute new applications on (and for) Ubuntu
  • Cloud – Delivering the best experience of cloud computing, whether hosting in a public cloud or building your own private cloud
  • Hardware Compatibility – Measuring and improving compatibility with a wide range of laptops, netbooks, servers and desktops
  • Multimedia – Formulating the best software stacks for graphics, audio and video in Ubuntu
  • Package Selection and System Defaults – Choosing the right components to keep Ubuntu lean, flexible and ready-to-run, while ensuring that the pieces fit and work together cleanly
  • Performance – Squeezing the best performance out of today’s free software stack, from the Linux kernel and GNU toolchain through user interfaces
  • Ubuntu the Project – Continuously improving the way we work together to produce Ubuntu, both within the project and with our upstream and downstream partners

You can click on the links above for a preview of the schedule for the week, with links to more detailed blueprints which will develop during and following UDS. If you’ll be joining us in person, then I’ll see you there! If not, be sure to review Laura’s guide on how to participate remotely.

Written by Matt Zimmerman

October 21, 2010 at 12:29

Ubuntu and Qt

I like to think that in the Ubuntu project, we’re pragmatic about technology. This means keeping an open mind, considering alternatives, and evaluating them objectively. It means bearing in mind the needs of the user, and measuring ourselves based on how well we solve their problems (not merely our own).

It is in this spirit that I have been thinking about Qt recently. We want to make it fast, easy and painless to develop applications for Ubuntu, and Qt is an option worth exploring for application developers. In thinking about this, I’ve realized that there is quite a bit of commonality between the strengths of Qt and some of the new directions in Ubuntu:

  • Qt has a long history of use on ARM as well as x86, by virtue of being popular on embedded devices. Consumer products have been built using Qt on ARM for over 10 years. We’ve been making Ubuntu products available for ARM for nearly two years now, and 10.10 supports more ARM boards than ever, including reference boards from Freescale, Marvell and TI; we do this in order to offer OEMs a choice of hardware without sacrificing software choice. Meanwhile, Qt is adding ARMv7 optimizations to benefit the latest ARM chips, and it preserves this same choice for application developers.
  • Qt is a cross-platform application framework, with official ports for Windows, Mac OS X and more, and experimental community ports to Android, the iPhone and WebOS. Strong cross-platform support was one of the original principles of Qt, and it shows in the maturity of the official ports. With Ubuntu Light being installed on computers with Windows, and Ubuntu One landing on Android and the iPhone, we need interoperability with other platforms. There is also a large population of developers who already know how to target Windows, who can reach Ubuntu users as well by choosing Qt.
  • Qt has a fairly mature touch input system, which now has support for multi-touch and gestures (including QML), though it’s only complete on Windows 7 and Mac OS X 10.6. Meanwhile, Canonical has been working with the community to develop a low-level multi-touch framework for Linux and X11, for the benefit of Qt and other toolkits. These efforts will eventually meet in the middle.

Overall, I think Qt has a lot to offer people who want to develop applications for (and on) Ubuntu, particularly now. It already powers popular cross-platform applications like VLC, not to mention the entire Kubuntu distribution. I missed it when this happened last year, but Qt is now available under either the LGPL 2.1 or the GPL 3.0, which should make it suitable for virtually any Ubuntu application. It has strong commercial backing as well as a large developer community. No single solution will meet all developers’ needs, of course, and Ubuntu supports multiple toolkits and frameworks for this reason, but Qt seems like a great tool to have in our toolbox for the road ahead.
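To make this concrete, here’s a rough sketch of what a Qt application looks like from a developer’s point of view, written in Python against PyQt4 (packaged in Ubuntu as python-qt4). Take it as an illustration of Qt’s flavor, not a recommendation of any particular binding:

    #!/usr/bin/env python
    # A minimal PyQt4 application: one window containing one button.
    # The same code runs unchanged on Linux, Windows and Mac OS X.
    import sys

    from PyQt4 import QtGui

    def main():
        app = QtGui.QApplication(sys.argv)   # one application object per process
        button = QtGui.QPushButton('Hello, Ubuntu!')
        button.resize(200, 60)
        button.clicked.connect(app.quit)     # new-style signal/slot connection
        button.show()
        sys.exit(app.exec_())                # run the Qt event loop

    if __name__ == '__main__':
        main()

On Ubuntu, installing the python-qt4 package is all that’s needed to run it.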

Written by Matt Zimmerman

October 20, 2010 at 10:08


DebConf 10: Last day and retrospective

DebConf continued until Saturday, but Friday the 6th was my last day as I left New York that evening. I’m a bit late in getting this summary written up.

Making Debian Rule, Again (Margarita Manterola)

Marga took a bold look at the challenges facing Debian today. She says that Debian is perceived to be less innovative, out of date, difficult to use, and shrinking as a community. She called out Ubuntu as the “elephant in the room”, which is “taking away” from Debian. She insists that she is not opposed to Ubuntu, but that nonetheless Ubuntu is to some extent displacing Debian as a focal point for newcomers (both users and contributors).

Marga points out that Debian’s work is still meaningful, because many users still prefer Debian, and it is perceived to be of higher quality, as well as being the essential basis for derivatives like Ubuntu.

She conducted a survey (about 40 respondents) to ask what Debian’s problems are, and grouped them into categories like “motivation” and “communication” (tied for the #1 spot), “visibility” (#3, meaning public awareness and perception of Debian) and so on. She went on to make some suggestions about how to address these problems.

On the topic of communication, she proposed changing Debian culture by:

  • Spreading positive messages, celebrating success
  • Thanking contributors for their work
  • Avoiding escalation by staying away from email and IRC when angry
  • Treating every contributor with respect, “no matter how wrong they are”

This stimulated a lot of discussion, and most of the remaining time was taken up by comments from the audience. The video has been published, and offers a lot of insight into how Debian developers perceive each other and the project. She also made suggestions for the problems of visibility and motivation. These are crucial issues for Debian devotees to be considering, and I applaud Marga for her fortitude in drawing attention to them. This session was one of the highlights of this DebConf, and catalyzed a lot of discussion of vital issues in Debian.

Following her talk, there was a further discussion in the hallway which included many of the people who commented during the session, mostly about how to deal with problematic behavior in Debian. Although I agreed with much of what was said, I found it a bit painful to watch, because (ironically) this discussion displayed several of the characteristic “people problems” that Debian seems to have:

  • Many people had opinions, and although they agreed on many things, agreement was rarely expressed openly. Sometimes it helps a lot to simply say “I agree with you” and leave it at that. Lending support, rather than adding a new voice, helps to build consensus.
  • People waited for their turn to talk rather than listening to the person speaking, so the discussion didn’t build momentum toward a conclusion.
  • The conversation got louder and more dense over time, making it difficult to enter. It wasn’t argumentative; it was simply loud and fast-paced. This drowned out people who weren’t as vocal or willful.
  • Even where agreement was apparent, there was often no clear action agreed. No one had responsibility for changing the situation.

These same patterns have been easy to observe on Debian mailing lists for the past 10+ years. I exhibited them myself when I was active on these lists. This kind of cultural norm, once established, is difficult to intentionally change. It requires a fairly radical approach, which will inevitably mean coping with loss. In the case of a community, this can mean losing volunteer contributors who cannot let go of this norm, and that is an emotionally difficult experience. However, it is nonetheless necessary to move forward, and I think that Debian as a community is capable of moving beyond it.

Juxtaposition

Given my history with both Debian and Ubuntu, I couldn’t help but take a comparative view of some of this. These problems are not new to Debian, and indeed they inspired many of the key decisions we made when founding the Ubuntu project in 2004. We particularly wanted to foster a culture which was supportive, encouraging and welcoming to potential contributors, something Debian has struggled with. Ubuntu has been, quite deliberately, an experiment in finding solutions to problems such as these. We’ve learned a lot from this experiment, and I’ve always hoped that this would help to find solutions for Debian as well.

Unfortunately, I don’t think Debian has benefited from these Ubuntu experiments as much as we might have hoped. A common example of this is the Ubuntu Code of Conduct. The idea of a project code of conduct predates Ubuntu, of course, but we did help to popularize it within the free software community, and this is now a common (and successful) practice used by many free software projects. The idea of behavioral standards for Debian has been raised in various forms for years now, but never seems to get traction. Hearing people talk about it at DebConf, it sometimes seemed almost as if the idea was dismissed out of hand because it was too closely associated with Ubuntu.

I learned from Marga’s talk that Enrico Zini drafted a set of Debian Community Guidelines over four years ago, in 2006. It is perhaps a bit long and structured, but is basically excellent. Enrico has done a great job of compiling best practices for participating in an open community project. However, his document seems to be purely informational, without any official standing in the Debian project, and Debian community leaders have hesitated to make it something more.

Perhaps Ubuntu leaders (myself included) could have done more to nurture these ideas in Debian. At least in my experience, though, I found that my affiliation with Ubuntu almost immediately labeled me an “outsider” in Debian, even when I was still active as a developer, and this made it very difficult to make such proposals. Perhaps this is because Debian is proud of its independence, and does not want to be unduly influenced by external forces. Perhaps the initial “growing pains” of the Debian/Ubuntu relationship got in the way. Nonetheless, I think that Debian could be stronger by learning from Ubuntu, just as Ubuntu has learned so much from Debian.

Closing thoughts

I enjoyed this DebConf very much. This was the first DebConf to be hosted in the US, and there were many familiar faces that I hadn’t seen in some time. Columbia University offered an excellent location, and the presentation content was thought-provoking. There seemed to be a positive attitude toward Ubuntu, which was very good to see. Although there is always more work to do, it feels like we’re making progress in improving cooperation between Debian and Ubuntu.

I was a bit sad to leave, but was fortunate enough to meet up with Debian folk during my subsequent stay in the Boston area as well. It felt good to reconnect with this circle of friends again, and I hope to see you again soon.

Looking forward to next year’s DebConf in Bosnia.

Written by Matt Zimmerman

August 25, 2010 at 16:57

Embracing the Web

The web offers a compelling platform for developing modern applications. How can free software benefit more from web technology, and at the same time promote more software freedom on the web? What would the world be like if FLOSS web applications were as plentiful and successful as traditional FLOSS applications are today?

Web architecture

The web, as a collection of interlinked hypertext documents available on the Internet, has been well established for over a decade. However, the web as an application architecture is only just hitting its stride. With modern tools and frameworks, it’s relatively straightforward to build rich applications with browser-oriented frontends and HTTP-accessible backends.

This architecture has its limitations, of course: browser compatibility nightmares, limited offline capabilities, network latency, performance challenges, server-side scalability, a complicated multimedia story, and so on. Most of these are slowly but surely being addressed or ameliorated as web technology improves.

However, for a large class of applications, these limitations are easily outweighed by the advantages: cross-platform support, instantaneous upgrades, global availability, etc. The web enables developers to reach the largest audience of users with the most compelling functionality, and simplifies things for users by giving them immediate access to their digital lives from anywhere.

Some web advocates would go so far as to say that if an application can be built for the web, it should be built for the web because it will be more successful. It’s no surprise that new web applications are being developed at a staggering rate, and I expect this trend to continue.

So what?

This trend represents a significant threat, and a corresponding opportunity, to free software. Relatively few web applications are free software, and relatively few free software applications are built for the web. Therefore, the momentum which is leading developers and users to the web is also leading them (further) away from free software.

Traditionally, pragmatists have adopted free software applications because they offered immediate gratification: it’s much faster and easier to install a free software application than to buy a proprietary one. The SaaS model of web applications offers the same (and better) immediacy, so free software has lost some of its appeal among pragmatists, who instead turn to proprietary web applications. Why install and run a heavyweight client application when you can just click a link?

Many web applications—perhaps even a majority—are built using free software, but are not themselves free. A new generation of developers share an appreciation for free software tools and frameworks, but see little value in sharing their own software. To these developers, free software is something you use, not something you make.

Free software cannot afford to ignore the web. Instead, we should embrace the web more completely, more powerfully, and more effectively than proprietary systems do.

What would that look like?

In my view, a FLOSS client platform which fully embraced the web would:

  • treat web applications as first-class citizens. The web would not be just another application, represented by a browser, but more like a native application runtime. Web applications could feel much more “native” while still preserving the advantages of a web-style user experience. There would be no web browser: that’s a tool for legacy systems to run web applications within a compatibility environment.
  • provide a seamless experience for developers to build web applications. It would be as fast and easy to develop a trivial client/server web application (see the sketch after this list) as it is to write “Hello, world!” in PyGTK using Quickly. For bonus points, it would be easy to develop and run web applications locally, and then deploy directly to a PaaS or IaaS cloud.
  • empower the user to manage their applications and data regardless of where they are hosted. Traditional operating systems act as a connecting fabric for local applications, providing a shared namespace, file store and IPC mechanisms, but web applications are lacking this. The web’s security model requires that applications are thoroughly sandboxed from each other, but a mediating operating system could connect them in meaningful ways, just as web browsers store cookies and passwords for various websites while protecting them from each other.

Imagine a world where free web applications are as plentiful and malleable as free native applications are today. Developers would be able to branch, test and submit patches to them.

What about Chrome OS?

Chrome OS is a step in the right direction, but doesn’t yet realize this vision. It’s a traditional operating system which is stripped down and focused on running one application (a web browser) very, very well. In some ways, it elevates web applications to first-class status, though its paradigm is still fundamentally that of a web browser.

It is not designed for development, but for consuming the web. Developers who want to create and deploy web applications must use a more traditional operating system to do so.

It does not put the end user in control. On the contrary, the user is almost entirely dependent on SaaS applications for all of their needs.

Although it is constructed using free software, it does not seem to deliver the principles or benefits of software freedom to the web itself.

How?

Just as free software was bootstrapped on proprietary UNIX, the present-day web is fertile ground for the development of free web applications. The web is based on open standards. There are already excellent web development tools, web application frameworks and server software which are FLOSS. Leading-edge web browsers like Firefox and Chrome/Chromium, where much web innovation is happening today, are already open source.

This is a huge head start toward a free web. I think what’s missing is a client platform which catalyzes the development and use of FLOSS web applications.

Written by Matt Zimmerman

July 26, 2010 at 10:43

We’ve packaged all of the free software…what now?

Today, virtually all of the free software available can be found in packaged form in distributions like Debian and Ubuntu. Users of these distributions have access to a library of thousands of applications, ranging from trivial to highly sophisticated software systems. Developers can find a vast array of programming languages, tools and libraries for constructing new applications.

This is possible because we have a mature system for turning free software components into standardized modules (packages). Some software is more difficult to package and maintain, and I’m occasionally surprised to find something very useful which isn’t packaged yet, but in general, the software I want is packaged and ready before I realize I need it. Even the “long tail” of niche software is generally packaged very effectively.

Thanks to coherent standards, sophisticated management tools, and the principles of software freedom, these packages can be mixed and matched to create complete software stacks for a wide range of devices, from netbooks to supercomputing clusters. These stacks are tightly integrated, and can be tested, released, maintained and upgraded as a unit. The Debian system is unparalleled for this purpose, which is why Ubuntu is based on it. The vision of a free software operating system which is highly modular and customizable has been achieved.

Rough edges

This is a momentous achievement, and the Debian packaging system fulfills its intended purpose very well. However, there are a number of areas where it introduces friction, because the package model doesn’t quite fit some new problems. Most of these are becoming more common over time as technology evolves and changes shape.

  • Embedded systems need to be pared down to the essentials to minimize storage, distribution, computation and maintenance costs. Standardized packaging introduces excessive code, data and interdependency which make the system larger than necessary. Tight integration makes it difficult to bootstrap the system from scratch for custom hardware. Projects like Embedded Debian aim to adapt the Debian system to be more suitable for use in these environments, to varying degrees of success. Meanwhile, smart phones will soon become the most common type of computer globally.
  • Data, in contrast to software, has simple requirements. It just needs to be up to date and accessible to programs. Packaging and distributing it through the standardized packaging process is awkward, doesn’t offer tangible benefits, and introduces overhead. There have been extensive debates in Debian about how to handle large data sets. Meanwhile, this problem is becoming increasingly important as data science catalyzes a new wave of applications.
  • Client/server and other types of distributed applications are notoriously tricky to package. The packaging system works within the context of a single OS instance, and so relationships which span multiple OS instances (e.g. a server application which depends on a database running on another server) are not straightforward. Meanwhile, the web has become a first-class application development platform, and this kind of interdependency is extremely common on both clients and servers.
  • Cross-platform applications such as Firefox, Chromium and OpenOffice.org have long struggled with packaging. In order to be portable, they tend to bundle the components they depend on, such as libraries. Packagers strive for normalization, and want these applications to use the packaged versions of these libraries instead. Application developers build, test and ship one set of dependencies, but their users receive a different stack when they use the packaged version of the application. Developers on both sides are in constant tension as they expect their configuration to be the canonical one, and want it to be tightly integrated. Cross-platform application developers want to provide their own, application-specific cross-platform update mechanism, while distributions want to use the same mechanism for all their components.
  • Virtual appliances aim to combine application and operating system into a portable bundle. While a modular OS is definitely called for, appliances face some of the same problems as embedded systems as they need to be minimized. Furthermore, the appliance becomes a component in itself, and requires metadata, distribution mechanisms and so on. If someone wants to “install” a virtual appliance, how should that work? Packaging them up as .debs doesn’t make much sense for the same reasons that apply to large data sets. I haven’t seen virtual appliances really taking off, but I expect cloud to change that.
  • Runtime libraries for languages such as Perl, Python and Ruby provide their own packaging systems, which manage dependencies and other metadata, installation, upgrades and removal in a standardized way. Because these operate independently of the OS package manager, all sorts of problems arise. Projects such as GoboLinux have attempted to tie them together, to varying degrees of success. Meanwhile, each new programming language we invent comes with a different, incompatible package manager, and distribution developers need to spend time repackaging them into their preferred format.

Why are we stuck?

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
– Abraham Maslow

The packaging ecosystem is very strong. Not only do we have powerful tools for working with packages, we also benefit from packages being a well-understood concept, and having established processes for developing, exchanging and talking about them. Once something is packaged, we know what it is and how to work with it, and it “fits” into everything else. So, it is tempting to package everything in sight, as we already know how to make sense of packages. However, this may not always be the right tool for the job.

Various attempts have been made to extend the packaging concept to make it more general, for example:

  • Portage, of Gentoo fame, offers impressive flexibility by building packages with a custom configuration, tailored for the needs of the target system.
  • Conary, from rPath, offers finer-grained dependencies, powerful revision control and object-oriented build recipes.
  • Nix provides a consistent build and runtime environment, ensuring that programs are run with the same dependencies used to build them, by keeping the relevant versions installed. I don’t know much about it, but it sounds like all dependencies implicitly refer to an exact version.

Other package managers aim to solve a specific problem, such as providing lightweight package management for embedded systems, or lazy dependency installation, or fixing the filesystem hierarchy. There is a long list of package managers, operating at different levels of the system, which solve different problems.

Most of these systems suffer from a fundamental tradeoff: they are designed to manage the entire system, from the kernel through applications, and so they must be used wholesale in order to reap their full benefit. In other words, in their world, everything is a package, and anything which is not a package is out of scope. Therefore, each of these systems requires a separate collection of packages, and each time we invent a new one, its adherents set about packaging everything in the new format. It takes a very long time to do this, and most of them lose momentum before a mature ecosystem can form around them.

This lock-in effect makes it difficult for new packaging technologies to succeed.

Divide and Conquer

No single package management framework is flexible enough to accommodate all of the needs we have today. Even more importantly, a generic solution won’t account for the needs we will have tomorrow. I propose that in order to move forward, we must make it possible to solve packaging problems separately, rather than attempting to solve them all within a single system.

  • Decouple applications from the platform. Debian packaging is an excellent solution for managing the network of highly interdependent components which make up the core of a modern Linux distribution. It falls short, however, for managing the needs of modern applications: fast-moving, cross-platform and client/server (especially web). Let’s stop trying to fit these square pegs into round holes, and adopt a different solution for this space, preferably one which is comprehensible and useful to application developers so that they can do most of the work.
  • Treat data as a service. It’s no longer useful to package up documentation in order to provide local copies of it on every Linux system. The web is a much, much richer and more effective solution to that problem. The same principle is increasingly applicable to structured data. From documents and contacts to anti-virus signatures and PCI IDs, there’s much better data to be had “out there” on the web than “down here” on the local filesystem.
  • Simplify integration between packaging systems in order to enable a heterogeneous model. When we break the assumption that everything is a package, we will need new tools to manage the interfaces between different types of components. Applications will need to introspect their dependency chain (see the sketch after this list), and system management tools will need to be able to interrogate applications. We’ll need thoughtfully designed interfaces which provide an appropriate level of abstraction while offering sufficient flexibility to solve many different packaging problems. There is unarguably a cost to this heterogeneity, but I believe the benefits would easily outweigh the shortcomings of our current model.
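As a sketch of what “introspecting the dependency chain” could look like with today’s tools, here is a short Python example using python-apt (the Python bindings for APT used in Debian and Ubuntu) to walk one level of a package’s dependencies. Take it as an illustration of the kind of interface I have in mind, not a finished design; the package name is just an example:

    #!/usr/bin/env python
    # Print one level of a package's dependency chain using python-apt.
    import apt

    def show_dependencies(package_name):
        cache = apt.Cache()
        candidate = cache[package_name].candidate  # version that would be installed
        print('%s %s' % (package_name, candidate.version))
        for dep in candidate.dependencies:
            # Each dependency is a group of alternatives ("foo | bar").
            alternatives = ' | '.join(
                '%s (%s %s)' % (alt.name, alt.relation, alt.version)
                if alt.version else alt.name
                for alt in dep.or_dependencies)
            print('  Depends: %s' % alternatives)

    if __name__ == '__main__':
        show_dependencies('hello')  # GNU hello, a conveniently small example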

But I like things how they are!

We don’t have a choice. The world is changing around us, and distributions need to evolve with it. If we don’t adapt, we will eventually give way to systems which do solve these problems.

Take, for example, modern web browsers like Firefox and Chromium. Arguably the most vital application for users, the browser is coming under increasing pressure to keep up with the breakneck pace of innovation on the web. The next wave of real-time collaboration and multimedia applications relies on the rapid development of new capabilities in web browsers. Browser makers are responding by accelerating deployment in the field: both aggressively push new releases to their users. A report from Google found that Chrome upgrades 97% of their users within 21 days of a new release, and Firefox 85% (both impressive numbers). Mozilla recently changed their maintenance policies, discontinuing maintenance of stable releases and forcing Ubuntu to ship new upstream releases to users.

These applications are just the leading edge of the curve, and the pressure will only increase. Equally powerful trends are pressing server applications, embedded systems, and data to adapt as well. The ideas I’ve presented here are only one possible way forward; there are surely more and better ideas brewing in distribution communities, and I’m sure that I’m not the only one thinking about these problems.

Whatever it looks like in the end, I have no doubt that change is ahead.

Written by Matt Zimmerman

July 6, 2010 at 15:31
