We'll see | Matt Zimmerman

a potpourri of mirth and madness

Posts Tagged ‘Software quality’

DEX: Debian and its derivatives, getting things done together

Since I resumed active status in Debian, I’ve been thinking about how to bridge the gap between Debian and its derivatives*. I’ve spoken at length with Zack, the attendees of the Derivatives BoF at DebConf 10, and the fine folks at the Derivatives Front Desk about the technical and social issues affecting derivative projects, and could probably write a very thorough series of blog posts on the subject.

Instead, Zack and I decided to try doing something about it: we have begun a project to test out a new approach to the problem.

Introducing DEX

DEX is all about action: merging patches, fixing bugs, crunching data, whatever is necessary to get changes from derivatives into Debian proper. DEX doesn’t try to change the way any existing project works, but adds a “fast path” for getting code from one place to another.

DEX is a joint task force where developers from Debian and its derivatives work together on this common goal. As a pilot project, we’ve established an Ubuntu DEX Team focused on merging code from Ubuntu into Debian. With members from both projects, we hope to be able to resolve blockage anywhere in the pipeline. Whatever needs to get done in order to merge an Ubuntu patch, someone in the Ubuntu DEX team will know what to do. If we get good results with Ubuntu, we hope that other derivatives will follow. With thanks to David Paleino, we’re excited that the Utnubu project is merging into DEX as it aligns well with their goals. I’m very grateful to have Colin Watson and James Westby signed up to contribute as well.

Our first project is simple: turn this list green. This is an archive of quite old patches from Ubuntu, most of which have probably been merged already or made obsolete, but they pre-date any kind of tracking system so they need to be verified. Once that’s done, we’ll move on to a new project with a new todo list.

If you want to see Debian benefit from technical work done in derivatives, DEX is a chance for you to act together to make it happen. If you work on a derivative and want to carry a smaller delta, come and join us. I’m sure we’ll learn a lot from this experience.

* There are many instances of great cooperation between Debian and derivative distributions, including joint package maintenance teams, and some derivatives are even part of the Debian project. Nonetheless, there are areas where most people I’ve spoken to agree that we need to do better. This is what I’ve referred to as the “gap”.


Written by Matt Zimmerman

March 16, 2011 at 21:30

Listening to users

In the software community, people hold strong opinions on the subject of listening to users. Some feel that users are an essential source of information for making successful products, as evidenced in the customer development methodology, and seek to involve users deeply in product development. Others believe that users don’t know what they want, invoking the quote attributed to Henry Ford, “If I’d asked customers what they wanted, they would have said ‘a faster horse'”. Some say that user needs are unknowable except through the lens of a marketplace, where people choose in aggregate which products suit them best, and customers “vote with their wallets” (anything else is “anecdata”).

Regular readers will not be surprised that I believe they are all right, but only in certain contexts. The right strategy for involving users in product decisions will depend on factors related to the product itself, the market, and the product development method being used.

One of the most important is the life cycle stage of the product: is it a new and rapidly evolving concept, or a mature commodity, or somewhere in between? Simon Wardley explains this well over on his blog, so I won’t rehash his points here, but will add a few of my own.

If what we’re looking for is inspiration for a new product, it’s here that Henry Ford was right: users generally won’t hand you a complete product vision on a silver platter. They’ll frame their input in terms of what they know, and the choices already available to them. However, this doesn’t mean that users don’t have a role to play in this instance: watching users can be a great source of inspiration. It’s the combination of domain knowledge and passionate imagination which triggers the creative spark. Henry Ford applied his engineer’s interests to a problem which was evident all around him.

If our goal is to test whether a new product is a good fit for its users, there is no substitute for user feedback. We can guess at whether there is a fit, and our intuition may be good, but users are the ultimate judges, and we don’t know whether we’re right or wrong until they evaluate it. So ask them! By engaging in dialogue with individual users, we can learn unexpected things which help to refine the idea. If we don’t find out what they think until our new product is released, we risk building something that no one wants. Why wait until it’s too late? It can be challenging to extract useful feedback about a product which doesn’t yet exist, but doing so is well worth it to avoid wasting far more effort in engineering.

When our objective is to incrementally improve an existing product, individual anecdotes can mislead us. A given change may be an improvement for one user, but a disaster for another. What we want to know is whether the new version is better for the population as a whole, and in this case, we do well to rely on data. There are pitfalls here as well, of course. We need to choose our questions carefully, and realize that users will often resist any change: not because they’re stodgy by nature, but because they have to invest effort in adapting to the change. I think of incremental improvement as a joint investment made between product developers and their users, to make the whole system of people and technology better.
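To make the “rely on data” point concrete, here is a minimal sketch (in Python, with invented numbers) of the kind of comparison this implies: a two-proportion z-test on task success rates for the old and new versions. The figures and the 0.05 threshold are purely illustrative, not a recommendation.

    import math

    def two_proportion_z_test(successes_a, total_a, successes_b, total_b):
        # Compare two success proportions; return (z statistic, two-sided p-value).
        p_pooled = (successes_a + successes_b) / (total_a + total_b)
        se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / total_a + 1 / total_b))
        z = (successes_b / total_b - successes_a / total_a) / se
        # Two-sided p-value from the standard normal distribution
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical usability-test results: users who completed a task
    z, p = two_proportion_z_test(successes_a=412, total_a=500,   # old version
                                 successes_b=447, total_b=500)   # new version
    print(f"z = {z:.2f}, p = {p:.4f}")
    if p < 0.05:
        print("The difference is unlikely to be noise; the data favours one version.")
    else:
        print("No clear difference; individual anecdotes alone would mislead either way.")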

By choosing the right tool for the job, we can make better decisions, improve faster, and ultimately solve the right problem for our users.

Written by Matt Zimmerman

March 7, 2011 at 12:56

Ubuntu Brainstorm Top 10 for December 2010

As I mentioned recently, the Ubuntu Technical Board is reviewing the most popular topics in Ubuntu Brainstorm and coordinating official responses on behalf of the project, so that these topics receive expert answers from the people working in the relevant areas.

This is the first batch, and we plan to repeat this process each quarter. We’ll use feedback and experiences from this run to improve it for next time, so let us know what you think.

Power management (idea #24782)

Laptops are now outselling desktops globally, and laptop owners want to get the most out of their expensive and heavy batteries. So it’s no surprise that people are wondering about improved power management in Ubuntu. This is a complex topic which spans the Linux software stack, and certainly isn’t an issue which will be “solved” in the foreseeable future, but we see a lot of good work being done in this area.

To tell us about it, Amit Kucheria, Ubuntu kernel developer and leader of the Linaro working group on Power Management, contributed a great writeup on this topic, with technical analysis, tips and recommendations, and a look at what’s coming next.

I am going to attempt to summarize the various use profiles and what Ubuntu does (or can do) to prolong battery life in those profiles. Power management, when done right, should not require the user to make several (difficult) choices. It should just work – providing a good balance of performance and battery life.

IP address conflicts (idea #25648)

IP addressing is a subject that most people should never have to think about. When something isn’t working, and two computers end up with the same IP address, it can be hard to tell what’s wrong. I was personally surprised to find this one near the top of the list on Ubuntu Brainstorm, since it seems unlikely to be a very common problem. Nonetheless, it was voted up, and we’re listening.

There is a tool called ipwatchd which is already available in the package repository, and was created specifically to address this problem. This seems like a further indication that this problem may be more widespread than I might assume.

The idea has already been marked as “implemented” in Brainstorm based on the existence of this package, but that doesn’t help people who have never heard of ipwatchd, much less found and installed it.
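For the curious, here is a rough sketch, in Python with the scapy library, of the kind of check a tool like ipwatchd performs: broadcast an ARP request for our own address and see whether a machine with a different MAC address answers. This is only an illustration of the idea, not ipwatchd’s actual code, and the interface name and IP address are placeholders.

    # Illustration of ARP-based duplicate address detection (not ipwatchd itself).
    # Needs root privileges and the scapy library (python3-scapy).
    from scapy.all import ARP, Ether, srp, get_if_hwaddr

    IFACE = "eth0"            # placeholder interface name
    MY_IP = "192.168.1.50"    # placeholder address assigned to this host

    def find_conflicts(iface, my_ip):
        my_mac = get_if_hwaddr(iface).lower()
        # "Who has my_ip?" broadcast to the local network
        probe = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=my_ip)
        answered, _ = srp(probe, iface=iface, timeout=2, verbose=False)
        # A reply from any MAC other than our own means someone else claims our address
        return [rcv.hwsrc for _, rcv in answered if rcv.hwsrc.lower() != my_mac]

    conflicts = find_conflicts(IFACE, MY_IP)
    if conflicts:
        print(f"Conflict: {MY_IP} is also claimed by {', '.join(conflicts)}")
    else:
        print("No conflict detected")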

What do you think? Have you ever run into this problem? Would it have helped you if your computer had told you what was wrong, or would it have only confused you further? Is it worth considering this for inclusion in the default install? Post your comments in Brainstorm.

Selecting the only available username to login (idea #6974)

Although Linux is designed as a multi-user operating system, most Ubuntu systems are only used by one person. In that light, it seems a bit redundant to ask the user to identify themselves every time they log in, by clicking on their username. Why not just preselect it? Indeed, this would be relatively simple to implement, but the real question is whether it is the right choice for users.

Martin Pitt of the Ubuntu Desktop Team notes that consistency is an important factor in ease of use, and asks for further feedback.

So in summary, we favored consistency and predictability over the extra effort to press Enter once. This hasn’t been a very strong opinion or decision, though, and the desktop team would be happy to revise it.

Icon for .deb packages (idea #25197)

Building on the invaluable efforts of Debian developers, we work hard to make sure that people can get all of the software they need from Ubuntu repositories through Software Center and APT, where they are authenticated and secure. However, in practice, it is occasionally necessary for users to work with .deb files directly.

Brainstorm idea 25197 suggests that the icon used to represent .deb packages in the file manager is not ideal, and can be confusing.

Matthew Paul Thomas of the Canonical Design Team responds with encouragement for deb-thumbnailer, which makes the icon both more distinctive and more informative. He has opened bug 685851 to track progress on getting it packaged and into the main repository.

I have reviewed the proposed solutions with Michael Vogt, our packaging expert. Solution #1 is straightforward, but we particularly like solutions #5 and #10, using a thumbnailer to show the application icon from inside each package.
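To illustrate the thumbnailer approach (this is not the actual deb-thumbnailer code), the sketch below asks dpkg-deb to stream a package’s contents and pulls out the first icon it finds under an icon or pixmap directory; the package path is whatever .deb you point it at.

    # Illustrative sketch: extract an application icon from a .deb for use as a thumbnail.
    import subprocess
    import sys
    import tarfile

    def extract_first_icon(deb_path, output_path="icon.png"):
        # dpkg-deb --fsys-tarfile writes the package's data archive to stdout as a tar stream
        proc = subprocess.Popen(["dpkg-deb", "--fsys-tarfile", deb_path],
                                stdout=subprocess.PIPE)
        with tarfile.open(fileobj=proc.stdout, mode="r|") as tar:
            for member in tar:
                name = member.name
                if name.endswith(".png") and ("/icons/" in name or "/pixmaps/" in name):
                    with open(output_path, "wb") as out:
                        out.write(tar.extractfile(member).read())
                    return name
        return None

    if __name__ == "__main__":
        found = extract_first_icon(sys.argv[1])
        print(f"extracted {found}" if found else "no icon found in package")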

Keeping the time accurate over the Internet by default (idea #25301)

It’s important for an Internet connected computer to know the correct time of day, which is why Ubuntu has included automatic Internet time synchronization with NTP since the very first release (4.10 “warty”). So some of us were a little surprised to see this as one of the most popular ideas on Ubuntu Brainstorm.

Colin Watson of the Ubuntu Technical Board investigated and discovered a case where this wasn’t working correctly. It’s now fixed for Ubuntu 11.04, and Colin has sent the patches upstream to Debian and GNOME.

My first reaction was “hey, that’s odd – I thought we already did that?”. We install the ntpdate package by default (although it’s deprecated upstream in favour of other tools, but that shouldn’t be important here). ntpdate is run from /etc/network/if-up.d/ntpdate, in other words every time you connect to a network, which should be acceptably frequent for most people, so it really ought to Just Work by default. But this is one of the top ten problems where users have gone to the trouble of proposing solutions on Brainstorm, so it couldn’t be that simple. What was going on?

More detail in GNOME system monitor (idea #25887)

Under System, Preferences, System Monitor, you can find a tool to peek “under the hood” at the Linux processes which power every Ubuntu system. Power users, hungry for more detail on their systems’ inner workings, voted to suggest that more detail be made available through this interface.

Robert Ancell of the Ubuntu Desktop Team answered their call by offering to mentor a volunteer to develop a patch, and someone has already stepped up with a first draft.

Help the user understand when closing a window does not close the app (idea #25801)

When the user clicks the close button, most applications obediently exit. A few, though, will just hide, and continue running, because they assume that’s what the user actually wants, and it can be hard to tell which has happened.

Ivanka Majic, Creative Strategy Lead at Canonical, shares her perspective on this issue, with a pointer to work in progress to resolve it.

This is more than a good idea; it’s an important gap in the usability of most of the desktop operating systems in widespread use today.

Ubuntu Software Centre Removal of Configuration Files (idea #24963)

One feature of the Debian packaging system used in Ubuntu is that it draws a distinction between “removing” a package and “purging” it. Purging should remove all traces of the package, such that installing and then immediately purging a package should return the system to the same state. Removing will leave certain files behind, including system configuration files and sometimes runtime data.
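A quick way to see the distinction on any Debian or Ubuntu system is dpkg’s status database: a package that has been removed but not purged is recorded with the status “deinstall ok config-files”. The sketch below simply lists such packages; it is illustrative only, and purging any of them is what “apt-get purge” (or a “delete configuration files” option, where a UI offers one) would do.

    # List packages that were removed but still have configuration files on disk.
    # Illustrative sketch; the same information appears as the "rc" lines in `dpkg -l`.
    import subprocess

    def removed_but_not_purged():
        out = subprocess.run(
            ["dpkg-query", "-W", "-f", "${Package}\t${Status}\n"],
            capture_output=True, text=True, check=True).stdout
        leftovers = []
        for line in out.splitlines():
            package, status = line.split("\t", 1)
            # "deinstall ok config-files": removal requested, config files remain
            if status.strip() == "deinstall ok config-files":
                leftovers.append(package)
        return leftovers

    for pkg in removed_but_not_purged():
        print(pkg)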

This subtle distinction is useful to system administrators, but only serves to confuse most end users, so it’s not exposed by Software Center: it just defaults to “removing” packages. This proposal in Ubuntu Brainstorm suggests that Software Center should purge packages by default instead.

Michael Vogt of the Ubuntu Foundations Team explains the reasoning behind this default, and offers an alternative suggestion based on his experience with the package management system.

This is not an easy problem, and we need to carefully balance the need to keep the UI simple against the need to keep the system from accumulating cruft.

Ubuntu One file sync progress (idea #25417)

Ubuntu One file synchronization works behind the scenes, uploading and downloading as needed to replicate your data to multiple computers. It does most of its work silently, and it can be hard to tell what it is doing or when it will be finished.

John Lenton, engineering manager for the Ubuntu One Desktop+ team, posts on the AskUbuntu Q&A site with tools and tips which work today, and their plans to address this issue comprehensively in the future.

Multimedia performance (idea #24878)

With a cornucopia of multimedia content available online today, it’s important that users be able to access it quickly and easily. Poor performance in the audio, video and graphics subsystems can spoil the experience, if resource-hungry multimedia applications can’t keep up with the flow of data.

Allison Randal, Ubuntu Technical Architect, answers with an analysis of the problem and the proposed solutions, an overview of current activity in this area, and pointers for getting involved.

The fundamental concern is a classic one for large systems: changes in one part of the system affect the performance of another part of the system. It’s modestly difficult to measure the performance effects of local changes, but exponentially more difficult to measure the “network effects” of changes across the system.

Written by Matt Zimmerman

December 10, 2010 at 14:04

Weathering the Ubuntu brainstorm

In our first few years, Ubuntu experienced explosive growth, from zero to millions of users. Because Ubuntu is an open project, these people don’t just use Ubuntu, but can see what’s happening next and influence it through suggestions and contributions. The volume of suggestions quickly became unmanageable through ad hoc discussion, overwhelming the relatively few people who were actively developing Ubuntu.


In order to better manage user feedback at this scale, Ubuntu Brainstorm was created in 2008. It’s a collaborative filtering engine which allows anyone to contribute an idea, and have it voted on by others. Since then, it’s been available to Ubuntu developers and leaders as an information source, which has been used in various ways. The top ideas are printed in the Ubuntu Weekly Newsletter each week. We experimented with producing a report each release cycle and sharing it with the developer community. People have been encouraged to take these suggestions to the Ubuntu Developer Summits. We continue to look for new and better ways to process the feedback provided by the user community.

Most recently, I asked my colleagues on the Ubuntu Technical Board in a meeting whether we should take responsibility for responding to the feedback available in Ubuntu Brainstorm. They agreed that this was worth exploring, and I put forward a proposal for how it might work. The proposal was unanimously accepted at a later meeting, and I’m working on the first feedback cycle now.

In short, the Technical Board will ensure that, every three months, the highest voted topics on Ubuntu Brainstorm receive an official response from the Ubuntu project. The Technical Board won’t respond to all of them personally, but will identify subject matter experts within the project, ask them to write a short response, and compile these responses for publication.
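Mechanically, the selection step is straightforward; the sketch below (with invented data) shows the sort of ranking involved: take the ideas that were active in the last six months and keep the most popular ones for expert responses. It is only an illustration of the process, not code that Brainstorm or the Technical Board actually runs.

    # Illustrative only: rank recent Brainstorm ideas by votes and pick the top ones.
    from datetime import date, timedelta

    ideas = [  # invented sample data; the real source is brainstorm.ubuntu.com
        {"title": "Idea A", "votes": 900, "last_activity": date(2010, 11, 2)},
        {"title": "Idea B", "votes": 750, "last_activity": date(2010, 6, 14)},
        {"title": "Idea C", "votes": 680, "last_activity": date(2009, 12, 30)},
    ]

    def top_ideas(ideas, months=6, count=10, today=date(2010, 12, 1)):
        cutoff = today - timedelta(days=months * 30)
        recent = [i for i in ideas if i["last_activity"] >= cutoff]
        return sorted(recent, key=lambda i: i["votes"], reverse=True)[:count]

    for idea in top_ideas(ideas):
        print(f"{idea['title']}: {idea['votes']} votes")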

My hope is that this approach will bring more visibility to common user concerns, help users understand what we’re doing with their feedback, and generally improve transparency in Ubuntu. We’ve already selected the topics for the first iteration based on the most popular items of the past six months, and are organizing responses now. Please visit brainstorm.ubuntu.com and cast your votes for next time!

Written by Matt Zimmerman

November 3, 2010 at 11:55

We’ve packaged all of the free software…what now?

Today, virtually all of the free software available can be found in packaged form in distributions like Debian and Ubuntu. Users of these distributions have access to a library of thousands of applications, ranging from trivial to highly sophisticated software systems. Developers can find a vast array of programming languages, tools and libraries for constructing new applications.

This is possible because we have a mature system for turning free software components into standardized modules (packages). Some software is more difficult to package and maintain, and I’m occasionally surprised to find something very useful which isn’t packaged yet, but in general, the software I want is packaged and ready before I realize I need it. Even the “long tail” of niche software is generally packaged very effectively.

Thanks to coherent standards, sophisticated management tools, and the principles of software freedom, these packages can be mixed and matched to create complete software stacks for a wide range of devices, from netbooks to supercomputing clusters. These stacks are tightly integrated, and can be tested, released, maintained and upgraded as a unit. The Debian system is unparalleled for this purpose, which is why Ubuntu is based on it. The vision, for a free software operating system which is highly modular and customizable, has been achieved.

Rough edges

This is a momentous achievement, and the Debian packaging system fulfills its intended purpose very well. However, there are a number of areas where it introduces friction, because the package model doesn’t quite fit some new problems. Most of these are becoming more common over time as technology evolves and changes shape.

  • Embedded systems need to be pared down to the essentials to minimize storage, distribution, computation and maintenance costs. Standardized packaging introduces excessive code, data and interdependency which make the system larger than necessary. Tight integration makes it difficult to bootstrap the system from scratch for custom hardware. Projects like Embedded Debian aim to adapt the Debian system to be more suitable for use in these environments, to varying degrees of success. Meanwhile, smart phones will soon become the most common type of computer globally.
  • Data, in contrast to software, has simple requirements. It just needs to be up to date and accessible to programs. Packaging and distributing it through the standardized packaging process is awkward, doesn’t offer tangible benefits, and introduces overhead. There have been extensive debates in Debian about how to handle large data sets. Meanwhile, this problem is becoming increasingly important as data science catalyzes a new wave of applications.
  • Client/server and other types of distributed applications are notoriously tricky to package. The packaging system works within the context of a single OS instance, and so relationships which span multiple OS instances (e.g. a server application which depends on a database running on another server) are not straightforward. Meanwhile, the web has become a first-class application development platform, and this kind of interdependency is extremely common on both clients and servers.
  • Cross-platform applications such as Firefox, Chromium and OpenOffice.org have long struggled with packaging. In order to be portable, they tend to bundle the components they depend on, such as libraries. Packagers strive for normalization, and want these applications to use the packaged versions of these libraries instead. Application developers build, test and ship one set of dependencies, but their users receive a different stack when they use the packaged version of the application. Developers on both sides are in constant tension as they expect their configuration to be the canonical one, and want it to be tightly integrated. Cross-platform application developers want to provide their own, application-specific cross-platform update mechanism, while distributions want to use the same mechanism for all their components.
  • Virtual appliances aim to combine application and operating system into a portable bundle. While a modular OS is definitely called for, appliances face some of the same problems as embedded systems as they need to be minimized. Furthermore, the appliance becomes a component in itself, and requires metadata, distribution mechanisms and so on. If someone wants to “install” a virtual appliance, how should that work? Packaging them up as .debs doesn’t make much sense for the same reasons that apply to large data sets. I haven’t seen virtual appliances really taking off, but I expect cloud to change that.
  • Runtime libraries for languages such as Perl, Python and Ruby provide their own packaging systems, which manage dependencies and other metadata, installation, upgrades and removal in a standardized way. Because these operate independently of the OS package manager, all sorts of problems arise. Projects such as GoboLinux have attempted to tie them together, to varying degrees of success. Meanwhile, each new programming language we invent comes with a different, incompatible package manager, and distribution developers need to spend time repackaging them into their preferred format.
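To make the last point above concrete, here is a small sketch of the dual-ownership problem: the same Python library can be installed as a distribution package (owned by dpkg) and again through the language’s own tooling (pip), and neither system knows about the other. The module name is just an example.

    # Illustrative sketch: is this Python module managed by dpkg, by pip, or by both?
    import importlib
    import subprocess
    from importlib import metadata

    MODULE = "requests"  # example module; substitute any library you use

    path = importlib.import_module(MODULE).__file__

    # Ask dpkg which package, if any, owns the file the interpreter actually imported
    dpkg = subprocess.run(["dpkg", "-S", path], capture_output=True, text=True)
    if dpkg.returncode == 0:
        print("dpkg says:", dpkg.stdout.strip())
    else:
        print("dpkg does not own", path)

    # Ask Python's own packaging metadata about the same library
    try:
        print("Python packaging metadata says: version", metadata.version(MODULE))
    except metadata.PackageNotFoundError:
        print("no Python packaging metadata found for", MODULE)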

Why are we stuck?

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.
– Abraham Maslow

The packaging ecosystem is very strong. Not only do we have powerful tools for working with packages, we also benefit from packages being a well-understood concept, and having established processes for developing, exchanging and talking about them. Once something is packaged, we know what it is and how to work with it, and it “fits” into everything else. So, it is tempting to package everything in sight, as we already know how to make sense of packages. However, this may not always be the right tool for the job.

Various attempts have been made to extend the packaging concept to make it more general, for example:

  • Portage, of Gentoo fame, offers impressive flexibility by building packages with a custom configuration, tailored for the needs of the target system.
  • Conary, from rPath, offers finer-grained dependencies, powerful revision control and object-oriented build recipes.
  • Nix provides a consistent build and runtime environment, ensuring that programs are run with the same dependencies used to build them, by keeping the relevant versions installed. I don’t know much about it, but it sounds like all dependencies implicitly refer to an exact version.

Other package managers aim to solve a specific problem, such as providing lightweight package management for embedded systems, or lazy dependency installation, or fixing the filesystem hierarchy. There is a long list of package managers operating at various levels of the stack, each solving different problems.

Most of these systems suffer from an important fundamental tradeoff: they are designed to manage the entire system, from the kernel through applications, and so they must be used wholesale in order to reap their full benefit. In other words, in their world, everything is a package, and anything which is not a package is out of scope. Therefore, each of these systems requires a separate collection of packages, and each time we invent a new one, its adherents set about packaging everything in the new format. It takes a very long time to do this, and most of them lose momentum before a mature ecosystem can form around them.

This lock-in effect makes it difficult for new packaging technologies to succeed.

Divide and Conquer

No single package management framework is flexible enough to accommodate all of the needs we have today. Even more importantly, a generic solution won’t account for the needs we will have tomorrow. I propose that in order to move forward, we must make it possible to solve packaging problems separately, rather than attempting to solve them all within a single system.

  • Decouple applications from the platform. Debian packaging is an excellent solution for managing the network of highly interdependent components which make up the core of a modern Linux distribution. It falls short, however, for managing the needs of modern applications: fast-moving, cross-platform and client/server (especially web). Let’s stop trying to fit these square pegs into round holes, and adopt a different solution for this space, preferably one which is comprehensible and useful to application developers so that they can do most of the work.
  • Treat data as a service. It’s no longer useful to package up documentation in order to provide local copies of it on every Linux system. The web is a much, much richer and more effective solution to that problem. The same principle is increasingly applicable to structured data. From documents and contacts to anti-virus signatures and PCI IDs, there’s much better data to be had “out there” on the web than “down here” on the local filesystem.
  • Simplify integration between packaging systems in order to enable a heterogeneous model. When we break the assumption that everything is a package, we will need new tools to manage the interfaces between different types of components. Applications will need to introspect their dependency chain, and system management tools will need to be able to interrogate applications. We’ll need thoughtfully designed interfaces which provide an appropriate level of abstraction while offering sufficient flexibility to solve many different packaging problems. There is unarguably a cost to this heterogeneity, but I believe the shortcomings of our current model easily outweigh it.

But I like things how they are!

We don’t have a choice. The world is changing around us, and distributions need to evolve with it. If we don’t adapt, we will eventually give way to systems which do solve these problems.

Take, for example, modern web browsers like Firefox and Chromium. Arguably the most vital application for users, the browser is coming under increasing pressure to keep up with the breakneck pace of innovation on the web. The next wave of real-time collaboration and multimedia applications relies on the rapid development of new capabilities in web browsers. Browser makers are responding by accelerating deployment in the field: both aggressively push new releases to their users. A report from Google found that Chrome upgrades 97% of their users within 21 days of a new release, and Firefox 85% (both impressive numbers). Mozilla recently changed their maintenance policies, discontinuing maintenance of stable releases and forcing Ubuntu to ship new upstream releases to users.

These applications are just the leading edge of the curve, and the pressure will only increase. Equally powerful trends are pressing server applications, embedded systems, and data to adapt as well. The ideas I’ve presented here are only one possible way forward, and I’m sure there are more and better ideas brewing in distribution communities. I’m sure that I’m not the only one thinking about these problems.

Whatever it looks like in the end, I have no doubt that change is ahead.

Written by Matt Zimmerman

July 6, 2010 at 15:31

DevOps and Cloud

DevOps

I first heard about DevOps from Lindsay Holmwood at linux.conf.au 2010. Since then, I’ve been following the movement with interest. It seems to be about cross-functional involvement in software teams, specifically between software development and system administration (or operations). In many organizations, especially SaaS shops, these two groups are placed in opposition to each other: developers are driven to deliver new features to users, while system administrators are held accountable for the operation of the service. In the best case, they maintain a healthy balance by pushing in opposite directions, but more typically, they resent each other for getting in the way, as a result of this dichotomy:

                          Development                    Operations
is responsible for…       creating products              offering services
is measured on…           delivery of new features       high reliability
optimizes by…             increasing velocity            controlling change
and so is perceived as…   reckless and irresponsible     obstructing progress

Of course, both functions are essential to a viable service, and so DevOps aims to replace this opposition with cooperation. By removing this friction from the organization, we hope to improve efficiency, lower costs, and generally get more work done.


So, DevOps promotes the formation of cross-functional teams, where individuals still take on specialist “development” or “operations” roles, but work together toward the common goal of delivering a great experience to users. By working as teammates, rather than passing work “over the wall”, they can both contribute to development, deployment and maintenance according to their skills and expertise. The team becomes a “devops” team, and is responsible for the entire product life cycle. Particular tasks may be handled by specialists, but when there’s a problem, it’s the team’s problem.

Some take it a step further, and feel that what’s needed is to combine the two disciplines, so that individuals contribute in both ways. Rather than thinking of themselves as “developers” or “sysadmins”, these folks consider themselves “devops”. They work to become proficient in both roles, and to synthesize new ways of working by drawing on both types of skills and experience. A common crossover activity is the development of sophisticated tools for automating deployment, monitoring, capacity management and failure resolution.

DevOps meets Cloud

Like DevOps, cloud is not a specific technology or method, but a reorganization of the model (as I’ve written previously). It’s about breaking down the problem in a different way, splitting and merging its parts, and creating a new representation which doesn’t correspond piece-for-piece to the old one.

DevOps drives cloud adoption because cloud offers a richer toolkit for the way DevOps teams work: fast, flexible, efficient. Tools like Amazon EC2 and Google App Engine solve the right sorts of problems. Cloud also drives DevOps because it calls into question the traditional way of organizing software teams. A development/operations division just doesn’t “fit” cloud as well as a DevOps model does.

Deployment is a classic duty of system administrators. In many organizations, only the IT department can implement changes in the production environment. Reaping the benefits of an IaaS environment requires deploying through an API, and therefore deployment requires development. While it is already common practice for system administrators to develop tools for automating deployment, and tools like Puppet and Chef are gaining momentum, IaaS makes this a necessity, and raises the bar in terms of sophistication. Doing this well requires skills and knowledge from both sides of the “fence” between development and operations, and can accelerate development as well as promote stability in production.
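As a concrete (if simplified) illustration of “deployment requires development”, the sketch below launches a virtual server through Amazon EC2’s API using the boto3 library; the AMI ID, key pair name and instance type are placeholders, and error handling and teardown are omitted.

    # Minimal sketch of API-driven deployment on an IaaS provider (EC2 via boto3).
    # The image ID, key name and instance type are placeholders, not recommendations.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder Ubuntu image ID
        InstanceType="t3.micro",
        KeyName="deploy-key",              # placeholder SSH key pair name
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "web"}],
        }],
    )

    print("launched", response["Instances"][0]["InstanceId"])

Once provisioning looks like this, it can be reviewed, tested and version-controlled like any other code, which is exactly where development and operations skills meet.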


This is exemplified by infrastructure service providers like Amazon Web Services, where customers pay by the hour for “black box” access to computing resources. How those resources are provisioned and maintained is entirely Amazon’s problem, while its customers must decide how to deploy and manage their applications within Amazon’s IaaS framework. In this scenario, some operations work has been explicitly outsourced to Amazon, but IaaS is not a substitute for system administration. Deployment, monitoring, failure recovery, performance management, OS maintenance, system configuration, and more are still needed. A development team which is lacking the experience or capacity for this type of work cannot simply “switch” to an IaaS model and expect these needs to be taken care of by their service provider.


With platform service providers, the boundaries are different. Developers, if they build their application on the appropriate platform, can effectively outsource (mostly) the management of the entire production environment to their service provider. The operating system is abstracted away, and its maintenance can be someone else’s problem. For applications which can be built with the available facilities, this will be a very attractive option for many organizations. The customers of these services may be traditional developers, who have no need for operations expertise. PaaS providers, though, will require deep expertise in both disciplines in order to build and improve their platform and services, and will likely benefit from a DevOps approach.

Technical architecture draws on both development and operations expertise, because design goals like performance and robustness are affected by all layers of the stack, from hardware, power and cooling all the way up to application code. DevOps itself promotes greater collaboration on architecture, by involving experts in both disciplines, but cloud is a great catalyst because cloud architecture can be described in code. Rather than talking to each other about their respective parts of the system, they can work together on the whole system at once. Developers, sysadmins and hybrids can all contribute to a unified source tree, containing both application code and a description of the production environment: how many virtual servers to deploy, their specifications, which components run on which servers, how they are configured, and so on. In this way, system and network architecture can evolve in lockstep with application architecture.
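For instance, the description of the production environment living next to the application code might be no more than a structured document like the hypothetical sketch below, which developers, sysadmins and hybrids can all read, review and evolve in the same source tree.

    # Hypothetical example of an environment description kept in the application's
    # source tree; deployment tooling would read this to provision and configure servers.
    PRODUCTION = {
        "web": {
            "count": 4,
            "instance_type": "t3.medium",
            "runs": ["nginx", "app-server"],
            "config": {"workers": 8},
        },
        "database": {
            "count": 2,
            "instance_type": "m5.large",
            "runs": ["postgresql"],
            "config": {"replication": "hot-standby"},
        },
    }

    def total_servers(env):
        return sum(role["count"] for role in env.values())

    print("production footprint:", total_servers(PRODUCTION), "virtual servers")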

Cloudy promises such as dynamic scaling and fault tolerance call for a DevOps approach in order to be realized in a real-world scenario. These systems involve dynamically manipulating production infrastructure in response to changing conditions, and the application must adapt to these changes. Whether this takes the form of an active, intelligent response or a passive crash-only approach, development and operational considerations need to be aligned.

So what?

DevOps and cloud will continue to reinforce each other and gain momentum. Both individuals and organizations will need to adapt in order to take advantage of the opportunities provided by these new models. Because they’re complementary, it makes sense to adopt them together, so those with expertise in both will be at an advantage.

Written by Matt Zimmerman

June 8, 2010 at 10:28

The behavioral economics of free software

People who use and promote free software cite various reasons for their choice, but do those reasons tell the whole story? If, as a community, we want free software to continue to grow in popularity, especially in the mainstream, we should understand better the true reasons for choosing it—especially our own.

Some believe that it offers higher quality, that the availability of source code results in a better product with higher reliability. Although it’s difficult to do an apples-to-apples comparison of software, there are certainly instances where free software components have been judged superior to their proprietary counterparts. I’m not aware of any comprehensive analysis of the general case, though, and there is plenty of anecdotal evidence on both sides of the debate.

Others prefer it for humanitarian reasons, because it’s better for society or brings us closer to the world we want to live in. These are more difficult to analyze objectively, as they are closely linked to the individual, their circumstances and their belief system.

For developers, a popular reason is the possibility of modifying the software to suit their needs, as enshrined in the Free Software Foundation’s freedom 1. This is reasonable enough, though the practical value of this opportunity will vary greatly depending on the software and circumstances.

The list goes on: cost savings, educational benefits, universal availability, social rewards, etc.

The wealth of evidence of cognitive bias indicates that we should not take these preferences at face value. Not only are human choices seldom rational, they are rarely well understood even by the humans themselves. When asked to explain our preferences, we often have a ready answer—indeed, we may never run out of reasons—but they may not withstand analysis. We have many different ways of fooling ourselves with regard to our own past decisions and held beliefs, as well as those of others.

Behavioral economics explores the way in which our irrational behavior affects economies, and the results are curious and subtle. For example, the riddle of experience versus memory (TED video), or the several examples in “The Marketplace of Perception” (Harvard Magazine article). I think it would be illuminating to examine free software through this lens, and consider that the vagaries of human perception may have a very strong influence on our choices.

Some questions for thought:

  • Does using free software make us happier? If so, why? If not, why do we use it anyway?
  • Do we believe in free software because we have a great experience using it, or because we feel good about having used it? (Daniel Kahneman explains the difference)
  • Why do we want other people to use free software? Is it only because we want them to share our preference, or because we will benefit ourselves, or do we believe they will appreciate it for their own reasons?

If you’re aware of any studies along these lines, I would be interested to read about them.

Written by Matt Zimmerman

May 25, 2010 at 10:42

Ubuntu 10.10 (Maverick) Developer Summit

I spent last week at the Ubuntu Developer Summit in Belgium, where we kicked off the 10.10 development cycle.

Due to our time-boxed release cycle, not everything discussed here will necessarily appear in Ubuntu 10.10, but this should provide a reasonable overview of the direction we’re taking.

Presentations

While most of our time at UDS is spent in small group meetings to discuss specific topics, there are also a handful of presentations to convey key information and stimulate creative thinking.

A few of the more memorable ones for me were:

  • Mark Shuttleworth talked about the desktop, in particular the introduction of the new Unity shell for Ubuntu Netbook Edition
  • Fanny Chevalier presented Diffamation, a tool for visualizing and navigating the history of a document in a very flexible and intuitive way
  • Rick Spencer talked about the development process for 10.10 and some key changes in it, including a greater focus on meeting deadlines for freezes (and making fewer exceptions)
  • Stefano Zacchiroli, the current Debian project leader, gave an overview of how Ubuntu and Debian developers are working together today, and how this could be improved. He has posted a summary on the debian-project mailing list.

The talks were all recorded, though they may not all be online yet.

Foundations

The Foundations team provides essential infrastructure, tools, packages and processes which are central to the development of all Ubuntu products. They make it possible for the desktop and server teams to focus on their areas of expertise, building on a common base system and development procedures.

Highlights from their track:

Desktop

The desktop team manages both Desktop Edition and Netbook Edition, on a mission to provide a top-notch experience to users across a range of client computing devices.

Highlights from their track:

Server/Cloud

The server team is charging ahead with making Ubuntu the premier server OS for cloud computing environments.

Highlights from their track:

ARM

Kiko Reis gave a talk introducing ARM and the corresponding opportunity for Ubuntu. The ARM team ran a full track during the week on all aspects of their work, from the technical details of the kernel and toolchain, to the assembly of a complete port of Netbook Edition 10.10 for several ARM platforms.

Kernel

The kernel team provided essential input and support for the above efforts, and also held their own track where they selected 2.6.35 as their target version, agreed on a variety of changes to the Ubuntu kernel configuration, and created a plan for providing backports of newer kernels to LTS releases to support deployment on newer hardware.

Security

Like the kernel team, the security team provided valuable input into the technical plans being developed by other teams, and also organized a security track to tackle some key security topics such as clarifying the duration of maintenance for various collections of packages, and the ongoing development of AppArmor and Ubuntu’s AppArmor profiles.

QA

The QA team focuses on testing, test automation and bug management throughout the project. While quality is everyone’s responsibility, the QA team helps to coordinate these activities across different teams, establish common standards, and maintain shared infrastructure and tools.

Highlights from their track include:

Design

The design team organized a track at UDS for the first time this cycle, and team manager Ivanka Majic gave a presentation to help explain its purpose and scope.

Toward the end of the week, I joined in a round table discussion about some of the challenges faced by the team in engaging with the Ubuntu community and building support for their work. This is a landmark effort in bringing professional design together with a free software community, and there is still much to learn about how to do this well.

Community

The community track discussed the usual line-up of events, outreach and advocacy programs, organizational tools, and governance housekeeping for the 10.10 cycle, as well as goals for improving the translation of Ubuntu and related resources into many languages.

One notable project is an initiative to aggressively review patches submitted to the bug tracker, to show our appreciation for these contributions by processing them more quickly and completely.

Written by Matt Zimmerman

May 17, 2010 at 12:01

Lucid ruminations

A few months ago, I wrote about changes in our development process for Ubuntu 10.04 LTS in order to meet our goals for this long-term release. So, how has it turned out?

Well, the development teams are still very busy preparing for the upcoming release, so there hasn’t been too much time for retrospection yet. Here are some of my initial thoughts, though.

  • Merge from Debian testing – Martin Pitt has started a discussion on ubuntu-devel about how this went. For my part, I found that Lucid included fewer surprises than Karmic.
  • Add fewer features – This is difficult to evaluate objectively, but my gut feeling is that we kept this largely under control. As usual, a few surprise desktop features were implemented that not everyone is happy about, myself included.
  • Avoid major infrastructure changes – I think we did reasonably well here, though Plymouth is a notable exception. It resulted (unsurprisingly) in some nasty bugs which we’ve had to spend time dealing with.
  • Extend beta testing – This will be difficult to assess, though if 10.04 beta was at least as good as 9.10 or 9.04 beta, then it will have arguably been a success.
  • Freeze with Debian – Although early indications were good, this didn’t work out so well, as Debian’s freeze was delayed.
  • Visualize progress – The feature status page provided a lot of visual progress information, and the system behind it allowed us to keep track of work slippage throughout the cycle, both of which seemed like a firm step in the right direction. I’m looking forward to hearing from development teams how this information helped them (or not).

A more complete set of retrospectives on Lucid should give us some good ideas for how to improve further in Maverick and beyond.

Update: Fixed broken link.

Written by Matt Zimmerman

April 20, 2010 at 09:23

Ubuntu Inside Out – Free Software and Linux Days 2010 in Istanbul

In early April, I visited Istanbul to give a keynote at the Free Software and Linux Days event. It was an interesting challenge: my first visit to Turkey, and my first experience presenting with simultaneous translation.

In my new talk, Ubuntu Inside Out, I spoke about:

  • What Ubuntu is about, and where it came from
  • Some of the challenges we face as a growing project with a large community
  • Some ways in which we’re addressing those challenges
  • How to get involved in Ubuntu and help
  • What’s coming next in Ubuntu

The organizers have made a video available if you’d like to watch it (WordPress.com won’t let me embed it here).

[Image: My first game of backgammon]

Afterward, Calyx and I wandered around Istanbul, with the help of our student guide, Oğuzhan. We don’t speak any Turkish, apart from a few vocabulary words I learned on the way to Turkey, so we were glad to have his help as we visited restaurants, cafes and shops, and wandered through various neighborhoods. We enjoyed a variety of delicious food, and the unexpected company of many friendly stray cats.

It was only a brief visit, but I was grateful for the opportunity to meet the local free software community and to see some of the city.