We'll see | Matt Zimmerman

a potpourri of mirth and madness

Archive for 2008

The gift of constructive criticism

Giving and receiving feedback is a key part of working in a team, whether as a professional colleague, a volunteer contributor or a friend.  It gives us the opportunity to see ourselves as others see us, to identify our blind spots, and to become better at what we do.

However, it doesn’t always go as planned.  Even otherwise accurate criticism, when delivered without appropriate care, can have the opposite effect.  It makes us feel defensive and undervalued, and gives us no incentive to change our behavior.  It can distract us from our own problems, and lead us to focus instead on the person giving the feedback.

Here are some practical techniques which I have found to make a positive difference in giving and receiving criticism:

  • Ask questions: Don’t assume that you already know what the problem is.  Often, you don’t.
  • Provide context: It’s easy to focus too much on the problem at hand, and neglect to put it in perspective.  If you only ever hear from me when something is wrong, you probably won’t receive my feedback with openness and acceptance.  Therefore, try to present criticism in the context of the bigger picture.  Take the opportunity to share your overall view of the situation, particularly where it’s good, before analyzing what’s wrong with it.  Many promote the simple idea of a “sandwich” which surrounds criticism on both sides with positive messages, though I think it’s more important to provide relevant perspective than to simply balance positive and negative commentary.  Contrast “This function is buggy and should be rewritten” with “My understanding of this module is that it should work this way, and for the most part, it seems to, but this function doesn’t work the way I expect, and it seems like a bug to me.”
  • Explain why: If you feel that a change is needed, provide rationale for it.  You may have already worked out for yourself what is wrong and how to fix it, but if I haven’t been a part of that process, I won’t understand why it seems important to you that I change what I’m doing.  Therefore, take a moment to explain the benefits of  your proposed change from my point of view.  “I suggest doing it this way, because…”
  • Suggest how: It may not be obvious to me how to implement the change you’ve proposed, and this can give the impression that you’re oversimplifying the situation, or don’t understand what you’re asking me to do.  Note that it’s usually appropriate to ask first whether I want this type of help, as unsolicited advice may be unwelcome.  Contrast “You never answer my emails” with “Are you having trouble keeping up with the volume of email you receive?  I have a similar problem, and what worked for me was…”
  • Show support: Position yourself firmly on the same side, not in opposition.  The problem is the enemy, and you should aim to work cooperatively with your colleague to address it.  Make sure they know that they’re not on their own, that you’re available to follow through and work with them to improve the situation.  Again, make sure you have their permission to help.  Contrast “You really need to fix this” with “If you want, we can try this together next time, and see if the new approach works better.”

Written by Matt Zimmerman

December 24, 2008 at 14:57

Posted in Uncategorized

Ubuntu quality: or, “but what about my bug?”

Leading up to the Ubuntu 8.10 release, Ubuntu developers and quality assurance engineers have been very busy sorting bugs, deciding what can and should be fixed for the final release, and what cannot.  They make these decisions by estimating the importance of each bug, identifying whether it is a regression, assessing the risk of potential fixes, and applying their best judgement.  Developers can then focus their efforts where they are most needed.

On the whole, I think that we do remarkably well at this.  In September, for example, the total number of open bugs in Ubuntu increased by only 70.  That may not sound like much of an achievement, until you consider that in the same time period, 7872 new bug reports were filed.  The other 7802, over 99% of the incoming volume, were resolved (some as duplicates, some as invalid, some fixed, and so on).

The news isn’t all good, of course.  There are currently over 46000 open Ubuntu bug reports in Launchpad.  Even at this impressive rate of throughput, and even if we were to freeze all development and stop accepting new bug reports entirely, I estimate it would take over half a year just to sift through the backlog of reports we have already received.  There is a lot of noise in that data.
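
To make the arithmetic behind that estimate explicit, here is a back-of-envelope sketch in Python (it assumes September’s throughput is typical and would hold steady):

    # Back-of-envelope: months to clear the backlog at September's
    # throughput, assuming no new reports arrive in the meantime.
    open_reports = 46000        # open Ubuntu bug reports in Launchpad
    resolved_per_month = 7802   # reports resolved during September
    print(round(open_reports / resolved_per_month, 1))  # ~5.9 months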

When 8.10 is released, as with each previous release, some users will be disappointed that it has a bug which affects them.  This is regrettable, and I feel bad for the affected users each time I read about it, but it is unlikely to ever change.  There will never be a release of Ubuntu which is entirely free of bugs, and every non-trivial bug is important to someone.

So, what do we do?  What should be our key goals where quality is concerned?  We don’t currently have a clear statement of this, but here’s a strawman:

Prioritize bug reports effectively.  It’s usually difficult to say whether a bug report is valid or serious until a human has reviewed it, so this means having enough people to review and acknowledge incoming bug reports, and helping them to work as efficiently as possible.  The Ubuntu bug squad is a focal point for this type of work, though a great deal of this is done by developers in their everyday work as well.  Projects like 5-a-day are a good way to get started.

Measure our performance objectively.  By tracking metrics for each part of the quality assurance process, we can understand where we need to improve.  The QA team has been developing tools, such as the package status pages, to collect hard data on bugs.
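
For a flavor of what collecting this kind of hard data can look like, here is a hedged sketch using launchpadlib, the Python client library for the Launchpad API.  The consumer name and the particular set of statuses counted are illustrative assumptions, not how the package status pages are actually built:

    # Count open Ubuntu bug tasks by status via the Launchpad API.
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('qa-metrics', 'production')
    ubuntu = lp.distributions['ubuntu']
    for status in ('New', 'Confirmed', 'Triaged', 'In Progress'):
        tasks = ubuntu.searchTasks(status=[status])
        print(status, len(tasks))  # collections report their total size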

Improve incrementally.  By minimizing regressions from one release to the next, and making some progress on old bugs, we can hope to make each Ubuntu release better, in terms of overall quality, than the one before it (or at least no worse).  The regression tracker (which will hopefully move to the QA site soon) will help to coordinate this effort.

Ensure that the most serious bugs are identified and fixed early.  It’s important to the success of the project that we continue to produce regular releases, and showstopper bugs risk delaying our release dates and adversely affecting the next release cycle.  The release management weather report is one tool which helps monitor this process, though a great deal of coordination is required in order to provide it with useful data.

Communicate about known bugs.  It is inevitable that there will be known bugs remaining in each release, and we should do our best to advise our users about them, including any known workarounds.  The Ubuntu 8.10 beta release notes are a good example of this.

I think that to do well in all of these areas would be a good goal for quality in Ubuntu.

Written by Matt Zimmerman

October 29, 2008 at 12:09

Neil Gaiman: Piracy vs. Obscurity

[Image credit: The Kitten's Toe]

On Friday, 24th October, I attended an Open Rights Group event where Neil Gaiman spoke about the impact of digital “piracy” on authors such as himself.  His thesis was that the free availability of content from books, even when it is contemporaneously for sale in print, is in fact beneficial for the creator.  This is, he said, because their greatest challenge, in a world crowded with writers, is to be “tasted” by potential readers and publishers.

He recounted a few of his own experiences with publishing free content on his website, noting that in each instance, the net effect was an increase in his book sales.  He explained that the free availability of books enables new readers to try them out, and (somewhat counter-intuitively) does not discourage potential buyers from buying them.  Some of the explanations he offered were:

Reading books is more pleasant than reading from a computer screen. This seems intuitively true for most people, as the practical advantages of books (low cost, durability, light weight, ease of use, and so on) outweigh those of digital content.  He quoted Douglas Adams as having said that “books are sharks”, highly evolved to suit their purpose, and that “nothing is better at being a book than a book”.  He acknowledged that this could change in the future, but pointed out that there were other ways that he could uniquely profit from his works, such as live readings.

Book buyers derive pleasure from the tangible experiences of owning books and passing them on to friends.  Digital storage and copying, while extremely efficient, do not evoke the same feelings as these physical acts.  Contrast browsing a bookshelf with scanning filenames, or receiving an email attachment with opening a neatly wrapped book.  They offer very different social and sensory experiences.

Readers buy books explicitly to support (“give back to”) the author.  Gaiman compared this to his experience of tasting many flavors of ice cream at Baskin-Robbins and then feeling compelled by guilt to purchase one.  I found this the least compelling of the three explanations, but he does seem to have many loyal readers.

I think there is some truth in each of these, and would be interested to see research on this subject.  What does motivate buyers, and does this change when they are aware of corresponding content which is free?

There were several good questions and comments, though I didn’t have the opportunity to present one.  This is because I didn’t put my hand up.  I have always been hesitant to ask questions from within an audience.  This, for some reason, makes me even more self-conscious than addressing an audience, perhaps because many of the people are behind me.

I wanted to pose a question about the impact of technological advancement.  The model that Gaiman described for digital books relies on the existence of related works and experiences which cannot be digitally reproduced: physical books, live performances, autographs, and so on.  The technological trend, however, is that digital experiences are becoming richer and richer, and science fiction writers have long extrapolated that they will become indistinguishable from first-hand sensory experiences.

It will eventually be possible to reproduce the experience of browsing a bookshop, wandering around an art museum, or attending a musical performance with such fidelity that it can be exchanged as easily as a photograph.

What will be left to sell when a digital reproduction is virtually as good as the real thing?  How will creativity be rewarded in a world where most of today’s creative works are merely information, and information is truly free?

This situation exists today for the creators of digital works, and the answers are unclear.  The growth and diversity of these works will depend on whether we find ways to sustain their creators.  Tools like the GNU GPL and Creative Commons BY-NC-SA help us to define boundaries around our work, but the system as a whole is not yet well defined or understood, particularly where free software is concerned.

There are many experiments underway in free software economics.  Canonical sells services associated with the software we distribute.  Some developers publish their work in hope of earning a reputation, followed by a job, in much the same way that Gaiman describes.

How about you?  How will your free software works sustain you, so that you can continue to create?

Written by Matt Zimmerman

October 27, 2008 at 12:00

Posted in Uncategorized

The free software ecosystem and its denizens

Free software is a remarkable phenomenon.  It is a highly evolved form of collaboration: compared to other creative endeavors, free software developers all over the world are able to work together on a project with surprisingly little friction.  It is a grassroots political movement which has grown from small online communities to span geographical and national boundaries.  It is a multicultural social group with unique and diverse characteristics.  It has spawned a variety of self-governing organizations and successful corporations.

It is also an interesting example of a gift economy.  In general, participants in free software donate their work to the greater good, with no expectation of an exchange of value.  Some contributions are rewarded by social or professional recognition, where the author achieves standing among their peers or receives gratitude from recipients of their work.  Others indirectly evoke rewards in kind, such as where the creator of a program is rewarded by contributions from others which improve it further.  Some are works for hire, where a corporation commissions a contribution through its employees, in pursuit of its own aims.  There are other types of exchanges where I do not personally understand what motivates the contributor.

There are many recognized roles in free software, but they can be broadly classified into three types:

Developers are the heart of this economy.  They are continuously creating and improving free software technology, and publishing their source code for other developers to use and learn from.  Some developers write one program and then vanish from the community, while others contribute to many different projects over the course of decades.  Highly effective developers are celebrities in the free software community.

Users, also known as “people”, are the reason why software is written.  Their needs and wants determine which software is considered valuable.  Many of them also contribute directly to free software in one way or another, by promoting awareness, providing testing and feedback, supporting other users, writing documentation, or building communication links.  Historically, most users of free software were also its developers, but today, this is no longer true, and millions of people use free software who are not developers.

Packagers connect these two groups.  They gather up the source code produced by developers, wrap it in standardized packaging, and bundle it into collections (distributions) designed to meet the needs of users.  Users experience free software almost exclusively through a distribution.  As the free software stack has grown in size and complexity, so have distributions, and the maintenance of a modern distribution is a large-scale development project in itself: selecting appropriate software and versions, getting the lot working smoothly together, and releasing it in the form of a product which is accessible to users.  Packagers create only a small fraction of the software included in their distribution, but they define several key aspects of “what it is like” to use it.  Users most strongly associate their experience of free software with a distribution.

If free software were film, developers would be some combination of writer and cast, creating and expressing characters and a story.  Users would be viewers (including critics and fans), who receive and interpret the work.  Packagers would be the film crew, realizing the production on film so that it can be seen.

If free software were food, developers would be chefs, developing recipes and cooking.  Users would eat and critique the dishes.  Packagers would be restaurateurs, serving customers and creating an environment in which they can experience the food.

Neither of these analogies is very complete.  In particular, they fail to capture the strange loops of free software.  Every developer is also a user, reliant upon thousands of other programs, received in packaged form, in order to do their development work.  Every team of packagers develops some software in order to make their distribution work, and they use the distribution itself in order to do so.

Distributions, and the integration work that they do, are a critical part of this ecosystem.  Many of us would not be using free software today if not for the efforts of projects like Debian, whose mission is to produce a complete system out of free software created by others.  In my case, Debian provided both the means and the motivation for my contributions to free software, and later made Ubuntu possible.

Strong, productive relationships between these groups are essential to continuing the growth and development of free software.  Whichever groups you’re part of, get to know your counterparts in other groups.  Talk to the people who are packaging your software, writing the software you package, using your software or packages.  Learn about the problems they face and how you can help each other.  Don’t assume that this communication is someone else’s job: reach out and make it happen.

Written by Matt Zimmerman

October 7, 2008 at 12:46

Death, taxes and television

I’ve lived in a flat in Islington for about a year now, and in that time I’ve received many letters demanding money.  Annual invoices for council tax recommend that I pay by direct debit.  Flat-rate water bills arrive quarterly.  I’m asked to provide a gas meter reading from time to time.  Two electricity suppliers are still arguing about which of them supplies electricity to the flat, and they both demand payment.

The most insistent, the most colorful, and the most bizarre of all are the letters regarding TV licensing.  I don’t own a television set, nor is one provided by the landlord.  The only television programs I watch are those on rented DVDs, which I view using a computer.  According to their own website, I am not obliged to pay them anything.

Just another bill

Naturally, their first attempt was to send an invoice, which I declined to pay, as I hadn’t ordered anything from them.  This sort of behavior is known as mail fraud in the US.  Scam artists send thousands of invoices to random people, some small percentage of whom pay them or are tricked into agreeing to do so.

WARNING AGAINST UNLAWFUL ACTION

Next, they began to send warning letters.  These, at first glance, accuse the recipient of illegal activity, and “strongly advise” the purchase of a license.  They quote statistics showing how efficient they are at catching “evaders”, even those in the recipient’s own neighborhood!

At this point, I phoned and told them that I did not require a license and would appreciate it if they would stop sending the letters.

Knock, knock

Some time later, a man knocked on our door on a weekend afternoon to investigate.  He stood at the doorstep and asked whether there was a television in the house, and a few other things, and left.

OFFICIAL WARNING

The latest letter threatens us with a “full investigation of the above address” and a “fine of up to £1000”.  This is because “there is still no record of a TV Licence at this property, despite our previous letters.”  They invite me to call them (again) to update their records if I do not require a license.

Given that we have already been visited by an investigator, I wonder what this “full investigation” will entail.  Questioning my neighbors?  Surveillance?  Wiretapping?  Midnight raids?

Woe to those who dare defy the TV police.

Update: On my way home tonight, I saw this advertisement at Charing Cross station.  Watch out!

Written by Matt Zimmerman

September 27, 2008 at 12:28

Posted in Uncategorized

Plumbers Conference retrospective

The Linux Plumbers Conference has ended, and on the whole it was a productive forum despite its rocky start.

One of the reasons for this was that there was a strong presence from the kernel community, carried over from the Kernel Summit.  Since the purpose of Plumbers was to explore problems which span subsystems, having these folks in the room was a key factor.  I’m told that it’s unlikely that the two conferences will be colocated next year, and I hope that Plumbers will succeed in drawing participation from kernel developers anyway.

There was a strong sense of cooperation among the different distributions, companies and projects which were represented, though less so between the kernel developers and userspace developers.  These two groups would benefit from a better understanding of one another’s problems, and I hope that can be achieved through cross-participation in working events like Plumbers.

It’s common to picture the ecosystem as a stack or a sphere with some components at the bottom/center and others at the periphery, but these simplistic metaphors belie the complex and non-linear interdependencies which exist between projects.  The kernel, the toolchain, the “plumbing”, applications, distributions, companies, and so on, don’t form a neat diagram, and each performs an essential function in making the overall ecosystem work.

I had a chance to talk briefly with Greg KH about his concerns and the way they were expressed, and have hope that some goodwill can be fostered there.  I introduced him to Pete, who manages our kernel team, as a point of contact for a more nuanced dialog about our working relationship with the kernel community.

The discussions about the boot process were particularly interesting, as a great example of a problem which needs broad cooperation in order to solve effectively.  For example, as a result of comparing the (quite different) bootcharts between Fedora and Ubuntu, developers from both distributions identified areas where significant gains were clearly possible without deep structural changes.  Scott has isolated a long-standing issue which made our module loading sequence in Ubuntu much slower than it could be.

In between talks, I did some work on integrating apport with kerneloops.  The result is that kernel oopses can be captured as Apport problem reports with full detail, and semi-automatically filed as bugs, in addition to being counted on kerneloops.org’s statistics.  I’ve put an initial version into Ubuntu and sent the patch to Arjan for merging upstream, and we’re exploring the addition of kerneloops to our default installation to provide testing feedback to kernel developers from our users.
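
For those unfamiliar with the apport hook mechanism which this integration builds on: a hook is a small Python file defining add_info(report), which apport calls to enrich a problem report before it is filed.  Below is a minimal sketch; the log path, field name and tag are illustrative assumptions, not the actual kerneloops hook:

    # Sketch of an apport hook that attaches oops detail to a report.
    import apport.hookutils

    def add_info(report):
        # Attach the kernel log so the bug carries the full oops text
        apport.hookutils.attach_file_if_exists(
            report, '/var/log/kern.log', 'KernLog')
        # Tag the report so triagers can find oops-related bugs easily
        report['Tags'] = (report.get('Tags', '') + ' kernel-oops').strip()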

Written by Matt Zimmerman

September 20, 2008 at 18:19

Posted in Uncategorized

Greg Kroah-Hartman’s Linux Ecosystem

In the opening keynote at the Linux Plumbers Conference, Greg Kroah-Hartman delivered a talk entitled “The Linux Ecosystem, where do you fit in it?”

There were, let’s say, a few elements of it which I found objectionable.

The central issue, of course, was that he devoted a large portion of the talk to showing that Canonical contributes fewer patches to the Linux kernel than many other companies.  While it’s somewhat flattering for Canonical’s role in the community to be a headline topic for a conference like this, the message that he chose to deliver was a negative one.  He presented list after list of contributors, ranked by number of patches, and pointed out how low Canonical was on each one.

I approached him immediately after his talk to suggest that we have a conversation about the topics he raised, but he wasn’t interested in talking with me at that time.  I made a standing offer to talk with him at any time during the three-day conference, and hope that we can get to the bottom of this.  Until then, I’m not sure what exactly he’s trying to accomplish.

Meanwhile, I’d like to make a few points.  I’ll start with a disclaimer, something that Greg chose to omit from his presentation: I work for Canonical, and am one of the founding members of the Ubuntu project.

Greg’s view of the ecosystem is odd

Greg considers the “Linux ecosystem” to be GCC, binutils, the Linux kernel, X.org, and a handful of other projects.  He disregards most of the desktop stack (including GNOME and KDE), all desktop and server applications, and most anything else that is recognizable to an end user as “Linux”.

Some members of the audience picked up on this and commented.  His justification for this was that these other components are not specific to Linux.  “Any contribution to GNOME also benefits OpenSolaris”, says Greg.  Apparently, this means that GNOME isn’t an important part of Greg’s Linux ecosystem.  “I had to draw the line somewhere”, he says.

This is not the Linux ecosystem that I use and contribute to.

Greg’s figures are wrong

The first slide in his presentation acknowledged that he had miscounted Canonical’s contributions to the Linux kernel.  As he freely admits, his method is not an exact science and there were many other errors.  However, given that he intends to use these statistics to attack Canonical, he should take more care in compiling them.

His original claim, given at a Google tech talk in June 2008, was “Canonical doesn’t give back to the community”.  He supported this by saying that “Canonical made six changes to the kernel in the last five years.”

Greg now states that Canonical has in fact contributed in excess of 100 patches.  This means that his raw data for the kernel was off by more than an order of magnitude.

His LPC presentation also put forth some new claims regarding contributions to other projects.  In particular, he lambasted Canonical for not contributing to binutils at all (zero patches).  What that actually means in terms of our position in the ecosystem is debatable, but numbers are not.  It’s true that Canonical does not contribute as much to binutils as Red Hat does (and more on that later), but as Kees Cook pointed out after the presentation, he personally contributed a patch (now merged) which credited Canonical as his employer, which fell within the date range of Greg’s analysis.

Of course, none of these errors impact his fundamental conclusion, which is that Canonical engages in relatively small-scale development compared to Red Hat and Novell.  No one is disputing that.  However, the fact remains that his data is inaccurate.

Greg is failing to disclose his bias

Greg is, of course, a well respected contributor to the Linux kernel, having sustained a significant level of contribution over a period of several years.  I’m grateful to him for his technical contributions, which of course benefit Ubuntu as a consumer of the Linux kernel.  However, his contribution to the public dialog about the Linux ecosystem leaves much to be desired.

We all have bias, and the best that we can do is to disclose it so that others can take it into account when hearing our ideas.  Unlike the presentations given by other Novell employees at this and other conferences, Greg’s slides omitted the Novell logo.

Novell is, of course, a competitor of Canonical, being an operating system vendor (and a large one at that).  To attack his corporate competitors without acknowledging his affiliation is in poor taste.

Greg’s comparisons are bogus

The fundamental argument he makes is that Canonical doesn’t contribute as much as Red Hat, Novell and many other organizations which he names.  This is absolutely true.  He says that it’s unethical to claim more contributions than one has made, and he’s absolutely right there as well.

However, no one, certainly not Canonical, has ever claimed that Canonical does as much Linux development as Red Hat or Novell.  He’s refuting a claim which has, quite simply, never been made.

Canonical is primarily a consumer of the Linux kernel.  It is one of the building blocks we need in order to fulfill our primary mission, which is to provide an operating system that end users want to use.  It is, on the whole, a good piece of software which meets our needs well.  We routinely backport patches from newer kernels, and fix bugs which are particularly relevant to us, but our kernel consists almost entirely of code we receive from upstream.

Why, then, does Greg feel that Canonical should be expected to make more changes to the Linux kernel?

Is it because Ubuntu is a very popular system, with a lot of users?  It is that, but most people who use Linux aren’t kernel developers, so a large user population doesn’t translate to a lot of Linux kernel patches.

Is it because he thinks Canonical is making a lot of money off of the Linux kernel, and should give some of that back?  We make no secret of the fact that Canonical as a company is not yet earning a profit.  We make a promise to our community that we will never charge money for Ubuntu.

Is it because he thinks Canonical developers are writing a lot of patches and not contributing them?  If he does, he hasn’t compared our kernel tree with Linus’.

Why then?

Greg’s approach is not constructive

If we give Greg the benefit of the doubt, and assume that he doesn’t merely have an axe to grind, then there must be some genuine concern behind all of this.  He must feel that Canonical is somehow not doing the right thing.

If that’s true, why hasn’t he ever talked to me about it?  He has my email address, and we’ve exchanged mail and spoken on the phone before.  Why am I hearing about this through presentations given to Google employees, posted on YouTube, or delivered to audiences of kernel developers?

To present his commentary in this way is indefensible.  LPC is promoted as a productive community event aimed at solving problems, and Greg has used his voice as a speaker to promote a much less honorable agenda.

When this sort of thing happens on mailing lists, it’s called trolling.

Written by Matt Zimmerman

September 17, 2008 at 22:10

Posted in Uncategorized

Toward a free web

The web is no longer just a collection of sites one can visit with a browser.  It’s increasingly a rich set of programming interfaces upon which applications can be built.  This, at a technological level, is very good news.  Interfaces make possible a greater variety of applications.  This is an old idea for software in general, but a relatively new trend on the web.

Application cooperation

Remember when desktop programs didn’t talk to each other?  Incompatible file formats, proprietary development, primitive multitasking features and monolithic design kept each program in its own silo.  Each one was only as good as what came in the original box, and if you wanted to do more, you had to look for a better program.  Usually, the one you found, with just the right feature you needed at that moment, was missing several others you couldn’t live without.  Power users would keep several different programs for doing different variations of the same task: one word processor had a great macro facility, while the other had beautiful fonts or could open the right file formats.

Proprietary systems were evolving toward a more cooperative model in the 1980s, when technologies like DDE enabled applications to work together in meaningful and standard ways.  Although its capabilities were limited, the benefit to end users was substantial.  There is of course a lot more which has happened in this area since then, but meanwhile, the modern free software movement has brought cooperation between programs, and programmers, to another level entirely.  A free program could be improved to add “just one more feature” which would otherwise have required searching for (and perhaps buying) a completely different program.  Sometimes, new capabilities could even be “lifted” from one program to another, or shared as a library between multiple programs.  An improvement in one program could benefit other applications as well.

In short, free software programmers were no longer fundamentally limited by their own time and skill in writing programs: they could build on the work of others.  The programmable web, in some ways, promises even greater opportunities to do this.  The explosion of web APIs, and applications built on them, speaks for itself.  However, this freedom is in some ways more reminiscent of the limited power of proprietary application systems than of free software.

Open interface, closed application

Most web APIs seem to be open for use by anyone, within reasonable limits.  Given that they are, in effect, licenses to use someone else’s computing resources, it’s necessary to regulate their use, and appropriate to charge for the service.  However, non-trivial applications built on them are usually not themselves open.  This means that while application programmers can benefit from the availability of the API, they can’t build on each other’s work.

I can use the API to create a new application, but I can’t use an existing application to make a better one.

Open interface, closed implementation

Similarly, the backend software which provides these useful interfaces is usually proprietary.  Sometimes this software is in fact free, but the API provider chooses not to share their version, exploiting a loophole in copyleft schemes which would otherwise require that it be shared.

I can use the API to create an application, but I can’t use it to build a new and better interface.

Developer freedom

What would be the characteristics of an online software ecosystem which promotes innovation at the same level that free software has done?  There are those who say that freedom on the web is about data: who owns the data, and what can they do with it?  These are interesting questions, but I’m more interested in how to promote the development of more and better software, and I think data is only a part of that picture.

A free web, to me, would support innovation through building on existing APIs, backend software and applications.  Everything new which was created would create new possibilities for further development.  The four freedoms largely apply, and for largely the same reasons.  The free software concept is not a complete solution, though, and needs to be adapted significantly for this purpose.  I can think of a few obstacles:

Copying software is virtually free, but providing a web service is not. One of the key benefits of online APIs is that they’re consistently available at a globally addressable location, and that requires high-availability hosting services.  Some providers, like Google App Engine, already offer a basic hosting service for free, but I think these are more likely loss leaders than sustainable free services.

Similarly, many interesting web applications, particularly “user-created content” sites, would be less interesting if they were decentralized.  When the data is fragmented, the overall system becomes less valuable.  Imagine if Facebook were free software, and there were hundreds of smaller, specialized Facebooks rather than one large one.  The very thing which makes Facebook interesting (the ability to connect to other people in a variety of ways) would be significantly diminished unless these different instances were connected.  In order to support decentralization, applications need to share data.  Unfortunately, building such applications with today’s tools is complex and difficult to get right.

What else is different?  What stands in the way of software freedom for the web?

Written by Matt Zimmerman

September 14, 2008 at 19:51

Posted in Uncategorized

Bring the needles and the knives

Well, maybe not the knives.  I visited an acupuncturist today, in the latest scouting mission in my campaign against RSI.  I’ve experienced chronic pain and discomfort in my arms and wrists for years now, and have tried various other types of physical therapy, but this was my first of this type.

I spent most of the session talking with the practitioner, explaining the history of my symptoms.  She was emphatic that I seek to fix the problem at its root by correcting the ergonomics of my workstations.  This is clearly necessary, but in spite of several rounds of experimentation I have not been able to solve it on my own yet.  Thankfully, she was able to recommend a colleague who specializes in this particular area of ergonomics, and I’ll see what comes of that.

The acupuncture treatment itself was a very curious affair.  The sensation of a needle penetrating the thick band of tension in my arm muscles was quite unfamiliar and difficult to describe.  It felt almost as if something were pressing hard on the entire length of the muscle.

It does seem to have relieved some of the tension, and I’m interested to see the effect of repeated treatments.  I was also left with a curious feeling of lightness which lasted the entire 20-minute walk home.  It was a little bit like having ingested caffeine.

My question of the evening: How long before we have input devices of comparable speed and accuracy which don’t abuse our bodies so?

Written by Matt Zimmerman

September 8, 2008 at 21:23

Posted in Uncategorized

Linux Plumbers Conference

I’ve decided (somewhat late) to attend the inaugural Linux Plumbers Conference in Portland.  It’s shaping up to be an interesting collection of people and topics.  I think the idea of a conference which spans the kernel/userland boundary is a useful one, though so far it’s pretty heavy on the kernel side, probably in large part due to overflow of topics (and kernel developers) from the preceding Kernel Summit.

I’ll be there to meet up with some far-flung Ubuntu folk and learn about the next round of hurdles which will be faced by integrators like us.

Written by Matt Zimmerman

September 5, 2008 at 20:49

Posted in Uncategorized
