We'll see | Matt Zimmerman

a potpourri of mirth and madness

Posts Tagged ‘Community’

Interviewed by Ubuntu Turkey

As part of my recent Istanbul visit, I was interviewed by Ubuntu Turkey. They’ve now published the interview in Turkish. With their permission, the original English interview has been published on Ubuntu User by Amber Graner.

Written by Matt Zimmerman

April 19, 2010 at 15:15

Ubuntu Inside Out – Free Software and Linux Days 2010 in Istanbul

In early April, I visited Istanbul to give a keynote at the Free Software and Linux Days event. This was an interesting challenge, because this was my first visit to Turkey, and my first experience presenting with simultaneous translation.

In my new talk, Ubuntu Inside Out, I spoke about:

  • What Ubuntu is about, and where it came from
  • Some of the challenges we face as a growing project with a large community
  • Some ways in which we’re addressing those challenges
  • How to get involved in Ubuntu and help
  • What’s coming next in Ubuntu

The organizers have made a video available if you’d like to watch it (WordPress.com won’t let me embed it here).

(Photo: my first game of backgammon)

Afterward, Calyx and I wandered around Istanbul, with the help of our student guide, Oğuzhan. We don’t speak any Turkish, apart from a few vocabulary words I learned on the way to Turkey, so we were glad to have his help as we visited restaurants, cafes and shops, and wandered through various neighborhoods. We enjoyed a variety of delicious food, and the unexpected company of many friendly stray cats.

It was only a brief visit, but I was grateful for the opportunity to meet the local free software community and to see some of the city.

A story in numbers

During a 24-hour period:

  • One person whose virtual server instance was destroyed when they invoked an init script without the required parameter
  • One bug report filed about the problem
  • 6 bug metadata changes reclassifying the bug report
  • 286 word rant on Planet Ubuntu about how the bug was classified
  • 5 comments on the blog post
  • 22 comments on the bug report
  • 2 minutes on the command line
  • 6 commands using UDD (bzr branch, cd, vi, bzr commit, bzr push, bzr lp-open)
  • One line patch
  • 99% wasted energy
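Those two minutes on the command line might have looked roughly like this. This is a sketch of the Ubuntu Distributed Development (UDD) workflow using the six commands listed above; the package name, file path and branch name are hypothetical stand-ins, not the actual ones from the bug:

```shell
# Hypothetical UDD workflow for a one-line fix to "mypackage"
bzr branch ubuntu:mypackage          # fetch the package source as a Bazaar branch
cd mypackage
vi debian/mypackage.init             # make the one-line change
bzr commit -m "Fail safely when the init script parameter is missing"
bzr push lp:~myuser/ubuntu/lucid/mypackage/init-fix
bzr lp-open                          # open the branch in Launchpad to propose the merge
```

The point of the tally stands on its own: the fix took an order of magnitude less effort than the argument about how to classify it.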

Written by Matt Zimmerman

April 8, 2010 at 09:08

Ada Lovelace Day 2010

Today is Ada Lovelace Day, “an international day of blogging to celebrate the achievements of women in technology and science”. I am taking part by blogging about women in these fields whose work has made an impression on me.

For Ada Lovelace Day 2010, I would like to honor two women in the free software community who have made a strong impression on me in the past year: Akkana Peck and Miriam Ruiz.

Akkana Peck

I became aware of Akkana’s work through Planet Ubuntu Women, which is something of a year-round Ada Lovelace Day as it aggregates the blogs of many of the women of the Ubuntu community. As a senior engineer at Netscape, Akkana was instrumental in the development of the Linux port of Mozilla. Having developed a number of extensions to the GIMP, she went on to write the book on using it as well.

Her website, blog and Twitter feed include top-notch content on both free software and astronomy.

Miriam Ruiz

I met Miriam through the Debian Women project. Her profile there seems to be out of date, as it says she is not an official Debian developer yet, but she was officially recognized over two years ago as a trusted member of the team, through Debian’s notoriously thorough New Maintainer process. Miriam has been particularly interested in game development, and as the founder of the Debian Games Team has been responsible for countless hours of pleasant distraction for Debian users and developers. Thanks to her leadership, the Debian Games Team has been exemplary in terms of cross-participation between Debian and Ubuntu.

Miriam blogs and tweets on free software topics in English and Spanish.

Written by Matt Zimmerman

March 24, 2010 at 11:00

QCon London 2010: Day 3

The tracks which interested me today were “How do you test that?”, which dealt with scenarios where testing (especially automation) is particularly challenging, and “Browser as a Platform”, which is self-explanatory.

Joe Walker: Introduction to Bespin, Mozilla’s Web Based Code Editor

I didn’t make it to this talk, but Bespin looks very interesting. It’s “a Mozilla Labs Experiment to build a code editor in a web browser that Open Web and Open Source developers could love”.

I experimented briefly with the Mozilla-hosted instance of Bespin. It seems mostly oriented toward web application development, and still isn’t nearly as nice as desktop editors. However, I think something like this, combined with Bazaar and Launchpad, could make small code changes in Ubuntu very fast and easy to do, like editing a wiki.

Doron Reuveni: The Mobile Testing Challenge

Why Mobile Apps Need Real-World Testing Coverage and How Crowdsourcing Can Help

Doron explained how the unique testing requirements of mobile handset applications are well suited to a crowdsourcing approach. As the founder of uTest, he explained their approach to connecting their customers (application vendors) with a global community of testers with a variety of mobile devices. Customers evaluate the quality of the testers’ work, and this data is used to grade them and select testers for future testing efforts in a similar domain. The testers earn money for their efforts, based on test case coverage (starting at about $20 each), bug reports (starting at about $5 each), and so on. Their highest performers earn thousands per month.

uTest also has a system, uTest Remote Access, which allows developers to “borrow” access to testers’ devices temporarily, for the purpose of reproducing bugs and verifying fixes. Doron gave us a live demo of the system, which (after verifying a code out of band through Skype) displayed a mockup of a BlackBerry device with the appropriate hardware buttons and a screenshot of what was displayed on the user’s screen. The updates were not quite real-time, but were sufficient for basic operation. He demonstrated taking a picture with the phone’s camera and seeing the photo within a few seconds.

Dylan Schiemann: Now What?

Dylan did a great job of extrapolating a future for web development based on the trend of the past 15 years. He began with a review of the origin of web technologies, which were focused on presentation and layout concerns, then on to JavaScript, CSS and DHTML. At this point, there was clear potential for rich applications, though there were many roadblocks: browser implementations were slow, buggy or nonexistent, security models were weak or missing, and rich web applications were generally difficult to engineer.

Things got better as more browsers came on the scene, with better implementations of CSS, DOM, XML, DHTML and so on. However, we’re still supporting an ancient implementation in IE. This is a recurring refrain among web developers, for whom IE seems to be the bane of their work. Dylan added something I hadn’t heard before, though, which was that Microsoft states that anti-trust restrictions were a major factor which prevented this problem from being fixed.

Next, there was an explosion of innovation around Ajax and related toolkits, faster JavaScript implementations, infrastructure as a service, and rich web applications like GMail, Google Maps, Facebook, etc.

Dylan believes that web applications are what users and developers really want, and that desktop and mobile applications will fall by the wayside. App stores, he says, are a short term anomaly to avoid the complexities of paying many different parties for software and services. I’m not sure I agree on this point, but there are massive advantages to the web as an application platform for both parties. Web applications are:

  • fast, easy and cheap to deploy to many users
  • relatively affordable to build
  • relatively easy to link together in useful ways
  • increasingly remix-able via APIs and code reuse

There are tradeoffs, though. I have an article brewing on this topic which I hope to write up sometime in the next few weeks.

Dylan pointed out that different layers of the stack exhibit different rates of change: browsers are slowest, then plugins (such as Flex and Silverlight), then toolkits like Dojo, and finally applications, which can update very quickly. Automatically updating browsers are accelerating this, and Chrome in particular values frequent updates. This is good news for web developers, as this seems to be one of the key constraints for rolling out new web technologies today.

Dylan feels that technological monocultures are unhealthy, and prefers to see a set of competing implementations converging on standards. He acknowledged that this is less true where the monoculture is based on free software, though this can still inhibit innovation somewhat if it leads to everyone working from the same point of view (by virtue of sharing a code base and design). He mentioned that de facto standardization can move fairly quickly; if 2-3 browsers implement something, it can start to be adopted by application developers.

Comparing the different economics associated with browsers, he pointed out that Mozilla is dominated by search through the chrome (with less incentive to improve the rendering engine), Apple is driven by hardware sales, and Google by advertising delivered through the browser. It’s a bit of a mystery why Microsoft continues to develop Internet Explorer.

Dylan summarized the key platform considerations for developers:

  • choice and control
  • taste (e.g. language preferences, what makes them most productive)
  • performance and scalability
  • security

and surmised that the best way to deliver these is through open web technologies, such as HTML 5, which now offers rich media functionality including audio, video, vector graphics and animations. He closed with a few flashy demos of HTML 5 applications showing what could be done.

Written by Matt Zimmerman

March 12, 2010 at 17:14

linux.conf.au 2010: Day 5 (morning)

Andrew Tridgell: Patent defence for free software

I missed the start of this talk, but when I arrived, Andrew was explaining how to read and interpret patent claims. This is even less obvious than one might suppose. He offered advice on which parts to read first, and which could be disregarded or referred to only as needed.

Invalidating a patent entirely is difficult, but because patents are interpreted very narrowly, inventions can often be shown to be “different enough” from the patented one.

Where “workarounds” are found, which enable free software to interoperate or solve a problem in a different way than described in a patent, Andrew says it is important to publish them far and wide. This helps to discourage patent holders from attacking free software, because the discovery and publication of a workaround could lead to them losing all of their revenue from the patent (as their licensees could adopt that instead and stop paying for licenses).

Michael Koziarski: Lessons learned from a growing project

Michael, a member of the Rails core team, introduced himself as a pragmatist who is not interested in the principles of free software, only in working with the best tools he can find (many of which are actually proprietary). He gave an overview of what Rails is and where it came from, and a list of lessons he learned from its history.

Michael says that users make the best contributors, because they work to address user needs (which they understand first-hand). He contrasted this with developers who join the project to experiment with the latest technology or rewrite code without good reason. Therefore, in order to gain more contributors, it is important to market the project and attract more users.

He downplayed the conventional wisdom of “release early and often”, recommending a release early enough that there is plenty of incomplete work which contributors can help with, but not so early that the software is useless. In other words, release early, but not too early, and often, but not too often.

As is becoming thematic for the free software community, he recognized the necessity of dealing appropriately with people who do not advance the aims of the project. His example was people who don’t really want to use the software, because there is something about it they don’t like. Unless this one thing is changed, they say, it is of no interest to them. They may imply, or even state outright, that if the project changed in some way, they would join enthusiastically. Michael says that this is often untrue, and that even if they get the feature they want, they will not become valuable contributors. He also spoke of addressing trolls, not just the obvious ones, but more respectable-looking pundits as well.

Rails attracted many users early on because of its upstart status, and Michael pointed out that these people later left the project as it became more mainstream. The same was true of contributors, who left for other projects for their own reasons, to learn new things or explore a different direction.

Over time, the pool of willing volunteers in the Rails community grew much larger than the corresponding stack of “work to do”. Contributors became furious because their contributions were neglected; they threatened to fork the project and left the community. He stressed the importance of avoiding this scenario by tending to these contributors and their contributions.

He advised (mostly) ignoring your project’s competitors as a means of staying focused on the project’s core vision. In particular, he says that projects which define themselves in terms of their competition (“foo is like Rails, but…”) are not worth paying attention to.

He praised Rails’ use of a more permissive (non-copyleft) license, because it encouraged the growth of an ecosystem of hosting providers and tools. I didn’t quite follow his argument as to why this was.

Some of Michael’s lessons resonated with my experience of Ubuntu’s growth, while others did not. Regardless, it was useful to hear his perspective, and the differences may highlight the differing characters of the two projects.

Written by Matt Zimmerman

January 21, 2010 at 23:20

linux.conf.au 2010: Day 4

Lindsay Holmwood: Flapjack

Lindsay introduced Flapjack, which is a monitoring system designed to meet the scalability and extensibility requirements of cloud deployments. It shares the Nagios plugin interface, and so can use the same checks. It uses beanstalkd as a central message queue to coordinate the work of executing checks, recording results and making the appropriate notifications. Each of its components (worker, notifier, database) can be extended or replaced using an API, providing a great deal of flexibility.

Jeremy Allison: Microsoft and Free Software

Jeremy took us through Microsoft’s recognition of, and response to, the threat of free software to their monopoly position. After reviewing the major legal battles of this ongoing war (and the metaphor is apt), he says that Microsoft is turning to patents in an attempt to split the free software community and to earn revenue from the use of free software. Jeremy predicts that the outcome will be a never-ending conflict.

The key conflicts are likely to be around netbooks, mobile phones and appliances. How should the free software community respond?

We could ignore it, and keep making free software under copyleft licenses. Jeremy points out that this is perhaps our most effective strategy in the long run, to stay focused on the vision of a free software world.

We can continue to pressure governments and corporations to adopt truly open standards, and to investigate and challenge monopolies. Transparency is key to these efforts, as “elephants like to work in the dark” (Microsoft being “the elephant in the room”).

By lobbying against software patents, we can hope to contain the US software patent system, keeping its practices from spreading to the rest of the world. Otherwise, the patents rapidly accumulating in the US could suddenly and dramatically spread.

We might even convince the likes of Microsoft that patents, and patent trolls, represent a greater harm than good.

In response to direct patent attacks, we should search for prior art and attempt to undermine unjust patents. He also suggests calling out Microsoft employees on the company’s actions, to promote awareness particularly in the context of free software conferences.

He closed with a hope that Microsoft could change, citing IBM as having been “as feared and hated as Microsoft is today”.

Neil Brown: Digging for Design Patterns

Neil explored various design patterns in the kernel in order to illustrate how they are discovered, what their important attributes are, and how to use them effectively.

His examples were a binary search, “goto err”, accessor functions and kref. Naming patterns is important, especially getting that name into the code itself, so that it helps to cross-reference use, implementation and documentation of the pattern (e.g. uses of the kref pattern are sprinkled with the word “kref”). A successful pattern can both help to find bugs (this binary search doesn’t look the same as that one…why?) and to avoid bugs (by getting it right the first time).
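As a rough illustration of the “goto err” pattern Neil discussed, here is a minimal sketch. The structure and resource names are hypothetical, not taken from the kernel; in real kernel code the same shape appears around kmalloc(), request_irq() and similar calls:

```c
#include <stdlib.h>

/* Hypothetical device context; in the kernel these would be
 * driver-specific resources rather than plain buffers. */
struct device_ctx {
    char *rx_buf;
    char *tx_buf;
};

int device_setup(struct device_ctx *ctx)
{
    ctx->rx_buf = malloc(512);
    if (!ctx->rx_buf)
        goto err;                /* nothing allocated yet */

    ctx->tx_buf = malloc(512);
    if (!ctx->tx_buf)
        goto err_free_rx;        /* unwind only what has succeeded so far */

    return 0;

err_free_rx:
    free(ctx->rx_buf);
err:
    return -1;
}
```

The pattern’s value is exactly what the talk described: because every function that acquires resources looks the same, a cleanup path that deviates from the shape stands out as a likely bug.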

Written by Matt Zimmerman

January 21, 2010 at 03:46

linux.conf.au 2010: day 3 (morning)

Benjamin Mako Hill: Antifeatures (keynote)

Mako delivered an entertaining and inspirational talk on antifeatures, those oddities which intentionally make technology less useful to its consumers (think DRM, though he provided a wide range of examples). Mako explained the main reasons why antifeatures exist, and how they are endemic to the business of proprietary software.

Mako offered a potential upside to antifeatures, which is that they can help the free software community to focus on fundamental concerns like autonomy, rather than (for example) the mechanics of licensing. Antifeatures can be used to explain to the uninitiated why software freedom is important to everyday folks, not just hackers.

Denise Paolucci and Mark Smith: Build Your Own Contributors, One Part At A Time

Denise and Mark provided a practical list of “dos” and “don’ts” for building a successful community based on respect, empowerment and collaboration. Much of this was elementary from an Ubuntu perspective, but they offered a variety of examples from Dreamwidth which were illustrative.

Their list of “three things to start right now”:

  1. appoint a “welcomer” and laud newcomers’ first contributions
  2. stop timing out on communication when people need responses from you
  3. have words with “that person” and let them know their behavior is not okay

Chris Double: Implementing HTML5 video in Firefox

I knew I liked this idea, but I didn’t realize how much I liked it until I watched Chris’ very informative talk. The promise of an open standard for embedding video is exciting enough, but the standard offers much more than basic embedding and playback controls. Chris demonstrated javascript-driven subtitles loaded from an SRT file, custom controls, copying and analysis of pixel data, replacing the background in a video using a chroma key technique, and even more impressive real-time special effects.

The initial implementation used the xiph.org reference libraries (libogg, libtheora, libvorbis) and PortAudio, but had some problems, including poor A/V synchronization. The second iteration used the higher-level libraries liboggz, libfishsound, liboggplay and libsydneyaudio, and was included in the Firefox 3.1 alpha and beta, but some limitations in liboggplay (A/V sync, chained Oggs, etc.) led to difficulty. There were also proof-of-concept implementations which used GStreamer on Linux, DirectShow on Windows, and QuickTime on Mac OS X, but these were hampered by codec plugin complications. In the end, they’ve gone back to using the xiph.org reference libraries (but with libsydneyaudio), though the GStreamer backend is still actively developed. Chris has published a series of articles on his blog on reading, decoding and synchronizing A/V streams using various libraries.

There are still some kinks to work out: the lack of indexes and the like in Ogg complicates seeking, calculating duration and so on, and there is no satisfactory solution for cross-platform audio. Rendering is not hardware accelerated yet because the video element is part of the HTML rendering pipeline.

It will be very powerful when it’s ready, though. Theora playback is supported in Chrome, Firefox and Opera today, and Daily Motion, Wikipedia and Archive.org are using it. I can’t wait to see the full API working well on a massive scale.

Written by Matt Zimmerman

January 19, 2010 at 23:11

linux.conf.au 2010: Day 1 (afternoon)

After lunch, I had a three-way conflict, as I wanted to attend talks on SVG as an alternative to Flash (Libre Graphics Day), GnuCash and domain-specific package management.

Sometimes, when I have a conflict like this, I try to attend the talk whose material is less familiar to me (in this case, probably the SVG/Flash one). However, since the talks are being recorded and made available on the Internet, this changes the dynamic a bit. I don’t have to miss out on watching anything, as I can download it later. So, it makes more sense for me to go where I can best participate, taking advantage of my presence at the conference.

Distro summit: Package management

So, I chose to attend the package management talk, as I might have something to contribute. It was about how to harmonize general distribution packaging mechanisms (dpkg, RPM, etc.) with special-purpose ones like those used by Ruby (gems), Lua (rocks), Perl (CPAN modules) and so on. The solution described employed a set of wrapper scripts to provide a standard API to these systems, so that they could be used by the distribution package manager to resolve dependencies.
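The wrapper-script idea might look something like the following. This is my own sketch of the concept, not the actual interface presented in the talk; the subcommand names are illustrative, and it assumes the RubyGems `gem` tool is installed:

```shell
#!/bin/sh
# Hypothetical wrapper giving a distribution package manager a uniform
# interface onto a language-specific packaging system (here: Ruby gems).
# A dpkg or RPM backend could call these subcommands to resolve
# dependencies without knowing anything gem-specific.
case "$1" in
    list)      gem list --remote --no-versions ;;   # packages available upstream
    installed) gem list --local --no-versions ;;    # packages present on this system
    install)   gem install "$2" ;;                  # install one package
    depends)   gem dependency "$2" ;;               # query its dependencies
    *)         echo "usage: $0 {list|installed|install NAME|depends NAME}" >&2
               exit 1 ;;
esac
```

An equivalent wrapper for CPAN, Lua rocks and so on would expose the same subcommands, which is what would let the distribution package manager treat them interchangeably.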

Due up next was Scott James Remnant’s talk on booting faster, but due to travel difficulties, he hadn’t arrived yet. Instead, we had a free-form discussion on various other challenges in the area of package management.

I took the opportunity to put forward an idea I had been thinking about for some time, which is that we may need to move beyond a “one size fits all” or “everything is a package” approach to package management. Modern package management systems are very capable, and solve a range of difficult problems with a high degree of reliability. The cost of all of this functionality is complexity: making packages is difficult. The system can be made to work for a wide range of software, but the simple case is often not very simple.

I also made the point that there are non-technical challenges as well: it is difficult for developers and ISVs to understand the ecosystem of distributions, and even more difficult to succeed in making their software available in “the right way” to Linux users. The obstacles range from procedural to cultural, and if we only approach this as a technical problem, we risk adding more complexity and making the situation worse.

The opportunity to have this kind of participatory discussion really validated my decision about how to choose which talk to attend.

Liz Henry: Code of our Own

Back in the Haecksen/LinuxChix miniconf, Liz Henry presented an action plan for increasing the involvement of women in open source, with many well-considered “dos” and “don’ts” based on observations of what has and has not worked for open source communities.

It was the first opportunity I’ve had to attend a free software conference session which went beyond the typical “yes, this is important” and “yes, there really is a problem here” content which is unfortunately as necessary as it is commonplace.

I won’t attempt to summarize it here, but I can definitely recommend Liz’ presentation to anyone who is looking for concrete, actionable methods to promote gender diversity in their technical communities.

Lucas Nussbaum: The Relationship between Debian and Ubuntu

Historically, in Lucas’ assessment, many Debian developers have been unhappy with Ubuntu, feeling that it had “stolen” from Debian, and was not “giving back”. He said that bad experiences with certain people associated with Canonical and Ubuntu reflected on the project as a whole.

However, he says, things have improved considerably, and today, most Debian developers see some good points in Ubuntu: it brings new users to free software and Debian technology, it provides a system which “just works” for their (less technical) friends and family, and brings new developers to the Debian community.

There are still some obstacles, though. Lucas says that many bugfix patches in Ubuntu are just workarounds, and so are not very helpful to Debian. He gave the example of a patch which disabled the test suite for a package because of a failure, rather than fixing the failure.

He felt that Canonical offered few “free gifts” to Debian, citing as the only example the website banner on ubuntu.com which was displayed for Debian’s 15th anniversary. I felt this was a bit unfair, as Canonical has done more than this over the years, including sponsoring DebConf every year since Canonical was founded.

It occurred to me that the distinctions between Canonical and Ubuntu are still not clear, even within the core free software community. For example, the “main” package repository for Ubuntu is often seen to be associated exclusively with Canonical, while “universe” is the opposite. In reality, Canonical works on whatever is most important to the company, and volunteers do the same. These interests often overlap, particularly in “main” (where the most essential and popular components are).

Lucas speculated that more mission-critical servers run Debian pre-releases (especially testing) than Ubuntu pre-releases. It would be interesting to test this, as it’s rare to get sufficient real-world testing for server software prior to an Ubuntu release.

Lucas presented a wishlist for Ubuntu:

  • more technical discussions between Ubuntu and Debian (particularly on the ubuntu-devel and debian-devel mailing lists)
  • easier access to Launchpad data
  • Launchpad PPAs for Debian

The prominence of Launchpad in these discussions spawned a number of tangential discussions about Launchpad, which were eventually deferred to tomorrow’s Launchpad mini-conf. One audience member asked whether Debian would ever adopt Launchpad. The answer from Lucas and Martin Krafft was that it would definitely not adopt the Canonical instance, but that a separate, interoperating instance might eventually be palatable.

I made the point that there is no single Debian/Ubuntu “relationship”, but a large number of distinct relationships between individuals and smaller groups. Instead of focusing on large-scale questions like infrastructure, I think there would be more mileage in working to build relationships between people around their common work.

Written by Matt Zimmerman

January 18, 2010 at 04:45

Call for nominations: Ubuntu Developer Membership Board

Earlier this year, the Technical Board agreed to establish a Developer Membership Board (DMB) with responsibility for approving new Ubuntu developers and granting them the appropriate privileges in Launchpad.

Previously, this had been the responsibility of the Technical Board itself. For various reasons, it was prudent to separate this function into its own governance board, for example:

  • The Technical Board had been responsible for new core developer applications, while the MOTU Council was responsible for new MOTU applications. This was confusing for applicants, as the two groups evolved different processes, and doesn’t make as much sense in the context of the reorganization of developer privileges. The DMB will be a central governing body for all developers, regardless of which teams they contribute to.
  • The Technical Board would prefer to conduct all of its discussions in public, while the DMB may have cause for private deliberation. This means that the TB mailing list can now be public.
  • The TB had difficulty keeping up with applications in addition to serving its other functions. In particular, TB meetings were difficult to keep on schedule.

Now that the DMB is formally established and active, we would like to hold an election to determine its membership. Until now, the members of the Technical Board have been standing in to fulfill the functions of the DMB.

Because the DMB comprises the functions previously served by the Technical Board and the MOTU Council, the current members of those teams are automatically nominated. Any members of the Technical Board or MOTU Council who *do not* wish to stand for election to the DMB will need to explicitly decline the nomination.

This is an open election, so anyone else may be nominated as well. Candidates should be well qualified to evaluate prospective Ubuntu developers and decide when to entrust them with developer privileges.

There will be a total of 7 seats on the board, chosen by Condorcet voting, similar to the Technical Board election earlier this year.

Nominations should be sent to developer-membership-board@lists.ubuntu.com.

Written by Matt Zimmerman

December 8, 2009 at 19:04


