We'll see | Matt Zimmerman

a potpourri of mirth and madness

Posts Tagged ‘Software quality’

A story in numbers

During a 24 hour period:

  • One person whose virtual server instance was destroyed when they invoked an init script without the required parameter
  • One bug report filed about the problem
  • 6 bug metadata changes reclassifying the bug report
  • 286-word rant on Planet Ubuntu about how the bug was classified
  • 5 comments on the blog post
  • 22 comments on the bug report
  • 2 minutes on the command line
  • 6 commands using UDD (bzr branch, cd, vi, bzr commit, bzr push, bzr lp-open)
  • One-line patch
  • 99% wasted energy

Written by Matt Zimmerman

April 8, 2010 at 09:08

QCon London 2010: Day 3

The tracks which interested me today were “How do you test that?”, which dealt with scenarios where testing (especially automation) is particularly challenging, and “Browser as a Platform”, which is self-explanatory.

Joe Walker: Introduction to Bespin, Mozilla’s Web Based Code Editor

I didn’t make it to this talk, but Bespin looks very interesting. It’s “a Mozilla Labs Experiment to build a code editor in a web browser that Open Web and Open Source developers could love”.

I experimented briefly with the Mozilla hosted instance of Bespin. It seems mostly oriented for web application development, and still isn’t nearly as nice as desktop editors. However, I think something like this, combined with Bazaar and Launchpad, could make small code changes in Ubuntu very fast and easy to do, like editing a wiki.

Doron Reuveni: The Mobile Testing Challenge

Why Mobile Apps Need Real-World Testing Coverage and How Crowdsourcing Can Help

Doron explained how the unique testing requirements of mobile handset applications are well suited to a crowdsourcing approach. As the founder of uTest, he explained their approach to connecting their customers (application vendors) with a global community of testers with a variety of mobile devices. Customers evaluate the quality of the testers’ work, and this data is used to grade them and select testers for future testing efforts in a similar domain. The testers earn money for their efforts, based on test case coverage (starting at about $20 each), bug reports (starting at about $5 each), and so on. Their highest performers earn thousands per month.

uTest also has a system, uTest Remote Access, which allows developers to “borrow” access to testers’ devices temporarily, for the purpose of reproducing bugs and verifying fixes. Doron gave us a live demo of the system, which (after verifying a code out of band through Skype) displayed a mockup of a BlackBerry device with the appropriate hardware buttons and a screenshot of what was displayed on the user’s screen. The updates were not quite real-time, but were sufficient for basic operation. He demonstrated taking a picture with the phone’s camera and seeing the photo within a few seconds.

Dylan Schiemann: Now What?

Dylan did a great job of extrapolating a future for web development based on the trend of the past 15 years. He began with a review of the origin of web technologies, which were focused on presentation and layout concerns, then on to JavaScript, CSS and DHTML. At this point, there was clear potential for rich applications, though there were many roadblocks: browser implementations were slow, buggy or nonexistent, security models were weak or missing, and rich web applications were generally difficult to engineer.

Things got better as more browsers came on the scene, with better implementations of CSS, DOM, XML, DHTML and so on. However, we’re still supporting an ancient implementation in IE. This is a recurring refrain among web developers, for whom IE seems to be the bane of their work. Dylan added something I hadn’t heard before, though, which was that Microsoft states that anti-trust restrictions were a major factor which prevented this problem from being fixed.

Next, there was an explosion of innovation around Ajax and related toolkits, faster JavaScript implementations, infrastructure as a service, and rich web applications like GMail, Google Maps, Facebook, etc.

Dylan believes that web applications are what users and developers really want, and that desktop and mobile applications will fall by the wayside. App stores, he says, are a short term anomaly to avoid the complexities of paying many different parties for software and services. I’m not sure I agree on this point, but there are massive advantages to the web as an application platform for both parties. Web applications are:

  • fast, easy and cheap to deploy to many users
  • relatively affordable to build
  • relatively easy to link together in useful ways
  • increasingly remix-able via APIs and code reuse

There are tradeoffs, though. I have an article brewing on this topic which I hope to write up sometime in the next few weeks.

Dylan pointed out that different layers of the stack exhibit different rates of change: browsers are slowest, then plugins (such as Flex and SilverLight), then toolkits like Dojo, and finally applications which can update very quickly. Automatically updating browsers are accelerating this, and Chrome in particular values frequent updates. This is good news for web developers, as this seems to be one of the key constraints for rolling out new web technologies today.

Dylan feels that technological monocultures are unhealthy, and prefers to see a set of competing implementations converging on standards. He acknowledged that this is less true where the monoculture is based on free software, though this can still inhibit innovation somewhat if it leads to everyone working from the same point of view (by virtue of sharing a code base and design). He mentioned that de facto standardization can move fairly quickly; if 2-3 browsers implement something, it can start to be adopted by application developers.

Comparing the different economics associated with browsers, he pointed out that Mozilla is dominated by search through the chrome (with less incentive to improve the rendering engine), Apple is driven by hardware sales, and Google by advertising delivered through the browser. It’s a bit of a mystery why Microsoft continues to develop Internet Explorer.

Dylan summarized the key platform considerations for developers:

  • choice and control
  • taste (e.g. language preferences, what makes them most productive)
  • performance and scalability
  • security

and argued that the best way to deliver these is through open web technologies, such as HTML 5, which now offers rich media functionality including audio, video, vector graphics and animations. He closed with a few flashy demos of HTML 5 applications showing what could be done.

Written by Matt Zimmerman

March 12, 2010 at 17:14

QCon London 2010: Day 2

I was talk-hopping today, so none of these are complete summaries, just enough to capture my impressions from the time I was there. I may go back and watch the video for the ones which turned out to be most interesting.

Yesterday, I spotted a couple of practices employed by the QCon organizers which I want to note here, with a view to trying them out at Canonical and Ubuntu events:

  • As participants leave each talk, they pass a basket with a red, a yellow and a green square attached to it. Next to the basket are three small stacks of colored paper, also red, yellow and green. There are no instructions, indeed no words at all, but the intent seems clear enough: drop a card in the basket to give feedback.
  • The talks were spread across multiple floors in the conference center, which I find is usually awkward. They mitigated this somewhat by posting a directory of the rooms inside each lift.

Chris Read: The Cloud Silver Bullet

Which calibre is right for me?

Chris offered some familiar warnings about cloud technologies: that they won’t solve all problems, that effort must be invested to reap the benefits, and that no one tool or provider will meet all needs. He then classified various tools and services according to their suitability for long or short processing cycles, and high or low “data sensitivity”.

Simon Wardley: Situation Normal, Everything Must Change

I actually missed Simon’s talk this time, but I’ve seen him speak before and talk with him every week about cloud topics as a colleague at Canonical. I highly recommend his talks to anyone trying to make sense of cloud technology and decide how to respond to it.

In some of the talks yesterday, there was a murmur of anti-cloud sentiment, with speakers asserting it was not meaningful, or they didn’t know what it was, or that it was nothing new. Simon’s material is the perfect antidote to this attitude, as he makes it very clear that there is a genuinely important and disruptive trend in progress, and explains what it is.

Jesper Boeg: Kanban

Crossing the line, pushing the limit or rediscovering the agile vision?

Jesper shared experiences and lessons learned with Kanban, and some of the problems it addresses which are present in other methodologies. His material was well balanced and insightful, and I’d like to go back and watch the full video when it becomes available.

Here again was a clear and pragmatic focus on matching tools and processes to the specific needs of the team, business and situation.

Ümit Yalcinalp: Development Model for the Cloud

Paradigm Shift or the Same Old Same Old?

Ümit focused on the PaaS (platform as a service) layer, and the experience offered to developers who build applications for these platforms. An evangelist from Salesforce.com, she framed the discussion as a comparison between force.com, Google App Engine and Microsoft Azure.

Eric Evans: Folding Design into an Agile Process

Eric tackled the question of how to approach the problem of design within the agile framework. As an outspoken advocate of domain-driven design, he presented his view in terms of this school and its terminology.

He emphasized the importance of modeling “when the critical complexity of the project is in understanding and communicating about the domain”. The “expected” approach to modeling is to incorporate an up-front analysis phase, but Eric argues that this is misguided. Because “models are distilled knowledge”, and teams are relatively ignorant at the start of a project, modeling in this way captures that ignorance and makes it persist.

Instead, he says, we should employ a “pull” approach (in the Lean sense), and decide to work on modeling when:

  • communication with stakeholders deteriorates
  • solutions are more complex than the problems
  • velocity slows (because completed work becomes a burden)

Eric illustrated his points in part by showing video clips of engineers and business people engaged in dialog (here again, the focus on people rather than tools and process). He used this material as the basis for showing how models underlie these interactions, but are usually implicit. These dialogs were full of hints that the people involved were working from different models, and the software model needed to be revised. An explicit model can be a very powerful communication tool on software projects.

He outlined the process he uses for modeling, which was highly iterative and involves identifying business scenarios, using them to develop and evaluate abstract models, and testing those models by experimenting with code (“code probes”). Along the way, he emphasized the importance of making mistakes, not only as a learning tool but as a way to encourage creative thinking, which is essential to modeling work. In order to encourage the team to “think outside the box” and improve their conceptual model, he goes as far as to require that several “bad ideas” are proposed along the way, as a precondition for completing the process.

Eric is working on a white paper describing this process. A first draft is available on his website, and he is looking for feedback on it.

Modeling work, he suggested, can be incorporated into:

  • a stand up meeting
  • a spike
  • an iteration zero
  • release planning

He pointed out that not all parts of a system are created equal, and some of them should be prioritized for modeling work:

  • areas of the system which seem to require frequent change across projects/features/etc.
  • strategically important development efforts
  • user experiences which are losing coherence

This was a very compelling talk, whose concepts were clearly applicable beyond the specific problem domain of agile development.

Written by Matt Zimmerman

March 11, 2010 at 17:43

QCon London 2010: Day 1

For the first time in several years, I had the opportunity to attend a software conference in the city where I lived at the time. I’ve benefited from many InfoQ articles in the past couple of years, and watched recordings of some excellent talks from previous QCon events, so I jumped at the opportunity to attend QCon London 2010. It is being held in the Queen Elizabeth II Conference Center, conveniently located a short walk away from Canonical’s London office.

Whenever I attend conferences, I can’t help taking note of which operating systems are in use, and this tells me something about the audience. I was surprised to notice that in addition to the expected Mac and Windows presence, there was a substantial Ubuntu contingent and some Fedora as well.

Today’s tracks included two of particular interest to me at the moment: “Dev and Ops: A single team” and the unfortunately gendered “Software Craftsmanship”.

Jason Gorman: Beyond Masters and Apprentices

A Scalable, Peer-led Model For Building Good Habits In Large & Diverse Development Teams

Jason explained the method he uses to coach software developers. I got a bad seat on the left side of the auditorium, where it was hard to see the slides because they were blocked by the lectern, so I may have missed a few points.

He began by outlining some of the primary factors which make software more difficult to change over time:

  • Readability: developers spend a lot of their time trying to understand code that they (or someone else) have written
  • Complexity: as well as making code more difficult to understand, complexity increases the chance of errors. More complex code can fail in more ways.
  • Duplication: when code is duplicated, it’s more difficult to change because we need to keep track of the copies and often change them all
  • Dependencies and the “ripple effect”: highly interdependent code is more difficult to change, because a change in one place requires corresponding changes elsewhere
  • Regression Test Assurance: I didn’t quite follow how this fit into the list, to be honest. Regression tests are supposed to make it easier to change the code, because errors can be caught more easily.

He then outlined the fundamental principles of his method:

  • Focus on Learning over Teaching – a motivated learner will find their own way, so focus on enabling them to pull the lesson rather than pushing it to them (“there is a big difference between knowing how to do something and being able to do it”)
  • Focus on Ability over Knowledge – learn by doing, and evaluate progress through practice as well (“how do you know when a juggler can juggle?”)

…and went on to outline the process from start to finish:

  1. Orientation, where peers agree on good habits related to the subject being learned. The goal seemed to be to draw out knowledge from the group, allowing them to define their own school of thought with regard to how the work should be done. In other words, learn to do what they know, rather than trying to inject knowledge.
  2. Practice programming, trying to exercise these habits and learn “the right way to do it”
  3. Evaluation through peer review, where team members pair up and observe each other. Over the course of 40-60 hours, they watch each other program and check off where they are observed practicing the habits.
  4. Assessment, where learners practice a time-boxed programming exercise, which is recorded. The focus is on methodical correctness, not speed of progress. Observers watch the recording (which only displays the code), and note instances where the habit was not practiced. The assessment is passed only if fewer than three errors are noticed.
  5. Recognition, which comes through a certificate issued by the coach, but also through admission to a networking group on LinkedIn, promoting peer recognition

Jason noted that this method of assessing was good practice in itself, helping learners to practice pairing and observation in a rigorous way.

After the principal coach coaches a pilot group, the pilot group then goes on to coach others while they study the next stage of material.

To conclude, Jason gave us a live demo of the assessment technique, by launching Eclipse and writing a simple class using TDD live on the projector. The audience were provided with worksheets containing a list of the habits to observe, and instructed to note instances where he did not practice them.

Julian Simpson: Siloes are for farmers

Production deployments using all your team

After a brief introduction to the problems targeted by the devops approach, Julian offered some advice on how to do it right.

He began with the people issues, reminding us of Weinberg’s second law, which is “no matter what they tell you, it’s always a people problem”.

His people tips:

  • avoid email, which (in keeping with a recent trend) he criticized as a severely flawed communication medium
  • respect everyone
  • have lunch with people on the other side of the wall
  • discuss your problems with other groups (don’t just ask for a specific solution)
  • invite everyone to stand-ups and retrospectives
  • co-locate the sysadmins and developers (Thomas Allen)

Next, a few process suggestions:

  • Avoid code ownership generally (or rather, promote joint/collective ownership)
  • Pair developers with sysadmins
  • It’s done when the code is in production (I would rephrase as: it’s not done until the code is in production)

and then tools:

  • Teach your sysadmins to use version control
  • Help your developers write performant code
  • Help developers with managing their dev environment
  • Run your deploy scripts via continuous integration (leading toward continuous deployment)
  • Use Puppet or Chef (useful as a form of documentation as well as deployment tools, and on developer workstations as well as servers)
  • Integrate monitoring and continuous integration (test monitoring in the development environment)
  • Deliver code as OS packages (e.g. RPM, DEB)
  • Separate binaries and configuration
  • Harden systems immediately and enable logging for tuning security configuration (i.e. configure developer workstations with real security, making the development environment closer to production)
  • Give developers access to production logs and data
  • Re-create the developer environment often (to clear out accumulated cruft)

I agreed with a lot of what was said, objected to some, and lacked clarity on a few points. I think this kind of material is well suited to a multi-way BOF style discussion rather than a presentation format, and would have liked more opportunity for discussion.

Lars George and Fabrizio Schmidt: Social networks and the Richness of Data

Getting distributed webservices done with Nosql

Lars and Fabrizio described the general “social network problem”, and how they went about solving it. This problem space involves the processing, aggregation and dissemination of notifications for a very high volume of events, as commonly manifest in social networking websites such as Facebook and Twitter which connect people to each other to share updates. Apparently simple functionality, such as displaying the most recent updates from one’s “friends”, quickly becomes complex at scale.

As an example of the magnitude of the problem, they explained that they process 18 million events per day, and that, in the course of storing and sharing these across the social graph, some operations peak as high as 150,000 per second. Such large and rapidly changing data sets represent a serious scaling challenge.

They originally built a monolithic, synchronous system called Phoenix, built on:

  • LAMP frontends: Apache+PHP+APC (500 of them)
  • Sharded MySQL multi-master databases (150 of them)
  • memcache nodes with 1TB+ (60 of them)

They then added on asynchronous services alongside this, to handle things like Twitter and mobile devices, using Java (Tomcat) and RabbitMQ. The web frontend would send out AMQP messages, which would then be picked up by the asynchronous services, which would (where applicable) communicate back to Phoenix through an HTTP API call.

When the time came to re-architect their activity system, they identified the following requirements:

  • endless scalability
  • storage- and cloud-independent
  • fast
  • flexible and extensible data model

This led them to an architecture based on:

  • Nginx + Janitor
  • Embedded Jetty + RESTeasy
  • NoSQL storage backends (no fewer than three: Redis, Voldemort and Hazelcast)

They described this architecture in depth. The things which stood out for me were:

  • They used different update strategies (push vs. pull) depending on the level of fan-out for the node (i.e. number of “friends”)
  • They implemented a time-based activity filter which recorded a global timeline, from minutes out to days. Rather than traversing all of the user’s “friends” looking for events, they just scan the most recent events to see if their friends appear there.
  • They created a distributed, scalable concurrent ID generator based on Hazelcast, which uses distributed locking to assign ranges to nodes, so that nodes can then quickly (locally) assign individual IDs
  • It’s interesting how many of the off-the-shelf components had native scaling, replication, and sharding features. This sort of thing is effectively standard equipment now.
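The range-allocation trick behind their ID generator is easy to sketch. Below is a minimal single-process Python approximation of the idea, with Hazelcast’s distributed lock replaced by a threading.Lock and the class names and range size invented for illustration; the real system coordinates the allocator across nodes rather than within one process.

```python
import threading

RANGE_SIZE = 1000  # IDs handed to a node per allocation (hypothetical value)

class RangeAllocator:
    """Stands in for the Hazelcast-backed coordinator: hands out disjoint
    ID ranges under a lock, so contention happens only once per
    RANGE_SIZE IDs rather than once per ID."""
    def __init__(self):
        self._lock = threading.Lock()  # a distributed lock in the real system
        self._next = 0

    def allocate(self):
        with self._lock:
            start = self._next
            self._next += RANGE_SIZE
        return start, start + RANGE_SIZE

class NodeIdGenerator:
    """Per-node generator: assigns individual IDs locally, going back to
    the coordinator only when its current range is exhausted."""
    def __init__(self, allocator):
        self._allocator = allocator
        self._current = self._end = 0

    def next_id(self):
        if self._current == self._end:
            self._current, self._end = self._allocator.allocate()
        nid = self._current
        self._current += 1
        return nid

allocator = RangeAllocator()
a, b = NodeIdGenerator(allocator), NodeIdGenerator(allocator)
ids = [a.next_id() for _ in range(3)] + [b.next_id() for _ in range(3)]
print(ids)  # node a draws 0..2 from its range, node b 1000..1002 from its own
```

The point of the design is that the expensive, coordinated operation (taking the lock) is amortized over a whole range, while individual IDs are assigned with purely local work.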

Their list of lessons learned:

  • Start benchmarking and profiling your app early
  • A fast and easy deployment keeps motivation high
  • Configure Voldemort carefully (especially on large heap machines)
  • Read the mailing lists of the NoSQL system you use
  • No solution in docs? – read the sources
  • At some point stop discussing and just do it

Andres Kitt: Building Skype

Learnings from almost five years as a Skype Architect

Andres began with an overview of Skype, which serves 800,000 registered users per employee (521 million users to 650 employees). Their core team is based in Estonia. Their main functionality is peer-to-peer, but they do need substantial server infrastructure (PHP, C, C++, PostgreSQL) for things like peer-to-peer supporting glue, e-commerce and SIP integration. Skype uses PostgreSQL heavily in some interesting ways, in a complex multi-tiered architecture of databases and proxies.

His first lesson was that technical rules of thumb can lead us astray. It is always tempting to use patterns that have worked for us previously, in a different project, team or company, but they may not be right for another context. They can and should be used as a starting point for discussion, but not presumed to be the solution.

Second, he emphasized the importance of paying attention to functional architecture, not only technical architecture. As an example, he showed how the Skype web store, which sells only 4 products (skype in, skype out, voicemail, and subscription bundles of the previous three) became incredibly complex, because no one was responsible for this. Complex functional architecture leads to complex technical architecture, which is undesirable as he noted in his next point.

Keep it simple: minimize functionality, and minimize complexity. He gave an example of how their queuing system’s performance and scalability were greatly enhanced by removing functionality (the guarantee to deliver messages exactly once), which enabled the simplification of the system.

He also shared some organizational learnings, which I appreciated. Maybe my filters are playing tricks on me, but it seems as if more and more discussion of software engineering is focusing on organizing people. I interpret this as a sign of growing maturity in the industry, which (as Andres noted) has its roots in a somewhat asocial culture.

He noted that architecture needs to fit your organization. Designs need to be measured primarily by how well they solve business problems, rather than by beauty or elegance.

He stressed the importance of communication, a term which I think is becoming so overused and diluted in organizations that it is not very useful. It’s used to refer to everything from roles and responsibilities, to personal relationships, to cultural norming, and more. In the case of Skype, what Andres learned was the importance of organizing and empowering people to facilitate alignment, information flow and understanding between different parts of the business. Skype evolved an architecture team which interfaces between (multiple) business units and (multiple) engineering teams, helping each to understand the other and taking responsibility for the overall system design.


Overall, I thought the day’s talks gave me new insight into how Internet applications are being developed and deployed in the real world today. They affirmed some of what I’ve been wondering about, and gave me some new things to think about as well. I’m looking forward to tomorrow.

Written by Matt Zimmerman

March 10, 2010 at 17:44

Ubuntu 10.04 LTS: How we get there

The development of Ubuntu 10.04 has been underway for nearly two months now, and will produce our third long-term (LTS) release in April. Rick Spencer, desktop engineering manager, summarized what’s ahead for the desktop team, and a similar update will be coming soon from Jos Boumans, our new engineering manager for the server team.

What I want to talk about, though, is not the individual projects we’re working on. I want to explain how the whole thing comes together, and what’s happening behind the scenes to make 10.04 LTS different from other Ubuntu releases.

Changing the focus

Robbie Williamson, engineering manager for the foundations team, has captured the big picture in the LTS release plan, the key elements of which are:

Merge from Debian testing

By merging from Debian testing, rather than the usual unstable, we aim to avoid regressions early in the release cycle which tend to block development work. So far, Lucid has been surprisingly usable in its first weeks, compared to previous Ubuntu releases.

Add fewer features

By starting fewer development projects, and opting for more testing projects over feature projects, we will free more time and energy for stabilization. This approach will help us to discover regressions earlier, and to fix them earlier as well. This doesn’t mean that Ubuntu 10.04 won’t have bugs (with hundreds of millions of lines of source code, there is no such thing as a bug-free system), but we believe it will help us to produce a system which is suitable for longer-term use by more risk-averse users.

Avoid major infrastructure changes

We will bring in less bleeding-edge code from upstream than usual, preferring to stay with more mature components. Where a major transition is brewing upstream, we will probably opt to defer it to the next Ubuntu cycle. While this might delay some new functionality slightly, we believe the additional stability is well worth it for an LTS release.

Extend beta testing

With less breakage early in the cycle, we plan to enter beta early. Traditionally, the beta period is when we receive the most user feedback, so we want to make the most of it. We’ll deliver a usable, beta-quality system substantially earlier than in 9.10, and our more adventurous users will be able to upgrade at that point with a reasonable expectation of stability.

Freeze with Debian

With Debian “squeeze” expected to freeze in March, Ubuntu and Debian will be stabilizing on similar timelines. This means that Debian and Ubuntu developers will be attacking the same bugs at the same time, creating more opportunities to join forces.

Staying on course

In addition, we’re rolling out some new tools and techniques to track our development work, which were pioneered by the desktop team in Ubuntu 9.10. We believe this will help us to stay on course, and make adjustments earlier when needed. Taking some pages from the Agile software development playbook, we’ll be planning in smaller increments and tracking our progress using burn-down charts. As always, we aim to make Ubuntu development as transparent as possible, so all of this information is posted publicly so that everyone can see how we’re doing.
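As a rough illustration of what a burn-down chart tracks (the numbers and function are invented here, not taken from the desktop team’s actual tooling): remaining work per day is plotted against an ideal straight line descending to zero, so a gap between the two lines signals early that an adjustment is needed.

```python
def burn_down(total_points, days, completed_per_day):
    """Return (ideal, actual) remaining-work series for a burn-down chart."""
    # Ideal line: work decreases linearly from total_points to zero.
    ideal = [total_points - total_points * d / days for d in range(days + 1)]
    # Actual line: subtract what was really completed each day.
    remaining, actual = total_points, [total_points]
    for done in completed_per_day:
        remaining -= done
        actual.append(remaining)
    return ideal, actual

ideal, actual = burn_down(20, 5, [3, 5, 2, 6, 4])
print(actual)  # [20, 17, 12, 10, 4, 0]
```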

Delivering for users

By making these changes, we aim to deliver for our users the right balance of stability and features that they expect from an Ubuntu LTS release. In particular, we want folks to feel confident deploying Ubuntu 10.04 in settings where it will be actively maintained for a period of years.

Written by Matt Zimmerman

December 23, 2009 at 09:00

Stemming the tide of Ubuntu kernel bugs

The Ubuntu kernel team receives an extraordinary number of bug reports, about 1000 in the past week. Yesterday, Leann Ogasawara, our Ubuntu kernel QA lead, addressed a roomful of Ubuntu developers. She shared how the kernel team is handling this situation, and asked for ideas and suggestions from the crowd.

To try to help out, I reviewed the most recent screenful of kernel bug reports (75) to see if there were any patterns we could take advantage of. I discussed with the kernel team some ways in which we could improve our approach, and implemented some of the changes.

Altogether, this was only a few hours of work, but should eliminate a large number of invalid reports, and significantly increase the quality of many more.

A quick back-of-the-envelope count revealed the following categories:

Suspend or hibernate failures (36%)

A majority of these are automated reports from apport. This is good, because we have the opportunity to collect relevant information from the system when the problem happened, but it also means that there are a lot of reports.

Although some new logging was added in 9.04, these reports still often do not contain enough information to diagnose the problem.

One bit of data which the kernel team has said would be useful is the frequency of the failure: does it fail every time, or only sometimes? We can improve the logging to keep track of successful resumes as well as failures, and then include this data in the report.
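A sketch of what that logging might look like (the file location and function names below are hypothetical, not the actual implementation): keep running totals of successful and failed resumes, and include the ratio in the bug report.

```python
import json, os

STATE_FILE = "/var/lib/pm-utils/resume-stats.json"  # hypothetical location

def record_resume(success, state_file=STATE_FILE):
    """Called after each resume attempt: update running totals so a bug
    report can say 'failed 3 of 40 times' instead of just 'fails'."""
    stats = {"ok": 0, "failed": 0}
    if os.path.exists(state_file):
        with open(state_file) as f:
            stats = json.load(f)
    stats["ok" if success else "failed"] += 1
    with open(state_file, "w") as f:
        json.dump(stats, f)
    return stats

def failure_summary(stats):
    """One line suitable for attaching to an apport report."""
    total = stats["ok"] + stats["failed"]
    return "resume failed %d of %d times" % (stats["failed"], total)
```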

Networking problems, both wired and wireless (13%)

The kernel team has a partial specification for some improvements to make here.

Package installation and upgrade failures (10%)

The kernel tends to be a trigger point for a variety of problems in this area which are not its fault. For example, if the system is very low on disk space, upgrading the kernel can fail because it is a large package, so we automatically suppress those reports. In my sample, none of the failures being reported against the kernel actually belonged there.

To help address this, we can suppress bogus reports, and redirect valid reports to the appropriate package. I committed fixes to apport which will file the problem reports against grub or initramfs-tools if they were caused by failures in update-grub or update-initramfs respectively. I also added an apport bug pattern to suppress bug reports against the kernel which contained certain dpkg unpacking errors, and added a patch to apt to try to detect this case as well.
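Apport’s actual bug patterns are XML files hosted by Ubuntu; the sketch below just illustrates the suppress-or-reassign logic in plain Python, with regular expressions invented for illustration rather than copied from the real patterns.

```python
import re

# Hypothetical rules in the spirit of the changes described above: match
# text from the failure log, then either suppress the report or reassign
# it to the package actually at fault.
RULES = [
    (re.compile(r"update-grub.*(failed|error)"), "reassign", "grub"),
    (re.compile(r"update-initramfs.*(failed|error)"), "reassign", "initramfs-tools"),
    (re.compile(r"error processing .*\(--unpack\)"), "suppress", None),
    (re.compile(r"No space left on device"), "suppress", None),
]

def classify(log_excerpt, default_package="linux"):
    """Return (action, package) for a kernel install/upgrade failure log."""
    for pattern, action, package in RULES:
        if pattern.search(log_excerpt):
            return action, package
    return "file", default_package
```

For example, a log mentioning an update-grub failure would be reassigned to grub, while a dpkg unpacking error would be suppressed entirely.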

Audio-related problems (9%)

Currently, the first step for most of these bug reports is to ask the user to complete the report by running apport-collect -p alsa-base to collect audio-related debugging data.

Because they account for a significant proportion of all kernel bugs, I committed an apport patch to simply attach this information by default for all kernel bugs.

Kernel panics, oopses, lockups etc. (8%)

These bugs are notoriously tricky to file properly, because the system is often non-functional or severely impaired.

In Karmic, we now have a kernel crash dump facility which is very easy to use. Rather than reporting a bug saying “my computer locks up”, you can throw a switch which will enable the problem to be automatically detected, recorded and analyzed. By the time the bug report reaches the kernel developers, it should have detailed information about where the problem occurred, rather than requiring the reporter to use things like digital cameras to capture panic messages.

We’ve also wired up kerneloops to apport, so that oopses are reported through an automatic facility which can produce a more complete bug report.

Written by Matt Zimmerman

August 7, 2009 at 09:57

Collecting debug information when your GPU hangs

After having my i965 hang twice this morning, I decided to create a small script to make it easier to capture the relevant information when this sort of bug happens. Because the X server stops running, the display is useless, and it’s convenient to be able to get the relevant information by running a single command (I do this using ConnectBot on my phone).


It’s designed to be invoked manually by the user while the system is hung, but if we could somehow detect that the GPU has locked up, it could be run automatically.

It collects dmesg, /proc/interrupts, /proc/dri and (for Intel cards) intel_gpu_dump output at the time of the hang.  It then leaves behind a crash report in /var/crash, so that after the user recovers their system, apport will collect the usual information and submit a bug on the appropriate package.

If this seems useful, it could be added to x11-common or to apport.
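The script described above amounts to something like this sketch (the report filename, fields, and the subset of collected data are illustrative; the real script also grabs /proc/dri and intel_gpu_dump output):

```python
import os
import subprocess
import tempfile
import time

# Hypothetical sketch of the collection script: gather whatever
# diagnostics are available at the time of the hang and leave a
# crash report behind for apport to process after recovery.
def collect_gpu_hang_info(crash_dir="/var/crash"):
    info = {"Date": time.ctime()}
    try:
        # The kernel log usually contains the GPU error state
        info["Dmesg"] = subprocess.run(["dmesg"], capture_output=True,
                                       text=True).stdout
    except OSError:
        pass
    if os.path.exists("/proc/interrupts"):
        with open("/proc/interrupts") as f:
            info["Interrupts"] = f.read()
    os.makedirs(crash_dir, exist_ok=True)
    path = os.path.join(crash_dir, "gpu-hang.crash")
    with open(path, "w") as f:
        for key, value in info.items():
            f.write(f"{key}:\n{value}\n")
    return path

# Demo against a temporary directory instead of /var/crash
report_path = collect_gpu_hang_info(tempfile.mkdtemp())
print(os.path.basename(report_path))  # → gpu-hang.crash
```

On a hung system this would be run over SSH (or ConnectBot); after reboot, apport finds the file in /var/crash and files the bug with the data already attached.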

Written by Matt Zimmerman

June 17, 2009 at 14:57


Equilibrium in free software testing

When a bug is filed in a free software project’s bug tracker, a social exchange takes place.  Bug reporters give their time and attention to describing, debugging and testing, in exchange for a fair chance that the problem will be fixed.  Project representatives make the effort to listen and understand the problem, and apply their specialized knowledge, in exchange for real-world testing and feedback which drive improvements in their software.  This feedback loop is one of the essential benefits of the free software development model.

Based on the belief that this exchange is of mutual benefit, the people involved form certain expectations of each other.  When I report a bug, I expect that:

  • the bug will be reviewed by a project representative
  • they will make a decision about the relative importance of the bug
  • project developers will fix the most important bugs in future releases of the software

When I receive a bug report, I expect that:

  • if more information is needed, the bug reporter will supply it
  • if I can’t diagnose the problem independently, the bug reporter will help with the analysis
  • if I need help to test and verify a fix for the bug, the bug reporter will provide it

Naturally, everything works best when the system is in equilibrium: there is a steady flow of testing and bug reports, and users feel satisfied with the quality of the software.  Everybody wins.  Ideally, much of this activity takes place around pre-release snapshots of the software, so that early adopters experience the newest features and fixes, and developers can fix bugs before they release a new version for mainstream use.  This state produces the best quality free software.

Unfortunately, that isn’t always the case.  When our expectations aren’t met, or sufficient progress is not made, we feel misled.  If a bug report languishes in the bug tracker without ever being looked at, the bug reporter’s time and effort have been wasted.  If the report lacks sufficient detail, and a request for more information goes unanswered, the developer’s time and effort have been wasted.  This feeling is magnified by the fact that both parties are usually volunteers, who are donating their time in good faith.

The imbalance can often be seen in the number of new (unreviewed) bug reports for a particular project.  At one extreme (the “left”) is a dead project which receives a flood of bug reports that are never put to good use.  At the other extreme (the “right”) is a very active project with no users, which suffers from a lack of feedback and testing.  Most projects are somewhere in the middle, though a perfect balance is rare.

Ubuntu currently receives too many bug reports for its developers to effectively process, putting it well left of center.  It has a large number of enthusiastic users willing to run unstable development code, and actively encourages its users to participate in its development by testing and reporting bugs, even to the point of being flooded with data.  A similar distance to the right of center might be the Linux kernel, which receives comparatively few bug reports.  Kernel developers struggle to encourage users to test their unstable development code, because it’s inconvenient to build and install, and a bug can easily crash the system and cost them time and work.  There are a huge number of people who use the Linux kernel, but very few of them have relationships with its developers.

So, what can a project do to promote equilibrium?  Users and developers need to receive good value for their efforts, and they need to keep pace with each other.

The Linux kernel seems to need more willing testers, which distributions like Fedora and Ubuntu are helping to provide by packaging and distributing snapshots of kernel development to their users.  The loop isn’t quite closed, though, as bug reports don’t always make their way back to the kernel developers.

Ubuntu, perhaps, needs more developers, and so we’ve undertaken a number of projects to try to make it easier to contribute to Ubuntu as a developer, and to help our developers work more efficiently.  Soon, it should be possible to commit patches directly into Launchpad without any special privileges, so that they can be easily reviewed and merged by project developers.  This isn’t a fix, but we hope it will help move us closer to a balance.

What else could we try?  I’m particularly interested in approaches which have worked well in other projects.

Written by Matt Zimmerman

April 10, 2009 at 11:29


Please don’t report Ubuntu bugs directly to Launchpad

This warrants some explanation. If you just want to know what to do instead, skip to the end.

For over three years now, Ubuntu has been using Launchpad to track bugs. This has been an overwhelming success in terms of the number of bug reports filed, so much so that we have trouble keeping up with them.

Ubuntu, like Debian, carries a huge variety of software, and we accept bug reports for all of it. This means that all of the problems in thousands of independently produced components, as well as all of the secondary issues which arise from integrating them together, are all legitimate Ubuntu bug reports. Casual searching hints that for a given problem with a free software program today, there is likely a bug report in Launchpad about it.

What do we gain by collecting all of this information in one place? We get the big picture. We can make connections between seemingly unrelated bugs in multiple programs which point to a common cause. We can quickly and easily reclassify bug reports which are filed in the wrong place. We can provide a feed of all the data, and access through a single API, so that it can be analyzed by anyone. It gives us a coherent view of the bug data for all of the projects we depend on.

So, having all of these bug reports is useful. How do we make it manageable? We never expect to have zero open bugs, but we do expect to ship products that meet people’s needs, and that means that the worst bugs get fixed within the time available in our release cycle. We’re limited by how quickly our triagers can read and evaluate each report, and how quickly a developer can analyze and fix the problem.

The good news is, you can help accelerate both groups’ work with one simple step: report bugs using ubuntu-bug rather than going directly to Launchpad.

This automatically includes as much information as we can collect about your problem, without any additional work on your part. It’s easy!

$ ubuntu-bug network-manager      # report a bug on Network Manager
$ ubuntu-bug linux                # report a bug on the Linux kernel

The man page for ubuntu-bug contains more examples.

Of course, there are some circumstances where this is not possible, and you can still file bugs the old-fashioned way if necessary. If you do, please remember to include details such as which version of Ubuntu you’re using, which package the problem is in, which version of the package is installed, and so on. Read the official instructions for reporting bugs for more information.
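If you are collecting those details by hand, the standard tools can do most of the work; this sketch shows one way to assemble them (the header layout is illustrative, not a required format):

```python
import subprocess

# Sketch of gathering the details a manual report should include:
# Ubuntu release, package name, and installed package version.
def manual_report_header(package):
    def run(cmd):
        try:
            return subprocess.run(cmd, capture_output=True,
                                  text=True).stdout.strip()
        except OSError:
            return "(unknown)"  # tool not available on this system
    return "\n".join([
        f"Package: {package}",
        f"Release: {run(['lsb_release', '-ds'])}",
        f"Version: {run(['dpkg-query', '-W', '-f=${Version}', package])}",
    ])

print(manual_report_header("linux").splitlines()[0])  # → Package: linux
```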

Written by Matt Zimmerman

March 31, 2009 at 20:22


Ubuntu quality: or, “but what about my bug?”

Leading up to the Ubuntu 8.10 release, Ubuntu developers and quality assurance engineers have been very busy sorting bugs, deciding what can and should be fixed for the final release, and what cannot.  They make these decisions by estimating the importance of each bug, identifying whether it is a regression, assessing the risk of potential fixes, and by applying their best judgement.  Developers can then focus their efforts where they are most needed.

On the whole, I think that we do remarkably well at this.  In September, for example, the total number of open bugs in Ubuntu increased by only 70.  That wouldn’t sound like much of an achievement, if not for the fact that in the same time period, 7872 new bug reports were filed.  The remaining 7802, or over 99%, were resolved (some duplicates, some invalid, some fixed, etc.).

The news isn’t all good, of course.  There are currently over 46000 open Ubuntu bug reports in Launchpad.  Even at this impressive rate of throughput, and even if we were to freeze all development and stop accepting new bug reports entirely, I estimate it would take over half a year just to sift through the backlog of reports we have already received.  There is a lot of noise in that data.

When 8.10 is released, as with each previous release, some users will be disappointed that it has a bug which affects them.  This is regrettable, and I feel badly for affected users each time that I read about this, but it is unlikely to ever change.  There will never be a release of Ubuntu which is entirely free of bugs, and every non-trivial bug is important to someone.

So, what do we do?  What should be our key goals where quality is concerned?  We don’t currently have a clear statement of this, but here’s a strawman:

Prioritize bug reports effectively.  It’s usually difficult to say whether a bug report is valid or serious until a human has reviewed it, so this means having enough people to review and acknowledge incoming bug reports, and helping them to work as efficiently as possible.  The Ubuntu bug squad is a focal point for this type of work, though a great deal of this is done by developers in their everyday work as well.  Projects like 5-a-day are a good way to get started.

Measure our performance objectively.  By tracking metrics for each part of the quality assurance process, we can understand where we need to improve.  The QA team has been developing tools, such as the package status pages, to collect hard data on bugs.

Improve incrementally.  By minimizing regressions from one release to the next, and making some progress on old bugs, we can hope to make each Ubuntu release better, in terms of overall quality, than the one before it (or at least no worse).  The regression tracker (which will hopefully move to the QA site soon) will help to coordinate this effort.

Ensure that the most serious bugs are identified and fixed early.  It’s important to the success of the project that we continue to produce regular releases, and showstopper bugs risk delaying our release dates and adversely affecting the next release cycle.  The release management weather report is one tool which helps monitor this process, though a great deal of coordination is required in order to provide it with useful data.

Communicate about known bugs.  It is inevitable that there will be known bugs remaining in each release, and we should do our best to advise our users about them, including any known workarounds.  The Ubuntu 8.10 beta release notes are a good example of this.

I think that to do well in all of these areas would be a good goal for quality in Ubuntu.

Written by Matt Zimmerman

October 29, 2008 at 12:09