Posts Tagged ‘Conferences’
For the first time in several years, I had the opportunity to attend a software conference in the city where I lived at the time. I’ve benefited from many InfoQ articles in the past couple of years, and watched recordings of some excellent talks from previous QCon events, so I jumped at the chance to attend QCon London 2010. It was held in the Queen Elizabeth II Conference Centre, conveniently located a short walk from Canonical’s London office.
Whenever I attend conferences, I can’t help taking note of which operating systems are in use, and this tells me something about the audience. I was surprised to notice that in addition to the expected Mac and Windows presence, there was a substantial Ubuntu contingent and some Fedora as well.
A Scalable, Peer-led Model For Building Good Habits In Large & Diverse Development Teams
Jason explained the method he uses to coach software developers.
I got a bad seat on the left side of the auditorium, where it was hard to see the slides because they were blocked by the lectern, so I may have missed a few points.
He began by outlining some of the primary factors which make software more difficult to change over time:
- Readability: developers spend a lot of their time trying to understand code that they (or someone else) have written
- Complexity: as well as making code more difficult to understand, complexity increases the chance of errors. More complex code can fail in more ways.
- Duplication: when code is duplicated, it’s more difficult to change because we need to keep track of the copies and often change them all
- Dependencies and the “ripple effect”: highly interdependent code is more difficult to change, because a change in one place requires corresponding changes elsewhere
- Regression Test Assurance: I didn’t quite follow how this fit into the list, to be honest. Regression tests are supposed to make it easier to change the code, because errors can be caught more easily.
He then outlined the fundamental principles of his method:
- Focus on Learning over Teaching – a motivated learner will find their own way, so focus on enabling them to pull the lesson rather than pushing it to them (“there is a big difference between knowing how to do something and being able to do it”)
- Focus on Ability over Knowledge – learn by doing, and evaluate progress through practice as well (“how do you know when a juggler can juggle?”)
…and went on to outline the process from start to finish:
- Orientation, where peers agree on good habits related to the subject being learned. The goal seemed to be to draw out knowledge from the group, allowing them to define their own school of thought with regard to how the work should be done. In other words, learn to do what they know, rather than trying to inject knowledge.
- Practice programming, trying to exercise these habits and learn “the right way to do it”
- Evaluation through peer review, where team members pair up and observe each other. Over the course of 40-60 hours, they watch each other program and check off where they are observed practicing the habits.
- Assessment, where learners complete a time-boxed programming exercise, which is recorded. The focus is on methodical correctness, not speed of progress. Observers watch the recording (which shows only the code), and note instances where the habit was not practiced. The assessment is passed only if fewer than three errors are noticed.
- Recognition, which comes through a certificate issued by the coach, but also through admission to a networking group on LinkedIn, promoting peer recognition
Jason noted that this method of assessing was good practice in itself, helping learners to practice pairing and observation in a rigorous way.
After the principal coach coaches a pilot group, the pilot group then goes on to coach others while they study the next stage of material.
To conclude, Jason gave us a live demo of the assessment technique, by launching Eclipse and writing a simple class using TDD live on the projector. The audience were provided with worksheets containing a list of the habits to observe, and instructed to note instances where he did not practice them.
Production deployments using all your team
After a brief introduction to the problems targeted by the devops approach, Julian offered some advice on how to do it right.
He began with the people issues, reminding us of Weinberg’s second law, which is “no matter what they tell you, it’s always a people problem”.
His people tips:
- In keeping with a recent trend, he criticized email as a severely flawed communication medium, best avoided.
- Respect everyone
- Have lunch with people on the other side of the wall
- Discuss your problems with other groups (don’t just ask for a specific solution)
- Invite everyone to stand-ups and retrospectives
- Co-locate the sysadmins and developers (Thomas Allen)
Next, a few process suggestions:
- Avoid code ownership generally (or rather, promote joint/collective ownership)
- Pair developers with sysadmins
- It’s done when the code is in production (I would rephrase as: it’s not done until the code is in production)
and then tools:
- Teach your sysadmins to use version control
- Help your developers write performant code
- Help developers with managing their dev environment
- Run your deploy scripts via continuous integration (leading toward continuous deployment)
- Use Puppet or Chef (useful as a form of documentation as well as deployment tools, and on developer workstations as well as servers)
- Integrate monitoring and continuous integration (test monitoring in the development environment)
- Deliver code as OS packages (e.g. RPM, DEB)
- Separate binaries and configuration
- Harden systems immediately and enable logging for tuning security configuration (i.e. configure developer workstations with real security, making the development environment closer to production)
- Give developers access to production logs and data
- Re-create the developer environment often (to clear out accumulated cruft)
I agreed with a lot of what was said, objected to some, and lacked clarity on a few points. I think this kind of material is well suited to a multi-way BOF style discussion rather than a presentation format, and would have liked more opportunity for discussion.
Getting distributed web services done with NoSQL
Lars and Fabrizio described the general “social network problem”, and how they went about solving it. This problem space involves the processing, aggregation and dissemination of notifications for a very high volume of events, as commonly manifested in social networking websites such as Facebook and Twitter, which connect people to each other to share updates. Apparently simple functionality, such as displaying the most recent updates from one’s “friends”, quickly becomes complex at scale.
As an example of the magnitude of the problem, they explained that they process 18 million events per day, and that in the course of storing and sharing these across the social graph, some operations peak as high as 150,000 per second. Such large and rapidly changing data sets represent a serious scaling challenge.
They originally built a monolithic, synchronous system called Phoenix, based on:
- LAMP frontends: Apache+PHP+APC (500 of them)
- Sharded MySQL multi-master databases (150 of them)
- memcache nodes with 1TB+ (60 of them)
They then added asynchronous services alongside this, to handle things like Twitter and mobile devices, using Java (Tomcat) and RabbitMQ. The web frontend would send out AMQP messages; the asynchronous services would pick them up and, where applicable, communicate back to Phoenix through an HTTP API call.
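To make the shape of this frontend/async split concrete, here is a minimal sketch using the Python AMQP client pika. Their production services were Java-based, and the exchange, queue and routing-key names below are my own inventions, not theirs:

```python
import json
import pika

def publish_event(event):
    """Web frontend: fire-and-forget an AMQP message."""
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.exchange_declare(exchange="activity", exchange_type="topic")
    channel.basic_publish(exchange="activity",
                          routing_key="user.update",
                          body=json.dumps(event))
    conn.close()

def run_async_service():
    """Asynchronous service: consume events as they arrive."""
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()
    channel.exchange_declare(exchange="activity", exchange_type="topic")
    channel.queue_declare(queue="twitter-bridge")
    channel.queue_bind(queue="twitter-bridge", exchange="activity",
                       routing_key="user.*")

    def handle(ch, method, properties, body):
        event = json.loads(body)
        # ... push to Twitter/mobile, call back into Phoenix over HTTP ...
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="twitter-bridge", on_message_callback=handle)
    channel.start_consuming()
```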
When the time came to re-architect their activity stream, they identified the following requirements:
- endless scalability
- storage- and cloud-independent
- flexible and extensible data model
This led them to an architecture based on:
- Nginx + Janitor
- Embedded Jetty + RESTeasy
- NoSQL storage backends (no fewer than three: Redis, Voldemort and Hazelcast)
They described this architecture in depth. The things which stood out for me were:
- They used different update strategies (push vs. pull) depending on the level of fan-out for the node (i.e. number of “friends”)
- They implemented a time-based activity filter which recorded a global timeline, from minutes out to days. Rather than traversing all of the user’s “friends” looking for events, they just scan the most recent events to see if their friends appear there (see the first sketch after this list)
- They created a distributed, scalable, concurrent ID generator based on Hazelcast, which uses distributed locking to assign ranges of IDs to nodes, so that each node can then quickly (locally) assign individual IDs (see the second sketch after this list)
- It’s interesting how many of the off-the-shelf components had native scaling, replication, and sharding features. This sort of thing is effectively standard equipment now.
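The activity filter struck me as a neat inversion, so here is a minimal sketch of the idea as I understood it (the data structure and limits are mine, not theirs): keep a bounded log of recent global events and intersect it with the viewer’s friend set, instead of querying every friend.

```python
# Sketch of the time-based activity filter: scan a bounded global event
# log for the viewer's friends, instead of querying each friend's feed.
from collections import deque

recent_events = deque(maxlen=100000)    # (timestamp, user_id, payload)

def record_event(timestamp, user_id, payload):
    recent_events.append((timestamp, user_id, payload))

def recent_updates_for(friend_ids, limit=20):
    """Newest-first updates from the viewer's friends."""
    updates = []
    for timestamp, user_id, payload in reversed(recent_events):
        if user_id in friend_ids:
            updates.append((timestamp, user_id, payload))
            if len(updates) >= limit:
                break
    return updates
```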
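And a sketch of the range-based ID generator. The coordinated step (in their case, a Hazelcast distributed lock around a shared counter) claims a block of IDs; everything after that is a cheap local increment. This Python version fakes the coordinated step with a pluggable function, and all names are mine:

```python
import threading

class RangeIdGenerator:
    """Assign unique IDs from locally claimed ranges.

    claim_range(n) stands in for the coordinated step: in the system
    described, a distributed lock guards a shared counter so that each
    node claims a disjoint block of n IDs. It must return the first ID
    of a fresh block.
    """

    def __init__(self, claim_range, block_size=1000):
        self._claim_range = claim_range
        self._block_size = block_size
        self._next = self._end = 0        # empty: forces an initial claim
        self._local = threading.Lock()    # node-local lock; the fast path

    def next_id(self):
        with self._local:
            if self._next >= self._end:   # block exhausted: coordinate once
                start = self._claim_range(self._block_size)
                self._next, self._end = start, start + self._block_size
            self._next += 1
            return self._next - 1
```

The coordination cost is paid once per block rather than once per ID, at the price of gaps in the sequence when a node restarts mid-block.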
Their list of lessons learned:
- Start benchmarking and profiling your app early
- A fast and easy deployment keeps motivation high
- Configure Voldemort carefully (especially on large heap machines)
- Read the mailing lists of the NoSQL system you use
- No solution in docs? – read the sources
- At some point stop discussing and just do it
Learnings from almost five years as a Skype Architect
Andres began with an overview of Skype, which serves roughly 800,000 registered users per employee (521 million users vs. 650 employees). Their core team is based in Estonia. Their main functionality is peer-to-peer, but they do need substantial server infrastructure (PHP, C, C++, PostgreSQL) for things like peer-to-peer supporting glue, e-commerce and SIP integration. Skype uses PostgreSQL heavily in some interesting ways, in a complex multi-tiered architecture of databases and proxies.
His first lesson was that technical rules of thumb can lead us astray. It is always tempting to use patterns that have worked for us previously, in a different project, team or company, but they may not be right for another context. They can and should be used as a starting point for discussion, but not presumed to be the solution.
Second, he emphasized the importance of paying attention to functional architecture, not only technical architecture. As an example, he showed how the Skype web store, which sells only four products (SkypeIn, SkypeOut, voicemail, and subscription bundles of the previous three), became incredibly complex, because no one was responsible for its functional design. Complex functional architecture leads to complex technical architecture, which is undesirable, as he noted in his next point.
Keep it simple: minimize functionality, and minimize complexity. He gave an example of how their queuing system’s performance and scalability were greatly enhanced by removing functionality (the guarantee to deliver messages exactly once), which enabled the simplification of the system.
He also shared some organizational learnings, which I appreciated. Maybe my filters are playing tricks on me, but it seems as if more and more discussion of software engineering is focusing on organizing people. I interpret this as a sign of growing maturity in the industry, which (as Andres noted) has its roots in a somewhat asocial culture.
He noted that architecture needs to fit your organization. Designs need to be measured primarily by how well they solve business problems, rather than by their beauty or elegance.
He stressed the importance of communication, a term which I think is becoming so overused and diluted in organizations that it is not very useful. It’s used to refer to everything from roles and responsibilities, to personal relationships, to cultural norming, and more. In the case of Skype, what Andres learned was the importance of organizing and empowering people to facilitate alignment, information flow and understanding between different parts of the business. Skype evolved an architecture team which interfaces between (multiple) business units and (multiple) engineering teams, helping each to understand the other and taking responsibility for the overall system design.
Overall, I thought the day’s talks gave me new insight into how Internet applications are being developed and deployed in the real world today. They affirmed some of what I’ve been wondering about, and gave me some new things to think about as well. I’m looking forward to tomorrow.
This will be the end of the series, as I’m leaving for the airport this afternoon.
Rusty Russell: FOSS fun with a Wiimote
Rusty told an entertaining story about his journey to produce geeky toys for his daughter, who is too young to use a keyboard or other standard human-computer interface. I always enjoy hearing about the intermediate steps of invention, and this was no exception. After five design iterations and several long distractions, Rusty produced a couple of working applications using Python and libcwiid, and demonstrated one of them.
Ariel Waldman: Space hacks
Ariel’s talk explained the (surprisingly numerous) ways in which geeks can get actively involved in advancing space science and exploration. With budgets of zero, hundreds, or thousands of USD, there are projects accessible to individuals and schools which offer not only fun and educational opportunities, but actually contribute something to the human study of outer space.
I didn’t note them down, so please watch the talk if you’re interested.
Andrew Tridgell: Patent defence for free software
I missed the start of this talk, but when I arrived, Andrew was explaining how to read and interpret patent claims. This is even less obvious than one might suppose. He offered advice on which parts to read first, and which could be disregarded or referred to only as needed.
Invalidating a patent entirely is difficult, but because patents are interpreted very narrowly, inventions can often be shown to be “different enough” from the patented one.
Where “workarounds” are found, which enable free software to interoperate or solve a problem in a different way than described in a patent, Andrew says it is important to publish them far and wide. This helps to discourage patent holders from attacking free software, because the discovery and publication of a workaround could lead to them losing all of their revenue from the patent (as their licensees could adopt that instead and stop paying for licenses).
Michael Koziarski: Lessons learned from a growing project
Michael, a member of the Rails core team, introduced himself as a pragmatist who is not interested in the principles of free software, only in working with the best tools he can find (many of which are actually proprietary). He gave an overview of what Rails is and where it came from, and a list of lessons he learned from its history.
Michael says that users make the best contributors, because they work to address user needs (which they understand first-hand). He contrasted this with developers who join the project to experiment with the latest technology or rewrite code without good reason. Therefore, in order to gain more contributors, it is important to market the project and attract more users.
He downplayed the conventional wisdom of “release early and often”, recommending a release early enough that there is plenty of incomplete work which contributors can help with, but not so early that the software is useless. In other words, release early, but not too early, and often, but not too often.
As is becoming thematic for the free software community, he recognized the necessity of dealing appropriately with people who do not advance the aims of the project. His example was people who don’t really want to use the software, because there is something about it they don’t like. Unless this one thing is changed, they say, it is of no interest to them. They may imply, or even state outright, that if the project changed in some way, they would join enthusiastically. Michael says that this is often untrue, and that even if they get the feature they want, they will not become valuable contributors. He also spoke of addressing trolls, not just the obvious ones, but more respectable-looking pundits as well.
Rails attracted many users early on because of its upstart status, and Michael pointed out that these people later left the project as it became more mainstream. The same was true of contributors, who left for other projects for their own reasons, to learn new things or explore a different direction.
Over time, the pool of willing volunteers in the Rails community grew much larger than the corresponding stack of “work to do”. Contributors became furious when their contributions were neglected; they threatened to fork the project and left the community. He stressed the importance of avoiding this scenario by tending to these contributors and their contributions.
He advised (mostly) ignoring your project’s competitors as a means of staying focused on the project’s core vision. In particular, he says that projects which define themselves in terms of their competition (“foo is like Rails, but…”) are not worth paying attention to.
He praised Rails’ use of a more permissive (non-copyleft) license, because it encouraged the growth of an ecosystem of hosting providers and tools. I didn’t quite follow his argument as to why this was.
Some of Michael’s lessons resonated with my experience of Ubuntu’s growth, while others did not. Regardless, it was useful to hear his perspective, and the differences may highlight the differing characters of the two projects.
Lindsay Holmwood: Flapjack
Lindsay introduced Flapjack, which is a monitoring system designed to meet the scalability and extensibility requirements of cloud deployments. It shares the Nagios plugin interface, and so can use the same checks. It uses beanstalkd as a central message queue to coordinate the work of executing checks, recording results and making the appropriate notifications. Each of its components (worker, notifier, database) can be extended or replaced using an API, providing a great deal of flexibility.
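Flapjack itself is written in Ruby, but the queue-centred design is easy to picture. Here is a rough Python sketch of a check-executing worker; the tube names and JSON job format are invented for illustration, not Flapjack’s actual wire format:

```python
import json
import subprocess
import beanstalkc

conn = beanstalkc.Connection(host="localhost", port=11300)
conn.watch("checks")    # incoming check jobs
conn.use("results")     # outgoing results, consumed by the notifier

while True:
    job = conn.reserve()
    check = json.loads(job.body)
    # Nagios plugin convention: exit status 0=OK, 1=WARNING, 2=CRITICAL
    status = subprocess.call(check["command"], shell=True)
    conn.put(json.dumps({"check_id": check["id"], "status": status}))
    job.delete()
```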
Jeremy Allison: Microsoft and Free Software
Jeremy took us through Microsoft’s recognition of, and response to, the threat of free software to their monopoly position. After reviewing the major legal battles of this ongoing war (and the metaphor is apt), he says that Microsoft is turning to patents in an attempt to split the free software community and to earn revenue from the use of free software. Jeremy predicts that the outcome will be a never-ending conflict.
The key conflicts are likely to be around netbooks, mobile phones and appliances. How should the free software community respond?
We could ignore it, and keep making free software under copyleft licenses. Jeremy points out that this is perhaps our most effective strategy in the long run, to stay focused on the vision of a free software world.
We can continue to pressure governments and corporations to adopt truly open standards, and to investigate and challenge monopolies. Transparency is key to these efforts, as “elephants like to work in the dark” (Microsoft being “the elephant in the room”).
By lobbying against software patents, we can hope to contain the US software patent system, keeping it from spreading to the rest of the world. Otherwise, the rapidly accumulating software patents in the US could suddenly and dramatically spread.
We might even convince the likes of Microsoft that patents, and patent trolls, represent a greater harm than good.
In response to direct patent attacks, we should search for prior art and attempt to undermine unjust patents. He also suggests calling out Microsoft employees on the company’s actions, to promote awareness particularly in the context of free software conferences.
He closed with a hope that Microsoft could change, citing IBM as having been “as feared and hated as Microsoft is today”.
Neil Brown: Digging for Design Patterns
Neil explored various design patterns in the Linux kernel in order to illustrate how they are discovered, what their important attributes are, and how to use them effectively.
His examples were a binary search, “goto err”, accessor functions and kref. Naming patterns is important, especially getting that name into the code itself, so that it helps to cross-reference use, implementation and documentation of the pattern (e.g. uses of the kref pattern are sprinkled with the word “kref”). A successful pattern can both help to find bugs (this binary search doesn’t look the same as that one…why?) and to avoid bugs (by getting it right the first time).
Benjamin Mako Hill: Antifeatures (keynote)
Mako delivered an entertaining and inspirational talk on antifeatures, those oddities which intentionally make technology less useful to its consumers (think DRM, though he provided a wide range of examples). Mako explained the main reasons why antifeatures exist, and how they are endemic to the business of proprietary software.
Mako offered a potential upside to antifeatures, which is that they can help the free software community to focus on fundamental concerns like autonomy, rather than (for example) the mechanics of licensing. Antifeatures can be used to explain to the uninitiated why software freedom is important to everyday folks, not just hackers.
Denise Paolucci and Mark Smith: Build Your Own Contributors, One Part At A Time
Denise and Mark provided a practical list of “dos” and “don’ts” for building a successful community based on respect, empowerment and collaboration. Much of this was elementary from an Ubuntu perspective, but they offered a variety of examples from Dreamwidth which were illustrative.
Their list of “three things to start right now”:
- Appoint a “welcomer” and laud newcomers’ first contributions
- Stop timing out on communication when people need responses from you
- Have words with “that person” and let them know their behavior is not okay
Chris Double: Implementing HTML5 video in Firefox
Chris recounted the history of implementing the HTML5 <video> element in Firefox. The initial implementation used the xiph.org reference libraries (libogg, libtheora, libvorbis) and PortAudio, but had some problems, including poor A/V synchronization. The second iteration used the higher-level libraries liboggz, libfishsound, liboggplay and libsydneyaudio, and was included in Firefox 3.1 alpha and beta, but limitations in liboggplay (A/V sync, chained Oggs, etc.) led to difficulty. There were also proof-of-concept implementations which used GStreamer on Linux, DirectShow on Windows, and QuickTime on Mac OS X, but these were hampered by codec plugin complications. In the end, they’ve gone back to using the xiph.org reference libraries (but with libsydneyaudio), though the GStreamer backend is still actively developed. Chris has published a series of articles on his blog on reading, decoding and synchronizing A/V streams using various libraries.
There are still some kinks to work out: the lack of indexes and the like in Ogg complicates seeking, calculating duration and so on, and there is no satisfactory solution for cross-platform audio. Rendering is not hardware accelerated yet because the video element is part of the HTML rendering pipeline.
It will be very powerful when it’s ready, though. Theora playback is supported in Chrome, Firefox and Opera today, and Daily Motion, Wikipedia and Archive.org are using it. I can’t wait to see the full API working well on a massive scale.
James Westby: Ubuntu Distributed Development
James gave a great overview of the Ubuntu distributed development project, which has the ambitious goal of providing a homogeneous view of all of the source code for Ubuntu packages using Launchpad and Bazaar. This includes modeling the relationships between the versions of the code in Debian and further upstream, which is complicated by the use of different revision control systems, patch systems, and so on.
At this stage, most of the source code for Ubuntu and Debian is available through the system, and developers can freely branch from it and request merges. It works the same way for all packages, so developers only need to learn one toolset and workflow. We hope that this will lower the barrier to entry for contributing to Ubuntu, as well as make it easier to share patches between Ubuntu, Debian and upstream.
Timo Hoenig: Extending the scope of mobile devices
Timo reviewed how mobile devices have evolved over the past 40 years, citing dramatic improvements in compute power, memory, bandwidth and so on, but comparatively small improvement in battery life (several orders of magnitude less). Thus, he sees power management and related technologies as important to the further advancement of the category. He specifically identified network links as a key consideration, as they consume a great deal of power, and have continued to do so with newer generations of technology. Local power management, he says, is not sufficient, and we need to take a network-aware view.
He introduced the concept of an “early bird” connector, which acts as a supervisor for a mobile device. It communicates with remote network nodes on its behalf, and takes decisions about when and whether to wake up the mobile device. He estimated a 12% power savings by offloading processing to such a device, using a simple model of power consumption. The early bird would run on another system on the network, presumably without the same power constraints (like a proxy server).
Sam Vilain: Linux Containers
Sam detailed the LXC implementation of containers for Linux. In contrast with vserver, it seems to offer a much simpler interface. Because of this, it has been comparatively straightforward to merge into the Linux kernel mainline.
LXC uses existing Linux kernel facilities to group processes within containers into control groups, which can then be used to control access and scheduling of resources (network, CPU, storage, etc.). Each resource type has a namespace similar in principle to what chroot() provides for filesystems. Since all of the hardware is visible to a single kernel, there can be a great deal of flexibility in how resources are allocated. For example, a given network device and CPU can be dedicated to a container.
Usefully for system administration and diagnostics, all of these resources can be directly accessed from the host without stopping or shutting down guests.
Julius Roberts: ZFS
This talk was a tour through the main features of the ZFS filesystem, showing how to work with storage pools and snapshots. It was useful to see the example commands and behavior in the context of a live command line session, as I haven’t got around to playing with it in an OpenSolaris VM yet.
Mark Atwood: memcached
I ended up in this talk in between other sessions I was attending. It outlined the sorts of places where memcached can be useful in applications. Beyond the obvious caching scenarios, Mark suggested using it to store an application’s working set (key/value dictionaries), user sessions and distributed rate limiting scoreboards.
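The distributed rate-limiting scoreboard is worth spelling out, since memcached’s atomic counters are all it takes. A minimal sketch using the python-memcached client (the key scheme and limits are my own):

```python
import time
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def allow_request(user_id, limit=100, window=60):
    """Allow at most `limit` requests per `window` seconds per user.

    The key names the current time window, and memcached's atomic
    add/incr let any number of frontends share one scoreboard.
    """
    key = "rate:%s:%d" % (user_id, int(time.time()) // window)
    if mc.add(key, 1, time=window):    # first request in this window
        return True
    count = mc.incr(key)
    return count is not None and count <= limit
```

Because every frontend increments the same key, the limit is enforced across the whole cluster without a dedicated rate-limiting service.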
Josh Berkus: Relational vs. Non-relational
I’ve followed the NoSQL movement with interest, and was curious to hear from Josh (of PostgreSQL Experts Inc.) what I expected would be a “relationalist” point of view.
He began by addressing some database myths. He stated that the “revolutionary new database designs” lauded by the non-relationalists are actually just new implementations of old ideas (e.g. CouchDB vs. Pick). He dismissed the “NoSQL” moniker as the wrong distinction to make: there are much more important and basic differences between database implementations than whether or not they use SQL. He explained that the relational model is orthogonal to support for ACID transactions, though I didn’t realize that was an active misconception. He also reminded us that we do not need to choose a single database for all of our possible needs, and should focus on choosing the system which fits our current application goals, or choosing multiple databases (e.g. MySQL and memcached, or [interestingly] PostgreSQL and CouchDB).
Relational OLTP databases tend to offer more mature implementations of transactions, the ability to enforce data constraints and consistency, support for complex reporting, and vertical (but not horizontal) scaling.
SQL itself doesn’t map very well to many programming languages, but promotes application portability and enables the management of schema changes over time. “No-SQL” provides a more natural mapping, and allows programmers to act as DBAs.
His main reason to use an SQL-RDBMS was when the data will outlive current application implementations, because having a looser coupling between the database and application helps support a migration to a new implementation.
He compared the applicability of embedded key-value, distributed key-value, flat file, object, and document databases but I wasn’t able to take notes quickly enough through this segment.
His conclusion was to reiterate that “relational vs. non-relational” is the wrong way to look at the problem, and that we should instead choose the right tool for the job.
Sometimes, when two talks I want to see are scheduled against each other like this, I try to attend the one whose material is less familiar to me (in this case, probably the SVG/Flash one). However, since the talks are being recorded and made available on the Internet, this changes the dynamic a bit: I don’t have to miss out on watching anything, as I can download it later. So, it makes more sense for me to go where I can best participate, taking advantage of my presence at the conference.
Distro summit: Package management
So, I chose to attend the package management talk, as I might have something to contribute. It was about how to harmonize general distribution packaging mechanisms (dpkg, RPM, etc.) with special-purpose ones like those used by Ruby (gems), Lua (rocks), Perl (CPAN modules) and so on. The solution described employed a set of wrapper scripts to provide a standard API to these systems, so that they could be used by the distribution package manager to resolve dependencies.
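I didn’t note the specifics of the wrapper scripts, but the shape of the idea is a thin adapter per packaging system behind one common interface. A hypothetical Python sketch (all names are mine):

```python
# Each language-specific packager sits behind one small, common
# interface which the distribution package manager can call to install
# dependencies it does not provide natively.
import subprocess

class PackageAdapter:
    def install(self, name, version=None):
        raise NotImplementedError

class GemAdapter(PackageAdapter):
    def install(self, name, version=None):
        cmd = ["gem", "install", name]
        if version:
            cmd += ["--version", version]
        subprocess.check_call(cmd)

class CpanAdapter(PackageAdapter):
    def install(self, name, version=None):
        subprocess.check_call(["cpan", "-i", name])

ADAPTERS = {"gem": GemAdapter(), "cpan": CpanAdapter()}

def install_foreign(system, name, version=None):
    ADAPTERS[system].install(name, version)
```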
Next up was Scott James Remnant’s talk on booting faster, but owing to travel difficulties, he hadn’t arrived yet. Instead, we had a free-form discussion on various other challenges in the area of package management.
I took the opportunity to put forward an idea I had been thinking about for some time, which is that we may need to move beyond a “one size fits all” or “everything is a package” approach to package management. Modern package management systems are very capable, and solve a range of difficult problems with a high degree of reliability. The cost of all of this functionality is complexity: making packages is difficult. The system can be made to work for a wide range of software, but the simple case is often not very simple.
I also made the point that there are non-technical challenges as well: it is difficult for developers and ISVs to understand the ecosystem of distributions, and even more difficult to succeed in making their software available in “the right way” to Linux users. The obstacles range from procedural to cultural, and if we only approach this as a technical problem, we risk adding more complexity and making the situation worse.
The opportunity to have this kind of participatory discussion really validated my decision about how to choose which talk to attend.
Liz Henry: Code of our Own
Back in the Haecksen/LinuxChix miniconf, Liz Henry presented an action plan for increasing the involvement of women in open source, with many well-considered “dos” and “don’ts” based on observations of what has and has not worked for open source communities.
It was the first opportunity I’ve had to attend a free software conference session which went beyond the typical “yes, this is important” and “yes, there really is a problem here” content which is unfortunately as necessary as it is commonplace.
I won’t attempt to summarize it here, but I can definitely recommend Liz’s presentation to anyone who is looking for concrete, actionable methods to promote gender diversity in their technical communities.
Lucas Nussbaum: The Relationship between Debian and Ubuntu
Historically, in Lucas’ assessment, many Debian developers have been unhappy with Ubuntu, feeling that it had “stolen” from Debian, and was not “giving back”. He said that bad experiences with certain people associated with Canonical and Ubuntu reflected on the project as a whole.
However, he says, things have improved considerably, and today, most Debian developers see some good points in Ubuntu: it brings new users to free software and Debian technology, it provides a system which “just works” for their (less technical) friends and family, and brings new developers to the Debian community.
There are still some obstacles, though. Lucas says that many bugfix patches in Ubuntu are just workarounds, and so are not very helpful to Debian. He gave the example of a patch which disabled the test suite for a package because of a failure, rather than fixing the failure.
He felt that Canonical offered few “free gifts” to Debian, citing as the only example the website banner on ubuntu.com which was displayed for Debian’s 15th anniversary. I felt this was a bit unfair, as Canonical has done more than this over the years, including sponsoring DebConf every year since Canonical was founded.
It occurred to me that the distinctions between Canonical and Ubuntu are still not clear, even within the core free software community. For example, the “main” package repository for Ubuntu is often seen to be associated exclusively with Canonical, while “universe” is the opposite. In reality, Canonical works on whatever is most important to the company, and volunteers do the same. These interests often overlap, particularly in “main” (where the most essential and popular components are).
Lucas speculated that more mission-critical servers run Debian pre-releases (especially testing) than Ubuntu pre-releases. It would be interesting to test this, as it’s rare to get sufficient real-world testing for server software prior to an Ubuntu release.
Lucas presented a wishlist for Ubuntu:
- more technical discussions between Ubuntu and Debian (particularly on the ubuntu-devel and debian-devel mailing lists)
- easier access to Launchpad data
- Launchpad PPAs for Debian
The prominence of Launchpad in these discussions spawned a number of tangential discussions about Launchpad, which were eventually deferred to tomorrow’s Launchpad mini-conf. One audience member asked whether Debian would ever adopt Launchpad. The answer from Lucas and Martin Krafft was that it would definitely not adopt the Canonical instance, but that a separate, interoperating instance might eventually be palatable.
I made the point that there is no single Debian/Ubuntu “relationship”, but a large number of distinct relationships between individuals and smaller groups. Instead of focusing on large-scale questions like infrastructure, I think there would be more mileage in working to build relationships between people around their common work.
The theme for my morning, on the first day of the conference, was version control. The conference day was divided into mini-confs covering different topic areas, but this was a common theme of the sessions I attended in different mini-confs.
After the introductory session (which included an amusing video about Wellington), I attended Emma Jane Hogbin’s talk “Version Control for Mere Mortals” which was an introduction to version control concepts and how to start using it.
The sun finally came out, so today has been warm and bright, a proper summer day. It’s too bad I’ll now be cooped up inside conference rooms for the rest of my stay here.
I stopped in briefly at another talk to learn something about Gearman, and then attended Martin Krafft’s talk on vcs-pkg.org, which is exploring the application of version control to the problem of package maintenance in Linux distributions. One of the hard problems in this space is that there are two alternative ways of modeling packages, both in active use and each with distinct advantages and disadvantages: a sequence of patches, and a graph (DAG) of revisions. Interoperation between systems which use different models is not straightforward.
I had lunch at Mac’s Brewery on the waterfront and talked mainly about version control, Bazaar in particular.
As I observed most recently on my flights to New Zealand for linux.conf.au, it seems that many of my fellow travelers are unaware of this simple rule:
When standing up from your seat, do not use the back of the seat in front of you as a handhold unless this is a physical necessity for you. This is very disturbing to the person sitting there, who may be trying to sleep. Instead, bring your own seat forward and use the armrests.
Yes, I’m talking to you, 61J.
That is all.