Posts Tagged ‘QConLondon2010’
QCon London 2010: Day 3
The tracks which interested me today were “How do you test that?”, which dealt with scenarios where testing (especially automation) is particularly challenging, and “Browser as a Platform”, which is self-explanatory.
Joe Walker: Introduction to Bespin, Mozilla’s Web Based Code Editor
I didn’t make it to this talk, but Bespin looks very interesting. It’s “a Mozilla Labs Experiment to build a code editor in a web browser that Open Web and Open Source developers could love”.
I experimented briefly with the Mozilla-hosted instance of Bespin. It seems mostly oriented toward web application development, and still isn’t nearly as nice as desktop editors. However, I think something like this, combined with Bazaar and Launchpad, could make small code changes in Ubuntu very fast and easy to do, like editing a wiki.
Doron Reuveni: The Mobile Testing Challenge
Why Mobile Apps Need Real-World Testing Coverage and How Crowdsourcing Can Help
Doron explained how the unique testing requirements of mobile handset applications are well suited to a crowdsourcing approach. As the founder of uTest, he described the company’s approach to connecting its customers (application vendors) with a global community of testers who own a variety of mobile devices. Customers evaluate the quality of the testers’ work, and this data is used to grade testers and select them for future testing efforts in a similar domain. The testers earn money for their efforts, based on test case coverage (starting at about $20 each), bug reports (starting at about $5 each), and so on. The highest performers earn thousands of dollars per month.
uTest also has a system, uTest Remote Access, which allows developers to “borrow” access to testers’ devices temporarily, for the purpose of reproducing bugs and verifying fixes. Doron gave us a live demo of the system, which (after verifying a code out of band through Skype) displayed a mockup of a BlackBerry device with the appropriate hardware buttons and a screenshot of what was displayed on the user’s screen. The updates were not quite real-time, but were sufficient for basic operation. He demonstrated taking a picture with the phone’s camera and seeing the photo within a few seconds.
Dylan Schiemann: Now What?
Dylan did a great job of extrapolating a future for web development based on the trend of the past 15 years. He began with a review of the origin of web technologies, which were focused on presentation and layout concerns, then on to JavaScript, CSS and DHTML. At this point, there was clear potential for rich applications, though there were many roadblocks: browser implementations were slow, buggy or nonexistent, security models were weak or missing, and rich web applications were generally difficult to engineer.
Things got better as more browsers came on the scene, with better implementations of CSS, DOM, XML, DHTML and so on. However, we’re still stuck supporting an ancient implementation in IE, a recurring refrain among web developers, for whom IE seems to be the bane of their work. Dylan added something I hadn’t heard before, though: Microsoft has stated that anti-trust restrictions were a major factor preventing this problem from being fixed.
Next, there was an explosion of innovation around Ajax and related toolkits, faster JavaScript implementations, infrastructure as a service, and rich web applications like GMail, Google Maps, Facebook, etc.
Dylan believes that web applications are what users and developers really want, and that desktop and mobile applications will fall by the wayside. App stores, he says, are a short term anomaly to avoid the complexities of paying many different parties for software and services. I’m not sure I agree on this point, but there are massive advantages to the web as an application platform for both parties. Web applications are:
- fast, easy and cheap to deploy to many users
- relatively affordable to build
- relatively easy to link together in useful ways
- increasingly remix-able via APIs and code reuse
There are tradeoffs, though. I have an article brewing on this topic which I hope to write up sometime in the next few weeks.
Dylan pointed out that different layers of the stack exhibit different rates of change: browsers are slowest, then plugins (such as Flex and SilverLight), then toolkits like Dojo, and finally applications which can update very quickly. Automatically updating browsers are accelerating this, and Chrome in particular values frequent updates. This is good news for web developers, as this seems to be one of the key constraints for rolling out new web technologies today.
Dylan feels that technological monocultures are unhealthy, and prefers to see a set of competing implementations converging on standards. He acknowledged that this is less true where the monoculture is based on free software, though this can still inhibit innovation somewhat if it leads to everyone working from the same point of view (by virtue of sharing a code base and design). He mentioned that de facto standardization can move fairly quickly; if 2-3 browsers implement something, it can start to be adopted by application developers.
Comparing the different economics associated with browsers, he pointed out that Mozilla’s revenue is dominated by search through the browser chrome (leaving less incentive to improve the rendering engine), Apple’s browser work is driven by hardware sales, and Google’s by advertising delivered through the browser. It’s a bit of a mystery why Microsoft continues to develop Internet Explorer.
Dylan summarized the key platform considerations for developers:
- choice and control
- taste (e.g. language preferences, what makes them most productive)
- performance and scalability
- security
and surmised that the best way to deliver these is through open web technologies, such as HTML 5, which now offers rich media functionality including audio, video, vector graphics and animations. He closed with a few flashy demos of HTML 5 applications showing what could be done.
QCon London 2010: Day 2
I was talk-hopping today, so none of these are complete summaries, just enough to capture my impressions from the time I was there. I may go back and watch the video for the ones which turned out to be most interesting.
Yesterday, I noticed a couple of practices employed by the QCon organizers which I want to record here, so that I can consider trying them out at Canonical and Ubuntu events:
- As participants leave each talk, they pass a basket with a red, a yellow and a green square attached to it. Next to the basket are three small stacks of colored paper, also red, yellow and green. There were no instructions, indeed no words at all, but the intent seemed clear enough: drop a card of the appropriate color in the basket to give feedback on the talk.
- The talks were spread across multiple floors in the conference center, which I find is usually awkward. They mitigated this somewhat by posting a directory of the rooms inside each lift.
Chris Read: The Cloud Silver Bullet
Which calibre is right for me?
Chris offered some familiar warnings about cloud technologies: that they won’t solve all problems, that effort must be invested to reap the benefits, and that no one tool or provider will meet all needs. He then classified various tools and services according to their suitability for long or short processing cycles, and high or low “data sensitivity”.
Simon Wardley: Situation Normal, Everything Must Change
I actually missed Simon’s talk this time, but I’ve seen him speak before and talk with him every week about cloud topics as a colleague at Canonical. I highly recommend his talks to anyone trying to make sense of cloud technology and decide how to respond to it.
In some of the talks yesterday, there was a murmur of anti-cloud sentiment, with speakers asserting that the term was not meaningful, that they didn’t know what it was, or that it was nothing new. Simon’s material is the perfect antidote to this attitude: he makes it very clear that there is a genuinely important and disruptive trend in progress, and explains what it is.
Jesper Boeg: Kanban
Crossing the line, pushing the limit or rediscovering the agile vision?
Jesper shared experiences and lessons learned with Kanban, and some of the problems it addresses which are present in other methodologies. His material was well balanced and insightful, and I’d like to go back and watch the full video when it becomes available.
Here again was a clear and pragmatic focus on matching tools and processes to the specific needs of the team, business and situation.
Ümit Yalcinalp: Development Model for the Cloud
Paradigm Shift or the Same Old Same Old?
Ümit focused on the PaaS (platform as a service) layer, and the experience offered to developers who build applications for these platforms. An evangelist from Salesforce.com, she framed the discussion as a comparison between force.com, Google App Engine and Microsoft Azure.
Eric Evans: Folding Design into an Agile Process
Eric tackled the question of how to approach the problem of design within the agile framework. As an outspoken advocate of domain-driven design, he presented his view in terms of this school and its terminology.
He emphasized the importance of modeling “when the critical complexity of the project is in understanding and communicating about the domain”. The “expected” approach to modeling is to incorporate an up-front analysis phase, but Eric argues that this is misguided. Because “models are distilled knowledge”, and teams are relatively ignorant at the start of a project, modeling in this way captures that ignorance and makes it persist.
Instead, he says, we should employ a “pull” approach (in the Lean sense), and decide to work on modeling when:
- communication with stakeholders deteriorates
- solutions are more complex than the problems
- velocity slows (because completed work becomes a burden)
Eric illustrated his points in part by showing video clips of engineers and business people engaged in dialog (here again, the focus on people rather than tools and process). He used this material as the basis for showing how models underlie these interactions, but are usually implicit. These dialogs were full of hints that the people involved were working from different models, and the software model needed to be revised. An explicit model can be a very powerful communication tool on software projects.
He outlined the process he uses for modeling, which is highly iterative and involves identifying business scenarios, using them to develop and evaluate abstract models, and testing those models by experimenting with code (“code probes”). Along the way, he emphasized the importance of making mistakes, not only as a learning tool but as a way to encourage the creative thinking which is essential to modeling work. To push the team to “think outside the box” and improve their conceptual model, he goes so far as to require that several “bad ideas” be proposed along the way, as a precondition for completing the process.
Eric is working on a white paper describing this process. A first draft is available on his website, and he is looking for feedback on it.
Modeling work, he suggested, can be incorporated into:
- a stand up meeting
- a spike
- an iteration zero
- release planning
He pointed out that not all parts of a system are created equal, and some of them should be prioritized for modeling work:
- areas of the system which seem to require frequent change across projects/features/etc.
- strategically important development efforts
- user experiences which are losing coherence
This was a very compelling talk, whose concepts were clearly applicable beyond the specific problem domain of agile development.
QCon London 2010: Day 1
For the first time in several years, I had the opportunity to attend a software conference in the city where I lived at the time. I’ve benefited from many InfoQ articles in the past couple of years, and watched recordings of some excellent talks from previous QCon events, so I jumped at the opportunity to attend QCon London 2010. It is being held in the Queen Elizabeth II Conference Center, conveniently located a short walk away from Canonical’s London office.
Whenever I attend conferences, I can’t help taking note of which operating systems are in use, and this tells me something about the audience. I was surprised to notice that in addition to the expected Mac and Windows presence, there was a substantial Ubuntu contingent and some Fedora as well.
Today’s tracks included two of particular interest to me at the moment: “Dev and Ops: A single team” and the unfortunately gendered “Software Craftsmanship”.
Jason Gorman: Beyond Masters and Apprentices
A Scalable, Peer-led Model For Building Good Habits In Large & Diverse Development Teams
Jason explained the method he uses to coach software developers.
I got a bad seat on the left side of the auditorium, where it was hard to see the slides because they were blocked by the lectern, so I may have missed a few points.
He began by outlining some of the primary factors which make software more difficult to change over time:
- Readability: developers spend a lot of their time trying to understand code that they (or someone else) have written
- Complexity: as well as making code more difficult to understand, complexity increases the chance of errors. More complex code can fail in more ways.
- Duplication: when code is duplicated, it’s more difficult to change because we need to keep track of the copies and often change them all
- Dependencies and the “ripple effect”: highly interdependent code is more difficult to change, because a change in one place requires corresponding changes elsewhere
- Regression Test Assurance: I didn’t quite follow how this fit into the list, to be honest. Regression tests are supposed to make it easier to change the code, because errors can be caught more easily.
He then outlined the fundamental principles of his method:
- Focus on Learning over Teaching – a motivated learner will find their own way, so focus on enabling them to pull the lesson rather than pushing it to them (“there is a big difference between knowing how to do something and being able to do it”)
- Focus on Ability over Knowledge – learn by doing, and evaluate progress through practice as well (“how do you know when a juggler can juggle?”)
…and went on to outline the process from start to finish:
- Orientation, where peers agree on good habits related to the subject being learned. The goal seemed to be to draw out knowledge from the group, allowing them to define their own school of thought with regard to how the work should be done. In other words, learn to do what they know, rather than trying to inject knowledge.
- Practice programming, trying to exercise these habits and learn “the right way to do it”
- Evaluation through peer review, where team members pair up and observe each other. Over the course of 40-60 hours, they watch each other program and check off where they are observed practicing the habits.
- Assessment, where learners complete a time-boxed programming exercise, which is recorded. The focus is on methodical correctness, not speed of progress. Observers watch the recording (which shows only the code), and note instances where a habit was not practiced. The assessment is passed only if fewer than three lapses are noticed.
- Recognition, which comes through a certificate issued by the coach, but also through admission to a networking group on LinkedIn, promoting peer recognition
Jason noted that this method of assessing was good practice in itself, helping learners to practice pairing and observation in a rigorous way.
After the principal coach coaches a pilot group, that group goes on to coach others while studying the next stage of material.
To conclude, Jason gave us a live demo of the assessment technique, by launching Eclipse and writing a simple class using TDD live on the projector. The audience were provided with worksheets containing a list of the habits to observe, and instructed to note instances where he did not practice them.
Julian Simpson: Siloes are for farmers
Production deployments using all your team
After a brief introduction to the problems targeted by the devops approach, Julian offered some advice on how to do it right.
He began with the people issues, reminding us of Weinberg’s second law, which is “no matter what they tell you, it’s always a people problem”.
His people tips:
- In keeping with a recent trend, he criticized email as a severely flawed communication medium, best avoided.
- Respect everyone
- Have lunch with people on the other side of the wall
- Discuss your problems with other groups (don’t just ask for a specific solution)
- Invite everyone to stand-ups and retrospectives
- Co-locate the sysadmins and developers (Thomas Allen)
Next, a few process suggestions:
- Avoid code ownership generally (or rather, promote joint/collective ownership)
- Pair developers with sysadmins
- It’s done when the code is in production (I would rephrase as: it’s not done until the code is in production)
and then tools:
- Teach your sysadmins to use version control
- Help your developers write performant code
- Help developers with managing their dev environment
- Run your deploy scripts via continuous integration, leading toward continuous deployment (a minimal sketch of such a script follows this list)
- Use Puppet or Chef (useful as a form of documentation as well as deployment tools, and on developer workstations as well as servers)
- Integrate monitoring and continuous integration (test monitoring in the development environment)
- Deliver code as OS packages (e.g. RPM, DEB)
- Separate binaries and configuration
- Harden systems immediately and enable logging for tuning security configuration (i.e. configure developer workstations with real security, making the development environment closer to production)
- Give developers access to production logs and data
- Re-create the developer environment often (to clear out accumulated cruft)
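Several of these tool suggestions fit together: if the application is delivered as an OS package and its configuration is packaged separately, the deploy script that the CI server runs can stay very small. Julian didn’t show code, so the following is only a hypothetical sketch with invented package and service names:

```python
#!/usr/bin/env python
"""Minimal deploy script of the kind a CI job might call after a green build."""
import subprocess
import sys

PACKAGE = "myapp"                     # hypothetical application package (binaries only)
CONFIG_PACKAGE = "myapp-config-prod"  # hypothetical package holding the production configuration
SERVICE = "myapp"                     # hypothetical OS-managed service


def run(*cmd):
    """Run a command, echoing it so the CI log shows exactly what happened."""
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)


def deploy(version):
    # Binaries and configuration are separate packages, so the same application
    # artifact can be promoted unchanged from staging to production.
    run("sudo", "apt-get", "update")
    run("sudo", "apt-get", "install", "-y", "%s=%s" % (PACKAGE, version))
    run("sudo", "apt-get", "install", "-y", CONFIG_PACKAGE)
    run("sudo", "service", SERVICE, "restart")


if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: deploy.py <package-version>")
    deploy(sys.argv[1])
```

Because the script is just another versioned artifact, the CI server can run it against a staging environment on every build, which is the stepping stone toward continuous deployment.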
I agreed with a lot of what was said, objected to some, and lacked clarity on a few points. I think this kind of material is well suited to a multi-way BOF style discussion rather than a presentation format, and would have liked more opportunity for discussion.
Lars George and Fabrizio Schmidt: Social networks and the Richness of Data
Getting distributed webservices done with Nosql
Lars and Fabrizio described the general “social network problem”, and how they went about solving it. This problem space involves the processing, aggregation and dissemination of notifications for a very high volume of events, as commonly seen in social networking websites such as Facebook and Twitter, which connect people to each other to share updates. Apparently simple functionality, such as displaying the most recent updates from one’s “friends”, quickly becomes complex at scale.
As an example of the magnitude of the problem, they explained that the system processes 18 million events per day, and that in the course of storing and sharing these across the social graph, some operations peak as high as 150,000 per second. Such large and rapidly changing data sets represent a serious scaling challenge.
They originally built a monolithic, synchronous system called Phoenix, based on:
- LAMP frontends: Apache+PHP+APC (500 of them)
- Sharded MySQL multi-master databases (150 of them)
- memcache nodes with 1TB+ (60 of them)
They then added asynchronous services alongside this, to handle things like Twitter and mobile devices, using Java (Tomcat) and RabbitMQ. The web frontend would send out AMQP messages, which would be picked up by the asynchronous services, which would (where applicable) communicate back to Phoenix through an HTTP API call.
When the time came to re-architect their activity system, they identified the following requirements:
- endless scalability
- storage- and cloud-independent
- fast
- flexible and extensible data model
This led them to an architecture based on:
- Nginx + Janitor
- Embedded Jetty + RESTeasy
- NoSQL storage backends (no fewer than three: Redis, Voldemort and Hazelcast)
They described this architecture in depth. The things which stood out for me were:
- They used different update strategies (push vs. pull) depending on the level of fan-out for the node (i.e. number of “friends”)
- They implemented a time-based activity filter which recorded a global timeline, from minutes out to days. Rather than traversing all of the user’s “friends” looking for events, they just scan the most recent events to see if their friends appear there.
- They created a distributed, scalable, concurrent ID generator based on Hazelcast, which uses distributed locking to assign ID ranges to nodes, so that each node can then quickly (locally) assign individual IDs (see the sketch after this list)
- It’s interesting how many of the off-the-shelf components had native scaling, replication, and sharding features. This sort of thing is effectively standard equipment now.
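The ID generator was the easiest of these to picture in code. The sketch below illustrates only the range-leasing idea, using a process-local lock as a stand-in for Hazelcast’s distributed locking; it is an assumption-laden illustration, not their implementation:

```python
import threading


class RangeAllocator(object):
    """Hands out contiguous ID ranges under a lock.

    In the architecture described in the talk this coordination would be
    distributed (Hazelcast); here a process-local lock stands in for it.
    """

    def __init__(self, block_size=1000):
        self._lock = threading.Lock()  # stand-in for a distributed lock
        self._next = 0
        self._block_size = block_size

    def lease_range(self):
        with self._lock:
            start = self._next
            self._next += self._block_size
        return start, start + self._block_size


class NodeIdGenerator(object):
    """Each node leases a range, then assigns IDs locally without coordination."""

    def __init__(self, allocator):
        self._allocator = allocator
        self._current, self._end = allocator.lease_range()

    def next_id(self):
        if self._current >= self._end:
            # Range exhausted: take the (rare) coordination hit and lease another.
            self._current, self._end = self._allocator.lease_range()
        value = self._current
        self._current += 1
        return value


if __name__ == "__main__":
    allocator = RangeAllocator(block_size=5)
    node = NodeIdGenerator(allocator)
    print([node.next_id() for _ in range(12)])  # 0 through 11, leasing ranges as needed
```

A node only touches the shared lock once per block of IDs, so the common case (incrementing a local counter) involves no remote coordination at all.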
Their list of lessons learned:
- Start benchmarking and profiling your app early
- A fast and easy deployment keeps motivation high
- Configure Voldemort carefully (especially on large heap machines)
- Read the mailing lists of the NoSQL system you use
- No solution in docs? – read the sources
- At some point stop discussing and just do it
Andres Kitt: Building Skype
Learnings from almost five years as a Skype Architect
Andres began with an overview of Skype, which serves roughly 800,000 registered users per employee (521 million users vs. 650 employees). Their core team is based in Estonia. The main functionality is peer-to-peer, but they do need substantial server infrastructure (PHP, C, C++, PostgreSQL) for things like peer-to-peer supporting glue, e-commerce and SIP integration. Skype uses PostgreSQL heavily in some interesting ways, in a complex multi-tiered architecture of databases and proxies.
His first lesson was that technical rules of thumb can lead us astray. It is always tempting to use patterns that have worked for us previously, in a different project, team or company, but they may not be right for another context. They can and should be used as a starting point for discussion, but not presumed to be the solution.
Second, he emphasized the importance of paying attention to functional architecture, not only technical architecture. As an example, he showed how the Skype web store, which sells only four products (SkypeIn, SkypeOut, voicemail, and subscription bundles of the previous three), became incredibly complex because no one was responsible for its functional design. Complex functional architecture leads to complex technical architecture, which is undesirable, as he noted in his next point.
Keep it simple: minimize functionality, and minimize complexity. He gave an example of how their queuing system’s performance and scalability were greatly enhanced by removing functionality (the guarantee to deliver messages exactly once), which enabled the simplification of the system.
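The talk didn’t go into detail on how the queue behaves without that guarantee, but a common way to live with at-least-once delivery is to make consumers idempotent, so that duplicate deliveries are harmless. The following is a minimal, hypothetical sketch of that idea, not Skype’s actual design:

```python
class IdempotentConsumer(object):
    """Consumes messages from a queue that may deliver the same message more than once.

    Duplicates are made harmless by remembering which message IDs have already
    been applied; in production this record would live in durable storage.
    """

    def __init__(self):
        self._seen = set()

    def handle(self, message_id, payload):
        if message_id in self._seen:
            return  # duplicate delivery: safe to ignore
        self._apply(payload)
        self._seen.add(message_id)

    def _apply(self, payload):
        print("processing", payload)


if __name__ == "__main__":
    consumer = IdempotentConsumer()
    consumer.handle(1, "credit account")
    consumer.handle(1, "credit account")  # redelivered by the broker; applied only once
```

Once duplicates are cheap to tolerate, the queue itself no longer has to track and guarantee exactly-once delivery, which is the kind of simplification Andres described.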
He also shared some organizational learnings, which I appreciated. Maybe my filters are playing tricks on me, but it seems as if more and more discussion of software engineering is focusing on organizing people. I interpret this as a sign of growing maturity in the industry, which (as Andres noted) has its roots in a somewhat asocial culture.
He noted that architecture needs to fit your organization. Designs need to be measured primarily by how well they solve business problems, rather than by beauty or elegance.
He stressed the importance of communication, a term which I think is becoming so overused and diluted in organizations that it is not very useful. It’s used to refer to everything from roles and responsibilities, to personal relationships, to cultural norming, and more. In the case of Skype, what Andres learned was the importance of organizing and empowering people to facilitate alignment, information flow and understanding between different parts of the business. Skype evolved an architecture team which interfaces between (multiple) business units and (multiple) engineering teams, helping each to understand the other and taking responsibility for the overall system design.
Conclusion
Overall, I thought the day’s talks gave me new insight into how Internet applications are being developed and deployed in the real world today. They affirmed some of what I’ve been wondering about, and gave me some new things to think about as well. I’m looking forward to tomorrow.