We'll see | Matt Zimmerman

a potpourri of mirth and madness


DevOps and Cloud

DevOps

I first heard about DevOps from Lindsay Holmwood at linux.conf.au 2010. Since then, I’ve been following the movement with interest. It seems to be about cross-functional involvement in software teams, specifically between software development and system administration (or operations). In many organizations, especially SaaS shops, these two groups are placed in opposition to each other: developers are driven to deliver new features to users, while system administrators are held accountable for the operation of the service. In the best case, they maintain a healthy balance by pushing in opposite directions, but more typically, they resent each other for getting in the way, as a result of this dichotomy:

                         Development                 Operations
is responsible for…      creating products           offering services
is measured on…          delivery of new features    high reliability
optimizes by…            increasing velocity         controlling change
and so is perceived as…  reckless and irresponsible  obstructing progress

Of course, both functions are essential to a viable service, and so DevOps aims to replace this opposition with cooperation. By removing this friction from the organization, we hope to improve efficiency, lower costs, and generally get more work done.


So, DevOps promotes the formation of cross-functional teams, where individuals still take on specialist “development” or “operations” roles, but work together toward the common goal of delivering a great experience to users. By working as teammates, rather than passing work “over the wall”, they can both contribute to development, deployment and maintenance according to their skills and expertise. The team becomes a “devops” team, and is responsible for the entire product life cycle. Particular tasks may be handled by specialists, but when there’s a problem, it’s the team’s problem.

Some take it a step further, and feel that what’s needed is to combine the two disciplines, so that individuals contribute in both ways. Rather than thinking of themselves as “developers” or “sysadmins”, these folks consider themselves “devops”. They work to become proficient in both roles, and to synthesize new ways of working by drawing on both types of skills and experience. A common crossover activity is the development of sophisticated tools for automating deployment, monitoring, capacity management and failure resolution.

DevOps meets Cloud

Like DevOps, cloud is not a specific technology or method, but a reorganization of the model (as I’ve written previously). It’s about breaking down the problem in a different way, splitting and merging its parts, and creating a new representation which doesn’t correspond piece-for-piece to the old one.

DevOps drives cloud adoption because cloud offers a richer toolkit for the way DevOps teams work: fast, flexible, efficient. Tools like Amazon EC2 and Google App Engine solve the right sorts of problems. Cloud, in turn, drives DevOps because it calls into question the traditional way of organizing software teams: a development/operations division just doesn’t “fit” cloud as well as a DevOps model does.

Deployment is a classic duty of system administrators. In many organizations, only the IT department can implement changes in the production environment. Reaping the benefits of an IaaS environment requires deploying through an API, and therefore deployment requires development. While it is already common practice for system administrators to develop tools for automating deployment, and tools like Puppet and Chef are gaining momentum, IaaS makes this a necessity, and raises the bar in terms of sophistication. Doing this well requires skills and knowledge from both sides of the “fence” between development and operations, and can accelerate development as well as promote stability in production.
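To make “deployment requires development” concrete, here’s a minimal sketch of deploying through an IaaS API. It uses boto3, a current Python SDK for EC2 (which postdates this post); the AMI ID and bootstrap script are hypothetical stand-ins:

    import boto3  # AWS SDK for Python; assumes credentials are configured

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Launching a production server becomes an API call -- that is, code.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical machine image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        # Hypothetical bootstrap script, run once at first boot:
        UserData="#!/bin/bash\napt-get update && apt-get install -y nginx\n",
    )
    instances[0].wait_until_running()
    print("deployed:", instances[0].id)

Once deployment looks like this, it can be versioned, reviewed and tested like any other code, which is exactly where development and operations skills meet.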


This is exemplified by infrastructure service providers like Amazon Web Services, where customers pay by the hour for “black box” access to computing resources. How those resources are provisioned and maintained is entirely Amazon’s problem, while its customers must decide how to deploy and manage their applications within Amazon’s IaaS framework. In this scenario, some operations work has been explicitly outsourced to Amazon, but IaaS is not a substitute for system administration. Deployment, monitoring, failure recovery, performance management, OS maintenance, system configuration, and more are still needed. A development team which is lacking the experience or capacity for this type of work cannot simply “switch” to an IaaS model and expect these needs to be taken care of by their service provider.


With platform service providers, the boundaries are different. Developers who build their application on an appropriate platform can effectively outsource most of the management of the production environment to their service provider. The operating system is abstracted away, and its maintenance can be someone else’s problem. For applications which can be built with the available facilities, this will be a very attractive option for many organizations. The customers of these services may be traditional developers, who have no need for operations expertise. PaaS providers, though, will require deep expertise in both disciplines in order to build and improve their platform and services, and will likely benefit from a DevOps approach.

Technical architecture draws on both development and operations expertise, because design goals like performance and robustness are affected by all layers of the stack, from hardware, power and cooling all the way up to application code. DevOps itself promotes greater collaboration on architecture, by involving experts in both disciplines, but cloud is a great catalyst because cloud architecture can be described in code. Rather than talking to each other about their respective parts of the system, they can work together on the whole system at once. Developers, sysadmins and hybrids can all contribute to a unified source tree, containing both application code and a description of the production environment: how many virtual servers to deploy, their specifications, which components run on which servers, how they are configured, and so on. In this way, system and network architecture can evolve in lockstep with application architecture.
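As a sketch of what such a unified source tree might contain, here is a hypothetical environment description in Python. The structure and names are illustrative, not any particular tool’s format:

    # infrastructure.py -- lives in the same tree as the application code,
    # so it is branched, reviewed and versioned along with it.

    PRODUCTION = {
        "web": {"count": 4, "instance_type": "t3.medium",
                "components": ["nginx", "app-frontend"]},
        "db":  {"count": 2, "instance_type": "m5.large",
                "components": ["postgresql"]},
    }

    def provision(environment):
        """Walk the description and create each server via a cloud API."""
        for role, spec in environment.items():
            for i in range(spec["count"]):
                # A real implementation would call the IaaS API here,
                # e.g. something like ec2.create_instances(...).
                print(f"launch {role}-{i}: {spec['instance_type']}, "
                      f"running {', '.join(spec['components'])}")

    if __name__ == "__main__":
        provision(PRODUCTION)

A change to the system architecture then shows up as a diff, right next to the application change that motivated it.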

Cloudy promises such as dynamic scaling and fault tolerance call for a DevOps approach in order to be realized in a real-world scenario. These systems involve dynamically manipulating production infrastructure in response to changing conditions, and the application must adapt to these changes. Whether this takes the form of an active, intelligent response or a passive crash-only approach, development and operational considerations need to be aligned.
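For illustration, dynamic scaling boils down to a control loop like this hypothetical sketch; get_average_load, get_fleet_size and set_fleet_size stand in for whatever monitoring and IaaS APIs are actually in use:

    import time

    MIN_SERVERS, MAX_SERVERS = 2, 20

    def autoscale(get_average_load, get_fleet_size, set_fleet_size):
        while True:
            load = get_average_load()      # e.g. CPU utilization, 0.0 to 1.0
            size = get_fleet_size()
            if load > 0.75 and size < MAX_SERVERS:
                set_fleet_size(size + 1)   # scale out under pressure
            elif load < 0.25 and size > MIN_SERVERS:
                set_fleet_size(size - 1)   # scale in when idle
            time.sleep(60)                 # one decision per minute

The loop itself is trivial; the DevOps work is in making the application tolerate servers appearing and disappearing underneath it.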

So what?

DevOps and cloud will continue to reinforce each other and gain momentum. Both individuals and organizations will need to adapt in order to take advantage of the opportunities provided by these new models. Because they’re complementary, it makes sense to adopt them together, so those with expertise in both will be at an advantage.


Written by Matt Zimmerman

June 8, 2010 at 10:28

QCon London 2010: Day 1

For the first time in several years, I had the opportunity to attend a software conference in the city where I lived at the time. I’ve benefited from many InfoQ articles in the past couple of years, and watched recordings of some excellent talks from previous QCon events, so I jumped at the opportunity to attend QCon London 2010. It is being held in the Queen Elizabeth II Conference Center, conveniently located a short walk away from Canonical’s London office.

Whenever I attend conferences, I can’t help taking note of which operating systems are in use, and this tells me something about the audience. I was surprised to notice that in addition to the expected Mac and Windows presence, there was a substantial Ubuntu contingent and some Fedora as well.

Today’s tracks included two of particular interest to me at the moment: “Dev and Ops: A single team” and the unfortunately gendered “Software Craftsmanship”.

Jason Gorman: Beyond Masters and Apprentices

A Scalable, Peer-led Model For Building Good Habits In Large & Diverse Development Teams

Jason explained the method he uses to coach software developers. I got a bad seat on the left side of the auditorium, where it was hard to see the slides because they were blocked by the lectern, so I may have missed a few points.

He began by outlining some of the primary factors which make software more difficult to change over time:

  • Readability: developers spend a lot of their time trying to understand code that they (or someone else) have written
  • Complexity: as well as making code more difficult to understand, complexity increases the chance of errors. More complex code can fail in more ways.
  • Duplication: when code is duplicated, it’s more difficult to change because we need to keep track of the copies and often change them all
  • Dependencies and the “ripple effect”: highly interdependent code is more difficult to change, because a change in one place requires corresponding changes elsewhere
  • Regression Test Assurance: I didn’t quite follow how this fit into the list, to be honest. Regression tests are supposed to make it easier to change the code, because errors can be caught more easily.

He then outlined the fundamental principles of his method:

  • Focus on Learning over Teaching – a motivated learner will find their own way, so focus on enabling them to pull the lesson rather than pushing it to them (“there is a big difference between knowing how to do something and being able to do it”)
  • Focus on Ability over Knowledge – learn by doing, and evaluate progress through practice as well (“how do you know when a juggler can juggle?”)

…and went on to outline the process from start to finish:

  1. Orientation, where peers agree on good habits related to the subject being learned. The goal seemed to be to draw out knowledge from the group, allowing them to define their own school of thought with regard to how the work should be done. In other words, learn to do what they know, rather than trying to inject knowledge.
  2. Practice programming, trying to exercise these habits and learn “the right way to do it”
  3. Evaluation through peer review, where team members pair up and observe each other. Over the course of 40-60 hours, they watch each other program and check off where they are observed practicing the habits.
  4. Assessment, where learners complete a time-boxed programming exercise, which is recorded. The focus is on methodical correctness, not speed of progress. Observers watch the recording (which only displays the code), and note instances where the habit was not practiced. The assessment is passed only if fewer than three errors are noticed.
  5. Recognition, which comes through a certificate issued by the coach, but also through admission to a networking group on LinkedIn, promoting peer recognition

Jason noted that this method of assessing was good practice in itself, helping learners to practice pairing and observation in a rigorous way.

After the principal coach coaches a pilot group, the pilot group then goes on to coach others while they study the next stage of material.

To conclude, Jason gave us a live demo of the assessment technique, by launching Eclipse and writing a simple class using TDD live on the projector. The audience were provided with worksheets containing a list of the habits to observe, and instructed to note instances where he did not practice them.

Julian Simpson: Siloes are for farmers

Production deployments using all your team

After a brief introduction to the problems targeted by the devops approach, Julian offered some advice on how to do it right.

He began with the people issues, reminding us of Weinberg’s second law, which is “no matter what they tell you, it’s always a people problem”.

His people tips:

  • In keeping with a recent trend, he criticized email as a severely flawed communication medium, best avoided
  • Respect everyone
  • Have lunch with people on the other side of the wall
  • Discuss your problems with other groups (don’t just ask for a specific solution)
  • Invite everyone to stand-ups and retrospectives
  • Co-locate the sysadmins and developers (presumably a reference to Thomas Allen’s research on distance and communication)

Next, a few process suggestions:

  • Avoid code ownership generally (or rather, promote joint/collective ownership)
  • Pair developers with sysadmins
  • It’s done when the code is in production (I would rephrase as: it’s not done until the code is in production)

and then tools:

  • Teach your sysadmins to use version control
  • Help your developers write performant code
  • Help developers with managing their dev environment
  • Run your deploy scripts via continuous integration (leading toward continuous deployment)
  • Use Puppet or Chef (useful as a form of documentation as well as deployment tools, and on developer workstations as well as servers)
  • Integrate monitoring and continuous integration (test monitoring in the development environment)
  • Deliver code as OS packages (e.g. RPM, DEB)
  • Separate binaries and configuration (see the sketch after this list)
  • Harden systems immediately and enable logging for tuning security configuration (i.e. configure developer workstations with real security, making the development environment closer to production)
  • Give developers access to production logs and data
  • Re-create the developer environment often (to clear out accumulated cruft)
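On the “separate binaries and configuration” point, here is a minimal sketch in Python of one common approach: the same packaged application reads environment-specific settings at startup instead of shipping them inside the package (the variable names are hypothetical):

    import os

    def load_config():
        # Environment-specific values come from the deployment environment,
        # not from the package; the defaults suit a developer workstation.
        return {
            "db_host": os.environ.get("APP_DB_HOST", "localhost"),
            "db_name": os.environ.get("APP_DB_NAME", "app"),
            "log_level": os.environ.get("APP_LOG_LEVEL", "DEBUG"),
        }

    if __name__ == "__main__":
        config = load_config()
        print("connecting to", config["db_name"], "on", config["db_host"])

The same DEB or RPM can then be promoted unchanged from development to staging to production, with only the configuration varying.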

I agreed with a lot of what was said, objected to some, and lacked clarity on a few points. I think this kind of material is well suited to a multi-way BOF style discussion rather than a presentation format, and would have liked more opportunity for discussion.

Lars George and Fabrizio Schmidt: Social networks and the Richness of Data

Getting distributed webservices done with NoSQL

Lars and Fabrizio described the general “social network problem”, and how they went about solving it. This problem space involves the processing, aggregation and dissemination of notifications for a very high volume of events, as commonly manifested in social networking websites such as Facebook and Twitter, which connect people to each other to share updates. Apparently simple functionality, such as displaying the most recent updates from one’s “friends”, quickly becomes complex at scale.

As an example of the magnitude of the problem, they explained that the system processes 18 million events per day, and that in the course of storing and sharing these across the social graph, some operations peak as high as 150,000 per second (18 million per day averages only a couple of hundred per second, so the peaks presumably reflect the fan-out of delivering each event across the graph). Such large and rapidly changing data sets represent a serious scaling challenge.

They originally built a monolithic, synchronous system called Phoenix, based on:

  • LAMP frontends: Apache+PHP+APC (500 of them)
  • Sharded MySQL multi-master databases (150 of them)
  • memcache nodes with 1TB+ (60 of them)

They then added asynchronous services alongside this to handle things like Twitter integration and mobile devices, using Java (Tomcat) and RabbitMQ. The web frontend would send out AMQP messages, which would then be picked up by the asynchronous services, which would (where applicable) communicate back to Phoenix through an HTTP API call.
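As a rough sketch of that pattern, here is what publishing such a message might look like in Python with the pika client for RabbitMQ; the queue name and payload are hypothetical:

    import pika  # assumes a RabbitMQ broker reachable on localhost

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="activity-events", durable=True)

    # Fire-and-forget: the frontend publishes and returns to the user;
    # an asynchronous service consumes the message later, calling back
    # into Phoenix over HTTP where needed.
    channel.basic_publish(
        exchange="",
        routing_key="activity-events",
        body='{"event": "status_update", "user": 42}',
    )
    connection.close()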

When the time came to re-architect their activity system, they identified the following requirements:

  • endless scalability
  • storage- and cloud-independent
  • fast
  • flexible and extensible data model

This led them to an architecture based on:

  • Nginx + Janitor
  • Embedded Jetty + RESTeasy
  • NoSQL storage backends (no fewer than three: Redis, Voldemort and Hazelcast)

They described this architecture in depth. The things which stood out for me were:

  • They used different update strategies (push vs. pull) depending on the level of fan-out for the node (i.e. number of “friends”)
  • They implemented a time-based activity filter which recorded a global timeline, from minutes out to days. Rather than traversing all of the user’s “friends” looking for events, they just scan the most recent events to see if their friends appear there.
  • They created a distributed, scalable concurrent ID generator based on Hazelcast, which uses distributed locking to assign ranges to nodes, so that nodes can then quickly (locally) assign individual IDs; see the sketch after this list
  • It’s interesting how many of the off-the-shelf components had native scaling, replication, and sharding features. This sort of thing is effectively standard equipment now.
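The ID generator lends itself to a sketch. This hypothetical Python version captures the idea: a cluster-wide coordinator (Hazelcast, in their case) hands out exclusive ranges under a distributed lock, after which each node assigns IDs from its range using only a cheap local lock:

    import threading

    RANGE_SIZE = 1000  # how many IDs to claim from the coordinator at a time

    class RangeIdGenerator:
        def __init__(self, coordinator):
            # `coordinator` stands in for the Hazelcast-backed service that
            # atomically hands out non-overlapping ID ranges cluster-wide.
            self._coordinator = coordinator
            self._lock = threading.Lock()
            self._next = 0
            self._limit = 0

        def next_id(self):
            with self._lock:
                if self._next >= self._limit:
                    # Slow path: claim a fresh range. The distributed locking
                    # happens inside the coordinator, not on this hot path.
                    self._next = self._coordinator.claim_range(RANGE_SIZE)
                    self._limit = self._next + RANGE_SIZE
                self._next += 1
                return self._next - 1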

Their list of lessons learned:

  • Start benchmarking and profiling your app early
  • A fast and easy deployment keeps motivation high
  • Configure Voldemort carefully (especially on large heap machines)
  • Read the mailing lists of the NoSQL system you use
  • No solution in docs? – read the sources
  • At some point stop discussing and just do it

Andres Kitt: Building Skype

Learnings from almost five years as a Skype Architect

Andres began with an overview of Skype, which serves roughly 800,000 registered users per employee (521 million registered users to 650 employees). Their core team is based in Estonia. Their main functionality is peer-to-peer, but they need substantial server infrastructure (PHP, C, C++, PostgreSQL) for things like peer-to-peer supporting glue, e-commerce and SIP integration. Skype uses PostgreSQL heavily in some interesting ways, in a complex multi-tiered architecture of databases and proxies.

His first lesson was that technical rules of thumb can lead us astray. It is always tempting to use patterns that have worked for us previously, in a different project, team or company, but they may not be right for another context. They can and should be used as a starting point for discussion, but not presumed to be the solution.

Second, he emphasized the importance of paying attention to functional architecture, not only technical architecture. As an example, he showed how the Skype web store, which sells only four products (SkypeIn, SkypeOut, voicemail, and subscription bundles of the previous three), became incredibly complex because no one was responsible for its functional design. Complex functional architecture leads to complex technical architecture, which is undesirable, as he noted in his next point.

Keep it simple: minimize functionality, and minimize complexity. He gave an example of how their queuing system’s performance and scalability were greatly enhanced by removing functionality (the guarantee to deliver messages exactly once), which enabled the simplification of the system.

He also shared some organizational learnings, which I appreciated. Maybe my filters are playing tricks on me, but it seems as if more and more discussion of software engineering is focusing on organizing people. I interpret this as a sign of growing maturity in the industry, which (as Andres noted) has its roots in a somewhat asocial culture.

He noted that architecture needs to fit your organization. Designs need to be measured primarily by how well they solve business problems, rather than by beauty or elegance.

He stressed the importance of communication, a term which I think is becoming so overused and diluted in organizations that it is not very useful. It’s used to refer to everything from roles and responsibilities, to personal relationships, to cultural norming, and more. In the case of Skype, what Andres learned was the importance of organizing and empowering people to facilitate alignment, information flow and understanding between different parts of the business. Skype evolved an architecture team which interfaces between (multiple) business units and (multiple) engineering teams, helping each to understand the other and taking responsibility for the overall system design.

Conclusion

Overall, I thought the day’s talks gave me new insight into how Internet applications are being developed and deployed in the real world today. They affirmed some of what I’ve been wondering about, and gave me some new things to think about as well. I’m looking forward to tomorrow.

Written by Matt Zimmerman

March 10, 2010 at 17:44