Embracing the Web
The web offers a compelling platform for developing modern applications. How can free software benefit more from web technology, and at the same time promote more software freedom on the web? What would the world be like if FLOSS web applications were as plentiful and successful as traditional FLOSS applications are today?
Web architecture
The web, as a collection of interlinked hypertext documents available on the Internet, has been well established for over a decade. However, the web as an application architecture is only just hitting its stride. With modern tools and frameworks, it’s relatively straightforward to build rich applications with browser-oriented frontends and HTTP-accessible backends.
This architecture has its limitations, of course: browser compatibility nightmares, limited offline capabilities, network latency, performance challenges, server-side scalability concerns, a complicated multimedia story, and so on. Most of these are slowly but surely being addressed or ameliorated as web technology improves.
However, for a large class of applications, these limitations are easily outweighed by the advantages: cross-platform support, instantaneous upgrades, global availability, etc. The web enables developers to reach the largest audience of users with the most compelling functionality, and simplifies users’ lives by giving them immediate access to their digital lives from anywhere.
Some web advocates would go so far as to say that if an application can be built for the web, it should be built for the web because it will be more successful. It’s no surprise that new web applications are being developed at a staggering rate, and I expect this trend to continue.
So what?
This trend represents a significant threat, and a corresponding opportunity, to free software. Relatively few web applications are free software, and relatively few free software applications are built for the web. Therefore, the momentum which is leading developers and users to the web is also leading them (further) away from free software.
Traditionally, pragmatists have adopted free software applications because they offered immediate gratification: it’s much faster and easier to install a free software application than to buy a proprietary one. The SaaS model of web applications offers the same (and better) immediacy, so free software has lost some of its appeal among pragmatists, who instead turn to proprietary web applications. Why install and run a heavyweight client application when you can just click a link?
Many web applications—perhaps even a majority—are built using free software, but are not themselves free. A new generation of developers share an appreciation for free software tools and frameworks, but see little value in sharing their own software. To these developers, free software is something you use, not something you make.
Free software cannot afford to ignore the web. Instead, we should embrace the web more completely, more powerfully, and more effectively than proprietary systems do.
What would that look like?
In my view, a FLOSS client platform which fully embraced the web would:
- treat web applications as first-class citizens. The web would not be just another application, represented by a browser, but more like a native application runtime. Web applications could feel much more “native” while still preserving the advantages of a web-style user experience. There would be no web browser: that’s a tool for legacy systems to run web applications within a compatibility environment.
- provide a seamless experience for developers to build web applications. It would be as fast and easy to develop a trivial client/server web application (see the sketch after this list) as it is to write “Hello, world!” in PyGTK using Quickly. For bonus points, it would be easy to develop and run web applications locally, and then deploy directly to a PaaS or IaaS cloud.
- empower the user to manage their applications and data regardless of where they are hosted. Traditional operating systems act as a connecting fabric for local applications, providing a shared namespace, file store and IPC mechanisms, but web applications lack this. The web’s security model requires that applications are thoroughly sandboxed from each other, but a mediating operating system could connect them in meaningful ways, just as web browsers store cookies and passwords for various websites while protecting them from each other.
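To make that second point concrete, here is a minimal sketch, using nothing but Python’s standard library, of the kind of trivial client/server web application that such a platform should let you create, run locally, and deploy as effortlessly as a PyGTK “Hello, world!”:

```python
# A minimal client/server web application: the browser is the client, this
# process is the server. Nothing here is specific to any platform; the point
# is that creating, running and deploying something like this should be as
# frictionless as Quickly makes a desktop "Hello, world!".
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Answer every request with a plain-text greeting.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello, web!']

if __name__ == '__main__':
    server = make_server('', 8000, application)
    print('Serving on http://localhost:8000/')
    server.serve_forever()
```

Running this locally is already easy; the missing piece is a platform that makes the jump from “runs on my laptop” to “deployed on a PaaS or IaaS cloud” equally trivial.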
Imagine a world where free web applications are as plentiful and malleable as free native applications are today. Developers would be able to branch, test and submit patches to them.
What about Chrome OS?
Chrome OS is a step in the right direction, but doesn’t yet realize this vision. It’s a traditional operating system which is stripped down and focused on running one application (a web browser) very, very well. In some ways, it elevates web applications to first-class status, though its paradigm is still fundamentally that of a web browser.
It is not designed for development, but for consuming the web. Developers who want to create and deploy web applications must use a more traditional operating system to do so.
It does not put the end user in control. On the contrary, the user is almost entirely dependent on SaaS applications for all of their needs.
Although it is constructed using free software, it does not seem to deliver the principles or benefits of software freedom to the web itself.
How?
Just as free software was bootstrapped on proprietary UNIX, the present-day web is fertile ground for the development of free web applications. The web is based on open standards. There are already excellent web development tools, web application frameworks and server software which are FLOSS. Leading-edge web browsers like Firefox and Chrome/Chromium, where much web innovation is happening today, are already open source.
This is a huge head start toward a free web. I think what’s missing is a client platform which catalyzes the development and use of FLOSS web applications.
DevOps and Cloud
DevOps
I first heard about DevOps from Lindsay Holmwood at linux.conf.au 2010. Since then, I’ve been following the movement with interest. It seems to be about cross-functional involvement in software teams, specifically between software development and system administration (or operations). In many organizations, especially SaaS shops, these two groups are placed in opposition to each other: developers are driven to deliver new features to users, while system administrators are held accountable for the operation of the service. In the best case, they maintain a healthy balance by pushing in opposite directions, but more typically, they resent each other for getting in the way, as a result of this dichotomy:
| | Development | Operations |
|---|---|---|
| is responsible for… | creating products | offering services |
| is measured on… | delivery of new features | high reliability |
| optimizes by… | increasing velocity | controlling change |
| and so is perceived as… | reckless and irresponsible | obstructing progress |
Of course, both functions are essential to a viable service, and so DevOps aims to replace this opposition with cooperation. By removing this friction from the organization, we hope to improve efficiency, lower costs, and generally get more work done.
So, DevOps promotes the formation of cross-functional teams, where individuals still take on specialist “development” or “operations” roles, but work together toward the common goal of delivering a great experience to users. By working as teammates, rather than passing work “over the wall”, they can both contribute to development, deployment and maintenance according to their skills and expertise. The team becomes a “devops” team, and is responsible for the entire product life cycle. Particular tasks may be handled by specialists, but when there’s a problem, it’s the team’s problem.
Some take it a step further, and feel that what’s needed is to combine the two disciplines, so that individuals contribute in both ways. Rather than thinking of themselves as “developers” or “sysadmins”, these folks consider themselves “devops”. They work to become proficient in both roles, and to synthesize new ways of working by drawing on both types of skills and experience. A common crossover activity is the development of sophisticated tools for automating deployment, monitoring, capacity management and failure resolution.
DevOps meets Cloud
Like DevOps, cloud is not a specific technology or method, but a reorganization of the model (as I’ve written previously). It’s about breaking down the problem in a different way, splitting and merging its parts, and creating a new representation which doesn’t correspond piece-for-piece to the old one.
DevOps drives cloud adoption because cloud offers a richer toolkit for the way DevOps teams work: fast, flexible, efficient. Tools like Amazon EC2 and Google App Engine solve the right sorts of problems. Cloud also drives DevOps because it calls into question the traditional way of organizing software teams. A development/operations division just doesn’t “fit” cloud as well as a DevOps model.
Deployment is a classic duty of system administrators. In many organizations, only the IT department can implement changes in the production environment. Reaping the benefits of an IaaS environment requires deploying through an API, and therefore deployment requires development. While it is already common practice for system administrators to develop tools for automating deployment, and tools like Puppet and Chef are gaining momentum, IaaS makes this a necessity, and raises the bar in terms of sophistication. Doing this well requires skills and knowledge from both sides of the “fence” between development and operations, and can accelerate development as well as promote stability in production.
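To make “deployment requires development” concrete, here is a rough sketch of deployment as an API call, using the boto Python library against EC2. The AMI ID, key pair and security group names are placeholders, and a real deployment tool would go on to configure the new server with something like Puppet or Chef:

```python
# Sketch only: launching a server through an IaaS API (EC2 via the boto
# library). The AMI ID, key pair and security group names are placeholders.
import time
import boto

conn = boto.connect_ec2()  # credentials come from the environment or boto config

# "Racking a server" becomes an API call that a deployment tool can make.
reservation = conn.run_instances(
    'ami-00000000',            # placeholder image ID
    key_name='deploy-key',     # placeholder SSH key pair
    instance_type='m1.small',
    security_groups=['web'],
)
instance = reservation.instances[0]

# Wait until the instance is running, then hand its address to the next
# stage of the deployment (configuration management, application rollout).
while instance.update() != 'running':
    time.sleep(5)
print('New server ready at', instance.public_dns_name)
```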
This is exemplified by infrastructure service providers like Amazon Web Services, where customers pay by the hour for “black box” access to computing resources. How those resources are provisioned and maintained is entirely Amazon’s problem, while its customers must decide how to deploy and manage their applications within Amazon’s IaaS framework. In this scenario, some operations work has been explicitly outsourced to Amazon, but IaaS is not a substitute for system administration. Deployment, monitoring, failure recovery, performance management, OS maintenance, system configuration, and more are still needed. A development team which is lacking the experience or capacity for this type of work cannot simply “switch” to an IaaS model and expect these needs to be taken care of by their service provider.
With platform service providers, the boundaries are different. Developers, if they build their application on the appropriate platform, can effectively outsource (mostly) the management of the entire production environment to their service provider. The operating system is abstracted away, and its maintenance can be someone else’s problem. For applications which can be built with the available facilities, this will be a very attractive option for many organizations. The customers of these services may be traditional developers, who have no need for operations expertise. PaaS providers, though, will require deep expertise in both disciplines in order to build and improve their platform and services, and will likely benefit from a DevOps approach.
Technical architecture draws on both development and operations expertise, because design goals like performance and robustness are affected by all layers of the stack, from hardware, power and cooling all the way up to application code. DevOps itself promotes greater collaboration on architecture, by involving experts in both disciplines, but cloud is a great catalyst because cloud architecture can be described in code. Rather than talking to each other about their respective parts of the system, they can work together on the whole system at once. Developers, sysadmins and hybrids can all contribute to a unified source tree, containing both application code and a description of the production environment: how many virtual servers to deploy, their specifications, which components run on which servers, how they are configured, and so on. In this way, system and network architecture can evolve in lockstep with application architecture.
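As a purely hypothetical sketch of what such a unified source tree might contain, the production environment can be captured as plain data that lives alongside the application code and is consumed by deployment tooling. The roles, counts and sizes below are illustrative, not a real configuration format:

```python
# environment.py -- a hypothetical description of the production environment,
# versioned in the same source tree as the application code it supports.
# The roles, counts and instance sizes below are purely illustrative.

ENVIRONMENT = {
    'web': {
        'count': 4,                      # how many virtual servers to deploy
        'instance_type': 'm1.small',     # their specification
        'components': ['nginx', 'app'],  # which components run on them
    },
    'database': {
        'count': 2,
        'instance_type': 'm1.large',
        'components': ['postgresql'],
    },
}

def total_servers(env=ENVIRONMENT):
    """Example of tooling built on the description: capacity at a glance."""
    return sum(role['count'] for role in env.values())

if __name__ == '__main__':
    print('Production environment needs', total_servers(), 'virtual servers')
```

Because this description is just code, developers, sysadmins and hybrids can review, branch and change it with the same tools and workflow they use for the application itself.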
Cloudy promises such as dynamic scaling and fault tolerance call for a DevOps approach in order to be realized in a real-world scenario. These systems involve dynamically manipulating production infrastructure in response to changing conditions, and the application must adapt to these changes. Whether this takes the form of an active, intelligent response or a passive crash-only approach, development and operational considerations need to be aligned.
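As a hedged sketch of the “active, intelligent response” case, the control loop below scales a pool of web servers against a request rate; get_request_rate, launch_web_server and retire_web_server are hypothetical stand-ins for real monitoring and IaaS provisioning calls:

```python
# Sketch of an active scaling loop: monitoring data drives programmatic
# changes to production infrastructure, and the application must tolerate
# servers appearing and disappearing. The three callables passed in are
# hypothetical stand-ins for real monitoring and provisioning tools.
import time

TARGET_REQUESTS_PER_SERVER = 100

def desired_capacity(request_rate):
    """How many web servers we want for the current request rate."""
    return max(1, int(round(request_rate / float(TARGET_REQUESTS_PER_SERVER))))

def control_loop(get_request_rate, launch_web_server, retire_web_server):
    servers = 1
    while True:
        desired = desired_capacity(get_request_rate())
        while servers < desired:
            launch_web_server()
            servers += 1
        while servers > desired:
            retire_web_server()
            servers -= 1
        time.sleep(60)  # re-evaluate once a minute
```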
So what?
DevOps and cloud will continue to reinforce each other and gain momentum. Both individuals and organizations will need to adapt in order to take advantage of the opportunities provided by these new models. Because they’re complementary, it makes sense to adopt them together, so those with expertise in both will be at an advantage.
Ubuntu 10.10 (Maverick) Developer Summit
I spent last week at the Ubuntu Developer Summit in Belgium, where we kicked off the 10.10 development cycle.
Due to our time-boxed release cycle, not everything discussed here will necessarily appear in Ubuntu 10.10, but this should provide a reasonable overview of the direction we’re taking.
Presentations
While most of our time at UDS is spent in small group meetings to discuss specific topics, there are also a handful of presentations to convey key information and stimulate creative thinking.
A few of the more memorable ones for me were:
- Mark Shuttleworth talked about the desktop, in particular the introduction of the new Unity shell for Ubuntu Netbook Edition
- Fanny Chevalier presented Diffamation, a tool for visualizing and navigating the history of a document in a very flexible and intuitive way
- Rick Spencer talked about the development process for 10.10 and some key changes in it, including a greater focus on meeting deadlines for freezes (and making fewer exceptions)
- Stefano Zacchiroli, the current Debian project leader, gave an overview of how Ubuntu and Debian developers are working together today, and how this could be improved. He has posted a summary on the debian-project mailing list.
The talks were all recorded, though they may not all be online yet.
Foundations
The Foundations team provides essential infrastructure, tools, packages and processes which are central to the development of all Ubuntu products. They make it possible for the desktop and server teams to focus on their areas of expertise, building on a common base system and development procedures.
Highlights from their track:
- Early on in the week, they conducted a retrospective to discuss how things went during the 10.04 cycle and how we can improve in the future
- One of their major projects has been about revision control for all of Ubuntu’s source code, and they talked last week about what’s next
- We’re aiming to provide btrfs as an install-time option in 10.10
- In order to keep Ubuntu moving forward, the foundations team is always on the lookout for stale bits which we don’t need to keep around anymore. At UDS, they discussed culling unattended packages, retiring the IA64 and SPARC ports and other spring cleaning
- There was a lot of discussion about Upstart, including its further development, implications for servers, desktops and the kernel, and the migration of server init scripts to upstart jobs
- After maintaining two separate x86 boot loaders for years, it looks like we may be ready to replace isolinux with GRUB2 on Ubuntu CDs
Desktop
The desktop team manages both Desktop Edition and Netbook Edition, on a mission to provide a top-notch experience to users across a range of client computing devices.
Highlights from their track:
- A key theme for 10.10 is to help developers to create applications for Ubuntu, by providing a native development environment, improving Quickly, improving desktopcouch, making it easier to get started with desktopcouch, and enabling developers to deliver new applications to Ubuntu users continuously
- With more and more touch screen devices appearing, Ubuntu will grow some new features to support touch oriented applications
- The web browser is a staple application for Ubuntu, and as such we are always striving for the best experience for our users. The team is looking ahead to Chromium, using apport to improve browser bug reports, and providing a web-oriented document capability via Zoho
- Building on work done in 10.04, we will aim to make simple things simple for basic photo editing
- Security-conscious users may rest easier knowing that the X window system will run without root privileges where kernel modesetting is supported
Server/Cloud
The server team is charging ahead with making Ubuntu the premier server OS for cloud computing environments.
Highlights from their track:
- Providing more powerful tools for managing Ubuntu in EC2 and Ubuntu Enterprise Cloud infrastructure, including boot-time configuration, image and instance management, and kernel upgrades
- Improving Ubuntu Enterprise Cloud by adding new Eucalyptus features (such as LXC container support, monitoring, rapid provisioning, and load balancing). If you ever wanted to run a UEC demo from a USB stick, that’s possible too.
- Providing packaged solutions for cloud building blocks such as Hadoop and Pig, Drupal, Ehcache, Spring, various NoSQL databases, web frameworks, and more
- Providing turn-key solutions for free software applications like Alfresco and Kolab
- Making Puppet easier to deploy, easier to configure, and easier to scale in the cloud
ARM
Kiko Reis gave a talk introducing ARM and the corresponding opportunity for Ubuntu. The ARM team ran a full track during the week on all aspects of their work, from the technical details of the kernel and toolchain, to the assembly of a complete port of Netbook Edition 10.10 for several ARM platforms.
Kernel
The kernel team provided essential input and support for the above efforts, and also held their own track where they selected 2.6.35 as their target version, agreed on a variety of changes to the Ubuntu kernel configuration, and created a plan for providing backports of newer kernels to LTS releases to support deployment on newer hardware.
Security
Like the kernel team, the security team provided valuable input into the technical plans being developed by other teams, and also organized a security track to tackle some key security topics such as clarifying the duration of maintenance for various collections of packages, and the ongoing development of AppArmor and Ubuntu’s AppArmor profiles.
QA
The QA team focuses on testing, test automation and bug management throughout the project. While quality is everyone’s responsibility, the QA team helps to coordinate these activities across different teams, establish common standards, and maintain shared infrastructure and tools.
Highlights from their track include:
- There was a strong sense of broadening and deepening our testing efforts, mobilizing testers for specific testing projects, streamlining the ISO testing process by engaging Ubuntu derivatives and fine-tuning ISO test cases, and reactivating the community-based laptop testing program
- In support of this effort, there will be projects to improve test infrastructure, including enabling tests to target specific hardware and tracking test results in Launchpad
- There is a continuous effort to improve high-volume processing of bug reports, and two focus areas for this cycle will be tracking regressions (as these are among the most painful bugs for users) and improving our response to kernel bugs (as the kernel faces some special challenges in bug management)
Design
The design team organized a track at UDS for the first time this cycle, and team manager Ivanka Majic gave a presentation to help explain its purpose and scope.
Toward the end of the week, I joined in a round table discussion about some of the challenges faced by the team in engaging with the Ubuntu community and building support for their work. This is a landmark effort in bringing professional design together with a free software community, and there is still much to learn about how to do this well.
Community
The community track discussed the usual line-up of events, outreach and advocacy programs, organizational tools, and governance housekeeping for the 10.10 cycle, as well as goals for improving the translation of Ubuntu and related resources into many languages.
One notable project is an initiative to aggressively review patches submitted to the bug tracker, to show our appreciation for these contributions by processing them more quickly and completely.
On Cloud
If you’re convinced that “cloud” is a useless buzzword, being used to describe everything under the sun, old and new, then…well, you’re right—that is, except for the “useless” part. It’s true that this word is being (ab)used to describe many different products, services and technologies, which do not seem to relate to each other in any concrete way. “Cloud” in the abstract is being defined in many different ways, based on different fundamental characteristics of “cloud technology”. Nonetheless, there is something genuinely important going on here, and this is my view of what it’s about.
The hype
Countless businesses which only had “web sites” or “web applications” in 2008 now call them “cloud services”. They aren’t delivering any new benefits to their users, and they haven’t redesigned their infrastructure. What do they mean by this?
Cloud is defined, some say, by where your data is kept, and “cloud” means storing your data on remote servers instead of internal ones. Millions of Gmail and Flickr users have been “in the cloud” without even knowing it, and so has everyone who reads their mail over IMAP, or uses voicemail. In short, cloud is just data that is somewhere else.
Last September, I watched a presentation on “Cloud computing” where Symantec pitched their anti-virus product as having something like three different “cloud computing” technologies. These, it turned out, all involved downloading files over the Internet. In this view, cloud is essentially about the network, derived from our habit of using a drawing of a cloud in our network diagrams for so many years. Wikipedia currently shares this view, as its Cloud computing article opens with “Cloud computing is Internet-based computing”. No Internet, no cloud.

Maybe cloud is just SaaS, and any service provided on a pay-per-use basis over the Internet is a “cloud” service. It would seem that cloud is about paying for things, especially on a “utility” basis. The cloud is computing by the hour; it’s something you buy.
Others maintain that “cloud” is just virtualization, so if you’re using KVM or VMware, you’ve got a cloud already. In this view, cloud is about operating systems, and whether they’re running on real or virtual hardware. Cloud means deploying applications as virtual appliances, and creating new servers entirely in software. If your servers only have one OS on them, then they’re not cloud-ready.
If this vagueness and contradiction irritates you, then you’re not alone. It bugs me, and a lot of other people who are getting on with developing, using and providing technology and services. Cloud is not just a new name for these familiar technologies. In fact, it isn’t a technology at all. I don’t think it even makes sense to categorize technologies as “cloud” and “non-cloud”, though some may be more “cloudy” than others.
Nonetheless, there is some meaning and validity in each of these interpretations of cloud. So what is it?
A different perspective
Cloud is a transition, a trend, a paradigm shift. It isn’t something which exists or not: it is happening, and we won’t understand its essential nature until it’s over. Simon Wardley explains this, in presentation format, much better than I could in this blog post, so if you haven’t already, I suggest that you take 14 minutes of your time and go and watch him do his thing.

This change doesn’t have a clear beginning or an end. Early on, it was a disconnected set of ideas without any identity. We’re now somewhere in the middle of the bell curve, with enough insight to give it a name, and sufficient momentum to guess at what happens next. In the end, it will be absorbed into “the way things are”.
What’s going on?
So, what is actually changing? The key trends I see are:
- People and organizations are becoming more comfortable relying on resources which do not exist in any particular place. Rather than storing precious metals under our beds, we entrust our finances to banks, which store our data and provide us with services which let us “use” our “money”. In computing terms, we’re no longer enamored with keeping all of “our” programs and “our” data on “our” hard drive: as it turns out, it’s not very safe there after all, and in some ways we can actually exercise more control over programs and data “out there” than “in here”. We can no longer hoard our gold, or drive off would-be thieves with a pistol, but that wasn’t a productive way to spend our energies anyway. While we can’t put our hands on our money, we can know exactly what’s happening with it from minute to minute, and use it in a variety of ways at a moment’s notice.
- IT products are ceding ground to services. In many cases, we no longer need to build software, or even buy it: we can simply use it. As computing needs become better understood, and can be met by commodity products, it becomes more feasible to develop services to meet them on-demand. This is why we see a spike in acronyms ending in “aaS”. Simon explains the progression in detail in his talk, linked above.
- Hardware, software and data are being reorganized according to new models, in order to provide these services effectively. Architectural patterns, such as “hardware/OS/framework/application” are giving way to new ones, like “hardware/OS/IaaS/OS/PaaS/SaaS/Internet/web browser/OS/hardware”. These are still evolving, and the lines between them are blurry. It’s a bit early to say what the dominant patterns will be, but system, network, data and application architecture are all being transformed. No single technology or architecture defines cloud, but virtualization (at the infrastructure level) and the web (at the application level) both seem to resonate strongly with “cloudy” design patterns.
These trends are reinforcing and accelerating each other, driving information technologies and businesses in a common direction. That, in a nutshell, is what cloud is all about.
So what?
This transformation is disruptive in many different ways, but the angles which most interest me at the moment are:
- Operating systems – Cloud seems to indicate further commoditization of operating systems. We will have more operating systems than ever before (thanks to virtualization and IaaS), but we probably won’t think about them as much (as in software appliances). I think that the cloud world will want operating systems which are free, standard and highly customizable, which is potentially a great opportunity for Ubuntu and other open systems.
- Software freedom – As Eben Moglen and others have pointed out, this trend has significant consequences for the free software movement. As the shape of software changes, our principles of freedom must evolve as well. What does it mean to have the freedom to “run” a program in the cloud? To copy it? To change it? What other protections might people need in order to exercise these freedoms in the future?
- DevOps – I see a strong resonance between cloud—which is blurring the lines between hardware and software, infrastructure and applications, network and computer—and DevOps, which is bringing together the people who currently work in these different categories. Cloud means that we can no longer afford to treat these elements separately, and must work together to find better approaches to developing and deploying software, knocking down barriers and mapping new territory. Fortunately, cloud also means developing powerful new tools which will power this revolution.
- Clients – A majority of cloud related activity seems to be focused on what happens to back-end servers, but what about the computers we actually touch, on our desks, and in our pockets? How will they change? Will we end up reverting to a highly centralized computing model, where clients are strictly limited to front-end user interface processing (e.g. a web browser)? Will clients “join” the cloud and be providers of computing resources, not only consumers? What new types of devices will we need in order to make the most of cloud?
I’d like to explore some of these topics in future posts.
QCon London 2010: Day 2
I was talk-hopping today, so none of these are complete summaries, just enough to capture my impressions from the time I was there. I may go back and watch the video for the ones which turned out to be most interesting.
Yesterday, I noticed a couple of practices employed by the QCon organizers which I wanted to note here, and to consider trying out at Canonical and Ubuntu events:
- As participants leave each talk, they pass a basket with a red, a yellow and a green square attached to it. Next to the basket are three small stacks of colored paper, also red, yellow and green. There are no instructions, indeed no words at all, but the intent seemed clear enough: drop a card in the basket to give feedback.
- The talks were spread across multiple floors in the conference center, which I find is usually awkward. They mitigated this somewhat by posting a directory of the rooms inside each lift.
Chris Read: The Cloud Silver Bullet
Which calibre is right for me?
Chris offered some familiar warnings about cloud technologies: that they won’t solve all problems, that effort must be invested to reap the benefits, and that no one tool or provider will meet all needs. He then classified various tools and services according to their suitability for long or short processing cycles, and high or low “data sensitivity”.
Simon Wardley: Situation Normal, Everything Must Change
I actually missed Simon’s talk this time, but I’ve seen him speak before and talk with him every week about cloud topics as a colleague at Canonical. I highly recommend his talks to anyone trying to make sense of cloud technology and decide how to respond to it.
In some of the talks yesterday, there was a murmur of anti-cloud sentiment, with speakers asserting that it was not meaningful, that they didn’t know what it was, or that it was nothing new. Simon’s material is the perfect antidote to this attitude, as he makes it very clear that there is a genuinely important and disruptive trend in progress, and explains what it is.
Jesper Boeg: Kanban
Crossing the line, pushing the limit or rediscovering the agile vision?
Jesper shared experiences and lessons learned with Kanban, and some of the problems it addresses which are present in other methodologies. His material was well balanced and insightful, and I’d like to go back and watch the full video when it becomes available.
Here again was a clear and pragmatic focus on matching tools and processes to the specific needs of the team, business and situation.
Ümit Yalcinalp: Development Model for the Cloud
Paradigm Shift or the Same Old Same Old?
Ümit focused on the PaaS (platform as a service) layer, and the experience offered to developers who build applications for these platforms. An evangelist from Salesforce.com, she framed the discussion as a comparison between force.com, Google App Engine and Microsoft Azure.
Eric Evans: Folding Design into an Agile Process
Eric tackled the question of how to approach the problem of design within the agile framework. As an outspoken advocate of domain-driven design, he presented his view in terms of this school and its terminology.
He emphasized the importance of modeling “when the critical complexity of the project is in understanding and communicating about the domain”. The “expected” approach to modeling is to incorporate an up-front analysis phase, but Eric argues that this is misguided. Because “models are distilled knowledge”, and teams are relatively ignorant at the start of a project, modeling in this way captures that ignorance and makes it persist.
Instead, he says, we should employ a “pull” approach (in the Lean sense), and decide to work on modeling when:
- communication with stakeholders deteriorates
- solutions are more complex than the problems
- velocity slows (because completed work becomes a burden)
Eric illustrated his points in part by showing video clips of engineers and business people engaged in dialog (here again, the focus on people rather than tools and process). He used this material as the basis for showing how models underlie these interactions, but are usually implicit. These dialogs were full of hints that the people involved were working from different models, and the software model needed to be revised. An explicit model can be a very powerful communication tool on software projects.
He outlined the process he uses for modeling, which was highly iterative and involves identifying business scenarios, using them to develop and evaluate abstract models, and testing those models by experimenting with code (“code probes”). Along the way, he emphasized the importance of making mistakes, not only as a learning tool but as a way to encourage creative thinking, which is essential to modeling work. In order to encourage the team to “think outside the box” and improve their conceptual model, he goes as far as to require that several “bad ideas” are proposed along the way, as a precondition for completing the process.
Eric is working on a white paper describing this process. A first draft is available on his website, and he is looking for feedback on it.
Modeling work, he suggested, can be incorporated into:
- a stand up meeting
- a spike
- an iteration zero
- release planning
He pointed out that not all parts of a system are created equal, and some of them should be prioritized for modeling work:
- areas of the system which seem to require frequent change across projects/features/etc.
- strategically important development efforts
- user experiences which are losing coherence
This was a very compelling talk, whose concepts were clearly applicable beyond the specific problem domain of agile development.
Ubuntu Server Edition 9.10: No hardware required
In the 9.10 release of Ubuntu Server Edition, we introduced something new for people who are exploring cloud computing using Amazon’s Elastic Compute Cloud (EC2).
If you haven’t tried it yet, EC2 is essentially an API for managing virtual servers. Using a command line tool or script, you request a new VM, and moments later, it is ready for you to ssh in, preconfigured with your ssh public key. When you’re finished, you shut it down, and receive a bill for only the time and Internet bandwidth you used (about $0.10 per hour and $0.10-$0.17 per gigabyte). There is no downloading, no installing, and no hardware required for you to set up a server. The first step is to boot it up.
Starting with release 9.10, every release of Ubuntu Server Edition is simultaneously available on EC2. This means you can have a new Ubuntu server up and running using your EC2 account with a single command. Ready-to-run Ubuntu machine images are published on EC2 whenever a new Ubuntu release or milestone is available. All you need to know is the AMI, a short string which uniquely identifies the image you want. The AMIs for the 9.10 release are on the download page, in 32- and 64-bit versions for US and Europe zones.
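For example, using the boto Python library (one of several ways to drive the EC2 API), the whole launch/use/terminate cycle looks roughly like this. The AMI ID below is a placeholder for one taken from the download page, and ‘my-key’ stands in for your own EC2 key pair:

```python
# Rough sketch of the EC2 cycle described above, using the boto library.
# Replace the placeholder AMI ID with one from the Ubuntu download page,
# and 'my-key' with the name of your own EC2 key pair.
import time
import boto

conn = boto.connect_ec2()  # credentials come from the environment or boto config

reservation = conn.run_instances('ami-00000000', key_name='my-key',
                                 instance_type='m1.small')
instance = reservation.instances[0]

# Wait for the instance to come up, then it is ready for ssh.
while instance.update() != 'running':
    time.sleep(5)
print('Instance running at', instance.public_dns_name)

# ... ssh in, do your work ...

# When you are finished, terminate the instance and the billing stops.
conn.terminate_instances([instance.id])
```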
Similarly, all Server Edition development builds are available on EC2 as well. When the first builds of Lucid (Ubuntu 10.04) are created, there will be AMIs for those as well. If you want to test drive a new feature, or check compatibility with your application, just fire up a new instance on EC2, do your work, and then terminate it. The whole process can be completed in less than a minute. If you find a problem in our development builds, just run ubuntu-bug on the virtual machine as you normally would, and apport will automatically attach the relevant EC2 details to your bug report.
As I mentioned above, EC2 does charge for Internet bandwidth. It does not charge for local bandwidth within your EC2 zone. For this reason, Canonical has set up Ubuntu archive mirrors within EC2, so that you can download all Ubuntu packages and updates for free. Ubuntu virtual machines inside EC2 are automatically configured to use the appropriate mirror, so you don’t need to think about it.
This is an exciting step forward in making Ubuntu more convenient and powerful to use on EC2, and I encourage you to give it a try. If you’ve never used EC2 before, just follow our Starter’s Guide to get set up.