We'll see | Matt Zimmerman

a potpourri of mirth and madness

Posts Tagged ‘Web’

Optimizing my social network

with 4 comments

I’ve been working to better organize my online social network so as to make it more useful to me and to the people I know.

I use each social networking tool in a different way, and tailor the content and my connections accordingly. I don’t connect with all of the same people everywhere. I am particularly annoyed by social networks which abuse the word “friend” to mean something wholly different from what it means in the rest of society. If I’m not someone’s “friend” on a certain website, it doesn’t mean that I don’t like them. It just means that the information I exchange with them fits better somewhere else.

Here is the arrangement I’ve ended up with:

  • If you just want to hear bits and pieces about what I’m up to, you can follow me on identi.ca, Twitter or FriendFeed. My identi.ca and Twitter feeds have the same content, though I check @-replies on identi.ca more often.
  • If you’re interested in the topics I write about in more detail, you can subscribe to my blog.
  • If you want to follow what I’m reading online, you can subscribe to my Google Reader feed.
  • If (and only if) we’ve worked together (i.e. we have worked cooperatively on a project, team, problem, workshop, class, etc.), then I’d like to connect with you on LinkedIn. LinkedIn also syndicates my blog and Twitter.
  • If you know me “in real life” and want to share your Facebook content with me, you can connect with me on Facebook. I try to limit this to a manageable number of connections, and will periodically drop connections where the content is not of much interest to me so that my feed remains useful. Don’t take it personally (see the start of this post). Virtually everything I post on my Facebook account is just syndicated from other public sources above anyway. I no longer publish any personal content to Facebook due to their bizarre policies around this.

Written by Matt Zimmerman

March 18, 2010 at 17:28

QCon London 2010: Day 3

with one comment

The tracks which interested me today were “How do you test that?”, which dealt with scenarios where testing (especially automation) is particularly challenging, and “Browser as a Platform”, which is self-explanatory.

Joe Walker: Introduction to Bespin, Mozilla’s Web Based Code Editor

I didn’t make it to this talk, but Bespin looks very interesting. It’s “a Mozilla Labs Experiment to build a code editor in a web browser that Open Web and Open Source developers could love”.

I experimented briefly with the Mozilla-hosted instance of Bespin. It seems mostly oriented toward web application development, and still isn’t nearly as nice as desktop editors. However, I think something like this, combined with Bazaar and Launchpad, could make small code changes in Ubuntu very fast and easy to do, like editing a wiki.

Doron Reuveni: The Mobile Testing Challenge

Why Mobile Apps Need Real-World Testing Coverage and How Crowdsourcing Can Help

Doron explained how the unique testing requirements of mobile handset applications are well suited to a crowdsourcing approach. As the founder of uTest, he described the company’s approach to connecting its customers (application vendors) with a global community of testers with a variety of mobile devices. Customers evaluate the quality of the testers’ work, and this data is used to grade testers and select them for future testing efforts in a similar domain. The testers earn money for their efforts, based on test case coverage (starting at about $20 each), bug reports (starting at about $5 each), and so on. The highest performers earn thousands of dollars per month.

uTest also has a system, uTest Remote Access, which allows developers to “borrow” access to testers’ devices temporarily, for the purpose of reproducing bugs and verifying fixes. Doron gave us a live demo of the system, which (after verifying a code out of band through Skype) displayed a mockup of a BlackBerry device with the appropriate hardware buttons and a screenshot of what was displayed on the user’s screen. The updates were not quite real-time, but were sufficient for basic operation. He demonstrated taking a picture with the phone’s camera and seeing the photo within a few seconds.

Dylan Schiemann: Now What?

Dylan did a great job of extrapolating a future for web development based on the trends of the past 15 years. He began with a review of the origin of web technologies, which were focused on presentation and layout concerns, then moved on to JavaScript, CSS and DHTML. At this point, there was clear potential for rich applications, though there were many roadblocks: browser implementations were slow, buggy or nonexistent, security models were weak or missing, and rich web applications were generally difficult to engineer.

Things got better as more browsers came on the scene, with better implementations of CSS, DOM, XML, DHTML and so on. However, we’re still supporting an ancient implementation in IE. This is a recurring refrain among web developers, for whom IE seems to be the bane of their work. Dylan added something I hadn’t heard before, though, which was that Microsoft states that anti-trust restrictions were a major factor which prevented this problem from being fixed.

Next, there was an explosion of innovation around Ajax and related toolkits, faster JavaScript implementations, infrastructure as a service, and rich web applications like GMail, Google Maps, Facebook, etc.

Dylan believes that web applications are what users and developers really want, and that desktop and mobile applications will fall by the wayside. App stores, he says, are a short term anomaly to avoid the complexities of paying many different parties for software and services. I’m not sure I agree on this point, but there are massive advantages to the web as an application platform for both parties. Web applications are:

  • fast, easy and cheap to deploy to many users
  • relatively affordable to build
  • relatively easy to link together in useful ways
  • increasingly remix-able via APIs and code reuse

There are tradeoffs, though. I have an article brewing on this topic which I hope to write up sometime in the next few weeks.

Dylan pointed out that different layers of the stack exhibit different rates of change: browsers are slowest, then plugins (such as Flex and Silverlight), then toolkits like Dojo, and finally applications, which can update very quickly. Automatically updating browsers are accelerating this, and Chrome in particular values frequent updates. This is good news for web developers, as this seems to be one of the key constraints for rolling out new web technologies today.

Dylan feels that technological monocultures are unhealthy, and prefers to see a set of competing implementations converging on standards. He acknowledged that this is less true where the monoculture is based on free software, though this can still inhibit innovation somewhat if it leads to everyone working from the same point of view (by virtue of sharing a code base and design). He mentioned that de facto standardization can move fairly quickly; if 2-3 browsers implement something, it can start to be adopted by application developers.

Comparing the different economics associated with browsers, he pointed out that Mozilla is dominated by search through the chrome (with less incentive to improve the rendering engine), Apple is driven by hardware sales, and Google by advertising delivered through the browser. It’s a bit of a mystery why Microsoft continues to develop Internet Explorer.

Dylan summarized the key platform considerations for developers:

  • choice and control
  • taste (e.g. language preferences, what makes them most productive)
  • performance and scalability
  • security

and surmised that the best way to deliver these is through open web technologies, such as HTML 5, which now offers rich media functionality including audio, video, vector graphics and animations. He closed with a few flashy demos of HTML 5 applications showing what could be done.

Written by Matt Zimmerman

March 12, 2010 at 17:14

Multivac emerging

with 19 comments

Science fiction writer Isaac Asimov envisioned a computer called Multivac, powerful enough to process all of the planet’s data. Humanity painstakingly collects massive quantities of information to submit to Multivac on a daily basis, in exchange for the opportunity to ask questions of it.

With so much information at its disposal, Multivac is capable of amazing feats of analysis and prediction, which guide humanity to resolving global problems of war, poverty and so on.

The corporate mission statement of Google, Inc. is to organize the world’s information and make it universally accessible and useful. Google constantly processes information from the web, books, and photographic imagery from space and from the surface of the planet. Its famously simple search interface invites humans to ask it about anything, and it provides instantaneous answers in the form of references to information it has collected.

Google is not yet capable, in general, of providing meaningful answers to natural language questions, though research is ongoing, and systems like Wolfram Alpha hint at more abstract manipulation of data at a global scale.

We seem to be edging closer to Asimov’s vision of Multivac. What would you ask Multivac, given the opportunity? How will our future reality differ from science fiction?

Written by Matt Zimmerman

November 8, 2009 at 18:37

Posted in Uncategorized


Google Voice

with 6 comments

I’ve been experimenting with Google Voice while traveling in the US. I would have tried it sooner, but it isn’t very UK-friendly at present.

The good:

  • Free SMS to US mobiles from the browser
  • Convenient browsing and searching of received/placed calls, SMS and voicemail
  • Initiation of phone calls from the browser
  • Unified contacts database with my Android phone (and GMail, though I don’t use it)
  • Simple call routing, so I can use a fixed number when I’m in the US even though I usually pick up a new prepaid SIM on each trip
  • Ability to choose a phone number through searching, to find one which is easy to remember
  • Speech-to-text of voicemails (maybe just good enough to be useful)

The bad or missing:

  • Requires a data connection on Android (problematic e.g. when roaming or using a prepaid/pay-as-you-go SIM), though it falls back gracefully to non-Google-Voice service
  • Calls outside the US cost money (Vonage and other VOIP providers offer this for free)
  • Calls can apparently only be placed between POTS lines (no softphone functionality)
  • Yet another place to set my time zone.  Being able to selectively block calls while I’m asleep abroad would be the killer feature for me
  • Caller ID doesn’t seem fully integrated in Android: it sometimes looks like I’m on a call with myself

The boring:

  • Free calls within the US: people still pay for this?
  • Voicemail: people still leave voicemail?

The “ah” moment came when someone gave me a phone number on IRC, and I copy/pasted it into my browser to call them.

Written by Matt Zimmerman

September 5, 2009 at 20:26

Posted in Uncategorized


Social media has made me boring

with 34 comments

I’ve lived in many different places in my life, and spent a lot of time online. As a result, my friends are dispersed around the world. We see one another rarely, and stay in touch through short letters, text conversations and phone calls. Through these occasional communications, we learn about what is going on in each other’s lives. We share anecdotes about where we’ve been, what we’re thinking about, what we’ve done, read, heard and watched.

At least, that’s how it used to work.

Now, when I travel, it appears in my Facebook feed. When I do something interesting, I mention it on identi.ca and Twitter. When I read something I like, I share it on Google Reader. If I watch a video which reminds me of a friend, I send them a link. If I have an idea I think is worth sharing, I write about it on my blog.

The end result of all this is that when I catch up with friends who use social media, I don’t have much to say. They’ve already heard it all, and there is very little “catching up” to do. It’s awkward when I start to tell them about something which has happened to me, and they remind me that they’ve already heard about it.

I would have expected this to make me feel closer to people, but it doesn’t. It makes the relationship feel less intimate. Reading something on a feed is not the same as hearing about it first-hand, but even so, the first-hand account feels a bit redundant because it’s not new.

Is anyone else having the same experience?

Written by Matt Zimmerman

August 18, 2009 at 12:17

Posted in Uncategorized


Why I hate web content filtering systems

with 12 comments

While on holiday for the weekend, I have been browsing RSS feeds using NewsRob, a convenient offline RSS reader which synchronizes with Google Reader. I came across an article on Kirrily Robert’s blog on the use of the word “offense” in the context of sexism. The RSS content indicated that there were several comments on this post, and so I clicked through to check it out. The result was this:

[Screenshot: SonicWALL “Web Site Blocked” page, shown in Firefox (Shiretoko)]

My phone was connected to the hotel’s WiFi, which apparently has a SonicWALL content filter installed. This filter seems to think that Kirrily’s blog is “Adult/Mature Content”. I’m not sure why this is. Perhaps because the word “sexism” has “sex” in it?

SonicWALL has a web site where one can view their ratings for arbitrary web sites, and it confirms that infotrope.net (apparently the whole site) is classified as Category 6: Adult/Mature Content and Category 41: Society and Lifestyle. There is no explanation of what these categories mean, or how the classification process works.

They provide a form to request that they reevaluate their rating of a particular site, so I did that. Without knowing why they classified this site the way they did, I don’t see how a reevaluation will help, though. They asked me to suggest better categories for it, but none of them made sense. They can’t be serious about using a list of a few dozen static categories for all of the content on the web. Can they?

I wrote them a short note explaining that I did not think the site merited an “Adult/Mature” content rating, a euphemism usually reserved for pornography. I have very little hope that this action will ever elicit a response, and it certainly won’t restore my access to the site this weekend while I’m here. I am not a customer of SonicWALL, and don’t expect ever to be.

I will mention to the hotel that their system caused this problem for me, but I don’t expect them to act on this complaint either. Companies which install content filtering systems don’t plan to spend time maintaining them to make sure that they provide a good quality of service, and this is sure to seem like an unnecessary hassle to them.

This is why I hate content filtering systems. They disenfranchise end users by making content decisions on their behalf, and make it their responsibility to work through the bureaucracy when it goes wrong.

Written by Matt Zimmerman

August 8, 2009 at 22:23

Posted in Uncategorized


Iberia on-line check-in doesn’t work with free software

with 11 comments

Apparently, there are still some 1990s-era websites which haven’t noticed that the web is not Windows. I realized this today when attempting to check in for an Iberia flight using my Ubuntu system and Firefox.

Someone thought it would be a good idea to use JavaScript and ActiveX to check whether the user has installed Adobe Acrobat Reader. If they don’t, it displays a helpful message:

Error message from Iberia on-line check-in

…and refuses to continue. After all, if you might not have Adobe Acrobat Reader installed, why bother trying to check in? I’m glad you asked. Because it’s not the only program capable of reading and printing PDF files!
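For illustration, here is a hypothetical sketch of this kind of ActiveX-based detection (not Iberia’s actual code; the `AcroPDF.PDF` identifier is just the well-known Acrobat control name). `ActiveXObject` only exists in Internet Explorer on Windows, so every other browser fails the check regardless of whether it can display PDFs perfectly well:

```javascript
// Hypothetical reconstruction of an ActiveX-based Acrobat check
// (not Iberia's actual code). ActiveXObject exists only in
// Internet Explorer on Windows; in any other browser the
// constructor call throws, and the user is treated as having
// no PDF support at all.
function hasAcrobatReader() {
  try {
    new ActiveXObject("AcroPDF.PDF"); // IE-only API
    return true;
  } catch (e) {
    return false; // Firefox, Chrome, Safari, ... all land here
  }
}
```

A capability check like this conflates “has Adobe Acrobat Reader on Windows” with “can open a PDF”, which is exactly the assumption that breaks on Ubuntu and Firefox.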

You may have noticed that the message offers us some information, just in case the script got it wrong. Perhaps it tells us how to continue checking in despite having different application preferences than their developers? Unfortunately, no. Instead, it explains how to configure Windows to use Adobe Acrobat Reader as the default PDF viewer:

Helpful guidance for configuring your Windows system

I’m not the first one to discover this problem, as a quick search turned up instructions for working around it (Spanish, English translation) from September 2008.

Written by Matt Zimmerman

July 2, 2009 at 14:06

Posted in Uncategorized


apturl: Quick links for Ubuntu applications

with 19 comments

I’d like to bring your attention to a little-used feature of Ubuntu which helps connect the web to the vast repository of software packaged for Ubuntu: apturl.  It has been included in Ubuntu installations since 7.10 (Gutsy), but isn’t yet widely used on the web.  Have a look at it and see if it might be useful to you.

Here are some examples of how you can use it:

  • If you’re the maintainer of an application which is packaged in Ubuntu, add an apturl link to your site so that Ubuntu users can simply click to install it: If you’re running Ubuntu, check out Banshee
  • If you’re the author of a how-to document, replace apt-get commands or Add/Remove instructions with a simple hyperlink: Step 1: To get started making screencasts, install gtk-recordmydesktop
  • If you’re writing a blog post or other content where you make reference to an application, include an apturl link so that readers can follow along by installing it: I’m eagerly following the development of Gwibber in Karmic
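Since WordPress mangles apt: links, here is roughly what such a link looks like in plain HTML (the package names are just examples; see the wiki pages for the authoritative syntax):

```html
<!-- An apt: link: on an Ubuntu system (7.10 or later), clicking it
     opens the package installer for the named package. -->
<a href="apt:banshee">Install Banshee on Ubuntu</a>

<a href="apt:gtk-recordmydesktop">Install gtk-recordmydesktop</a>
```

On systems without an apt: handler, the link simply does nothing useful, so it is polite to mention that it is Ubuntu-specific, as in the examples above.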

It would be fantastic if someone came up with a “Get it for Ubuntu” image and a small, embeddable HTML fragment which could be used for these sorts of links.  Meanwhile, please try it out and let us know what you think of it.

Update: WordPress completely destroys the apt: links in this post. Boo! Refer to the wiki pages for syntax examples.

Written by Matt Zimmerman

June 27, 2009 at 23:25

Posted in Uncategorized


Micro-blogging maze

with 22 comments

I’ve only been micro-blogging for about a month now, and already, it’s gotten complicated.

Diagram of my micro-blogging world

  • I have five views of the micro-blogging world: identi.ca (web), Twitter (web), Gwibber (client), Twidroid (client), Facebook (web)
  • I use two different micro-blogging services, identi.ca (which the free software community seems to prefer) and Twitter (which everyone else seems to prefer).  Many people seem to be on both.  Some of them relay their updates from one or the other to Facebook, some don’t.
  • Most of the time, I watch identi.ca, but I occasionally need to check the other places or I miss something.
  • identi.ca can relay my updates to Twitter and Facebook, but @replies don’t come back.  People on Twitter can hear me, but I don’t often hear them until much later (when I check Twitter)
  • Twidroid can send and receive both identi.ca and Twitter messages, but can only connect to one of them at a time (dashed lines).
  • Gwibber talks to everything every way, but crashes a lot
  • identi.ca uses #hashtags and !groups, and I never know which is right.  Twitter uses #hashtags.  In both cases, there’s no feedback about whether you’ve correctly guessed a hashtag which other people are using

I’m sure some of you, dear readers, have managed to bring this mess under control.  How?

P.S. The diagram was originally an SVG, but WordPress doesn’t seem to support them.  Shame…

Written by Matt Zimmerman

June 26, 2009 at 10:02

Posted in Uncategorized


Internet discussion trends: from Usenet to micro-blogs

with 7 comments

I’ve written briefly before about how our lives are digitized at an increasing rate and wondered about the impact this has on our social behavior.  One particular arc which interests me is how we engage in discussion.

Usenet news

In the early 1990s, newsgroups were the norm for discussion.  They were multiplying daily, and for almost any topic, chances were good that there was already a Usenet newsgroup, whether you were a recovering system administrator or a kibologist. The killer feature of newsgroups was their universality: anyone could get access to a Usenet feed via their ISP, and in one shot be connected to the full spectrum of discussion groups. Their weakness was that, despite the vast namespace, there wasn’t room for everyone. There were already too many groups, and ISPs grumbled about the time and expense of shipping all of this data around. Why should you have your own newsgroup? Newsgroups existed everywhere and nowhere, and authority was questionable at best. To create a new newsgroup, you might be expected to discuss it in a certain place, ask a local administrator, participate in a voting process, or all of these and more. Even if you did it all right, someone who disagreed could delete your group at any time by posting a control message from anywhere in the network.

It took some time for a new message to propagate through Usenet, and most people didn’t read their news all that frequently, so discussions progressed at a moderate pace. A typical discussion lasted for days, and a popular thread might go on for months as it spiralled away from the original topic. News reading programs developed advanced features for filtering out irrelevant discussions.

FAQ editors distilled the accumulated wisdom of newsgroups into documents, and posted them to the group periodically as they were revised. People worried about the signal-to-noise ratio of popular groups, and predicted that Usenet would become worthless when it dropped too low.  This was a constant perceived danger as more and more people began to participate with no idea how to behave appropriately, and were duly rebuked.  Some groups appointed moderators to filter out inappropriate content.

Mailing lists

The next dominant pattern was that of email discussion via mailing lists. These were hosted in a definite “place” within an Internet domain, and any system administrator could set up a new one without stepping on anyone else’s toes. They were a bit harder to find, as there was no complete index of mailing lists, but this too was an improvement. If “newbies” couldn’t find your mailing list, they couldn’t post irrelevant content or otherwise misbehave. The barrier to entry provided a useful constraint on growth.

Mailing lists propagated messages more quickly, and people read email more frequently, so the pace of discussion increased.  A misstep on a mailing list could get you “flamed” within minutes.  Content posted to the list was only sent to individual members, and (unlike Usenet) was not copied to every ISP’s news server. The cost of a message, instead, was measured in terms of the attention of its readers. People worried about the signal-to-noise ratio on their mailing lists, and appointed moderators.  The FAQ pattern became decoupled from the discussion group as the web became a more convenient storehouse for this information.

Keeping up with multiple or high-traffic mailing lists required sophisticated software to sort mailing lists into separate folders, set appropriate headers on replies, scan or read many messages quickly, and so on.  Joining a mailing list required sending specially formatted control messages to a certain address.  People with only basic email capabilities found mailing lists difficult to use, and the people on mailing lists generally considered this to be a good thing, because they would probably just make a nuisance of themselves anyway (not knowing the appropriate etiquette).

Web forums

Forums filled a vacuum for people who wanted to participate in discussion, but found mailing lists and the associated software too complex and cumbersome.  Setting up a new forum was still moderately complex (a task for system administrators or webmasters) but anyone could participate in a forum with very little effort or technical expertise.

Without the underlying standardization of news or mail, forums vary widely in their capabilities, social patterns and content.  In general, they seem to be less linear than newsgroups or mailing lists, where the usual pattern is to scan all of the new messages since the last visit.   Visitors to forums instead look at “what’s newest”, or “what’s hot”, or search for a specific topic.

Forums have evolved fairly sophisticated mechanisms for managing the signal-to-noise ratio.  Very active forums tend to be moderated by visitors, who vote content (or other visitors) up or down according to relevance.  Participants maintain a visible and persistent identity within the forum (an avatar), in contrast to the simple email addresses used on Usenet and mailing lists, and forum content can be deleted (an impractical task in mail and news systems).

On the other hand, some types of noise (like “me too” messages) seem to be tolerated well enough in forums, while this is considered disruptive on a mailing list.  I suspect that etiquette varies widely depending on the capabilities and culture of the particular forum.

Blogs

The key advancement of blogs was that anyone could set one up, without any technical expertise, and make it available to everyone.  Originally, blogs weren’t discussion-oriented at all, which I think contributed to their slow growth early on.  Over time, through comments, trackbacks and web-based aggregators, they have grown some discussion capabilities.  Most people seem to read blogs through an RSS reader of some sort, though, and rarely visit the originating site at all.  Few blog posts attract more than a small number of comments or responses on other blogs, and these often receive a disproportionately small amount of attention compared to the original post.

The blogosphere moves quickly, but is largely incoherent as a whole.  Bloggers read each other’s content, and it influences their opinions and their writing, but there is very little direct response or feedback compared to the earlier examples above.  Etiquette is all but inapplicable to individual blogs, as readers can be assumed to be “tuning in” to the author’s intended content.  If they don’t like it, they simply don’t return.

It’s even more difficult for third parties to observe blog discussion.  It can be challenging to find responses, and inconvenient to display them on-screen at the same time.  The present infrastructure still seems much better suited to a singular “speaker” or “listener” role than to multi-directional discussion.  It will be interesting to see whether it becomes more discussion-friendly in the future.

Micro-blogging

I don’t think I fully “get” micro-blogging yet, as I haven’t used it enough myself.  It seems much more discussion-oriented than blogging, more like a very fast-paced mailing list than a small blog.  Short, unstructured discussions can be held, two-way or around a topic, and broadcast content is acceptable as well.  It would be nice to see more of a threading model, to make discussions easier to follow.

The real-time nature of micro-blogging seems to show obvious promise for discussion, but I’m not sure that the current formats support this very well.  I’d be interested to hear from more experienced micro-bloggers about this.

What next?

I think a “best of all worlds” online discussion system would combine:

  • the clear threading of mailing lists
  • the universal accessibility of blogging
  • the mobile/real-time nature of micro-blogging
  • collaborative filtering (something better than what I’ve seen in forums)
  • a balance between “pull” (everything since I last checked) and “push” (whatever you want to tell me right now) models
  • decentralized infrastructure, which can be hosted by anyone anywhere, and improved upon through open development

Taking things even further, I’d like to have all of the advantages of a face-to-face conversation (time-efficiency, rich expression, personal connection, well-understood etiquette, etc.),  and the advantages of online discussion (spanning long distances, interacting with large numbers of people, careful organization of thoughts, flexible starting and stopping, background and context).  As long as I’m making a wish list, how about a seamless connection between public and private discourse?

If you know of experiments happening in this area, please post them in comments here.

Written by Matt Zimmerman

May 3, 2009 at 11:57

Posted in Uncategorized

