We'll see | Matt Zimmerman

a potpourri of mirth and madness

Posts Tagged ‘Hacks’

Decoding a .mobileconfig file containing a Cisco IPsec VPN configuration

When someone wants to give you access to a Cisco VPN, they might give you a .mobileconfig file. This is apparently used by MacOS and iOS to encapsulate the configuration parameters needed to connect to a VPN. You should be able to connect to it with open source software (such as NetworkManager and vpnc) as long as you have the right configuration. Some helpful soul has tried to give you that configuration, but it’s wrapped up in an Apple-specific container. Here’s how you rip it open and get the goodies.

File format

A .mobileconfig appears to contain:

  1. Some binary garbage which is safe to ignore
  2. An XML document containing the good bits, i.e.:
    1. The “local identifier” (i.e. IPsec group name)
    2. The “remote address” (i.e. IPsec gateway host)
    3. The shared secret (base64 encoded)
  3. Some more binary garbage which is safe to ignore

…and it looks like this:

<plist version="1.0">
...
</plist>

The shared secret is base64-encoded, so you can decode it with:

$ echo -n 'BASE64_ENCODED_SECRET_HERE' | base64 -d
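If you'd rather script the whole extraction, here's a rough Python 3 sketch that pulls the plist out from between the binary sections and returns the interesting fields. The payload key names below (PayloadContent, IPSec, RemoteAddress, LocalIdentifier, SharedSecret) are the ones I've seen in these files; adjust if yours differ:

```python
import plistlib

def extract_vpn_settings(path):
    """Extract the IPsec settings from a signed .mobileconfig file."""
    data = open(path, 'rb').read()
    # The plist sits between the binary (signature) sections; find it
    # by its XML markers rather than trying to parse the wrapper.
    start = data.index(b'<?xml')
    end = data.index(b'</plist>') + len(b'</plist>')
    plist = plistlib.loads(data[start:end])
    ipsec = plist['PayloadContent'][0]['IPSec']
    return {
        'gateway': ipsec['RemoteAddress'],    # IPsec gateway host
        'group': ipsec['LocalIdentifier'],    # IPsec group name
        'secret': ipsec['SharedSecret'],      # plistlib decodes the base64 <data> to bytes
    }
```

Note that plistlib does the base64 decoding of the `<data>` element for you, so the returned secret is ready to paste into your VPN client.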

Network Manager configuration

  1. Make sure you have network-manager-vpnc installed
  2. Click the Network Manager icon, select “VPN Connections”, “Configure VPN…”
  3. Create a “Cisco-compatible (vpnc)” connection


  4. Configure the connection settings as follows:


    • Enter the “remote address” in the “Gateway” field
    • Enter the “local identifier” in the “Group name” field
    • Enter the shared secret in the “Group password” field
  5. To connect, click the Network Manager icon, select “VPN Connections”, and select the connection you just configured
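Alternatively, on a headless machine you can skip the GUI and drop the same three values into a vpnc configuration file. Something like this (the file name and the Xauth username are whatever applies to you):

```
# /etc/vpnc/work.conf -- connect with "vpnc work"
IPSec gateway REMOTE_ADDRESS_HERE
IPSec ID LOCAL_IDENTIFIER_HERE
IPSec secret DECODED_SHARED_SECRET_HERE
Xauth username YOUR_USERNAME_HERE
```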

Good luck and enjoy!

Written by Matt Zimmerman

November 15, 2012 at 18:29

Navigating the PolicyKit maze

I’ve written a simple application which will automatically extract media from CDs and DVDs when they are inserted into the drive attached to my server. This makes it easy for me to compile all of my media in one place and access it anytime I like. The application uses the modern udisks API, formerly known as DeviceKit-disks, and I wrote it in part to get some experience working with udisks (which, it turns out, is rather nice indeed).

Naturally, I wanted to grant this application the privileges necessary to mount, unmount and eject removable media. The server is headless, and the application runs as a daemon, so this would require explicit configuration. udisks uses PolicyKit for authorization, so I expected this to be very simple to do. In fact, it is very simple, but finding out exactly how to do it wasn’t quite so easy.

The Internet is full of web pages which recommend editing /etc/PolicyKit/PolicyKit.conf. As far as I can tell, nothing pays attention to this file anymore, and all of these instructions have been rendered meaningless. My system was also full of tools like polkit-auth, from the apparently-obsolete policykit package, which kept their configuration in some other ignored place, i.e. /var/lib/PolicyKit. It seems the configuration system has been through a revolution or two recently.

In Ubuntu 10.04, the right place to configure these things seems to be /var/lib/polkit-1/localauthority, and this is documented in pklocalauthority(8). Authorization can be tested using pkcheck(1), and the default policy can be examined using pkaction(1).

I solved my problem by creating a file in /var/lib/polkit-1/localauthority/50-local.d with a .pkla extension with the following contents:

[Access to removable media for the media group]
Identity=unix-group:media
Action=org.freedesktop.udisks.*
ResultAny=yes

This took effect immediately and did exactly what I needed. I lost quite some time trying to figure out why the other methods weren’t working, so perhaps this post will save the next person a bit of time. It may also inspire some gratitude for the infrastructure which makes all of this work automatically for more typical usage scenarios, so that most people don’t need to worry about any of this.

Along the way, I whipped up a patch to add a --eject option to the handy udisks(1) tool, which made it easier for me to test along the way.

Written by Matt Zimmerman

June 27, 2010 at 14:38

Extracting files from a nandroid backup using unyaffs

I recently upgraded my G1 phone to the latest Cyanogen build (5.x). Since the upgrade instructions recommend wiping user data, I made a “nandroid” backup first, using the handy Amon_RA recovery image. I’ve gotten pretty familiar with the Android filesystem layout, and was confident I could restore anything I really missed (such as my wpa_supplicant.conf with all of my WiFi credentials).

It wasn’t until I finished with the upgrade that I realized the backup wasn’t trivial to work with. It’s a raw yaffs2 flash image, which can’t be simply mounted on a loop device. After messing around for a bit with the nandsim module, mtd-utils and the yaffs2 kernel module, I realized there was a much simpler way: the unassuming unyaffs. It says that it can only extract images created by mkyaffs2image, but apparently the images in the nandroid backup are created this way (or otherwise compatible with unyaffs).

So I downloaded and built unyaffs:

svn checkout http://unyaffs.googlecode.com/svn/trunk/ unyaffs
cd unyaffs
gcc -o unyaffs unyaffs.c

and then ran it on the backup image:

mkdir g1data && cd g1data # unyaffs extracts into the current directory
~/src/android/unyaffs/unyaffs /media/G1data/nandroid/HT839GZ23983/BCDS-20100529-1311/data.img

At which point I could restore files one by one, e.g.:

adb push /tmp/g1data/misc/wifi/wpa_supplicant.conf /data/misc/wifi/

After toggling WiFi off and then back on, all of my credentials were restored. I was able to restore preferences for various applications in the same way.

Written by Matt Zimmerman

May 29, 2010 at 19:24

Introducing the jonometer

a learning experiment using Python, Twitter, CouchDB and desktopcouch

For a while now, I’ve been wanting to do a programming project using CouchDB and third-party web service APIs. I started out with an application to sync Launchpad bug data into CouchDB so that I could analyze it locally, a bit like Bug Hugger. It quickly got too complex for my spare time, and stalled. I’d still like to pick it up someday when I can devote more time to it.

More recently, I had noticed that Jono seemed to be having a rocking good time lately, sending a lot of awesome tweets about jams. This was only conjecture, though, and I needed hard data: I needed to quantify just how strong these influences were.

Now, this was a project I could get done in an evening of hacking and learning.

Version One

First, I threw together this quick proof of concept to learn the Twitter API and get some tantalizing preliminary data. Behold version 1.0 of the jonometer:


# python-twitter

import sys
import twitter
import re

username = 'jonobacon'
updates_wanted = 100
patterns = ['rock', 'awesome', 'jam']

class Counter:
    """A simple accumulator which counts matches of a regex"""

    def __init__(self, pattern):
        self.pattern = pattern
        self.regex = re.compile(pattern, re.I)
        self.count = 0

    def update(self, s):
        """Increment count if the string s matches the pattern"""
        if self.regex.search(s):
            self.count += 1

def main():
    client = twitter.Api()
    counters = map(Counter, patterns)
    updates_found = 0
    for update in client.GetUserTimeline(username, updates_wanted):
        updates_found += 1
        for counter in counters:
            counter.update(update.GetText())

    for counter in counters:
        print counter.pattern, counter.count

if __name__ == '__main__':
    main()

The output looked like this:

rock 5
awesome 6
jam 10

In other words, about 5% of Jono’s recent tweets were rocking, another 6% were awesome, and a whopping 10% were jamming! I was definitely onto something, but I had to find out more.

One of the shortcomings of this quick prototype is that it would download the data from Twitter every time I ran it. This meant that it was fairly slow (about 2 seconds for 100 tweets), which is inconvenient for experimenting with different patterns, and that I wouldn’t want to try it with larger data sets (say, thousands of tweets, or multiple people).

Version Two

Enter CouchDB, the darling of the NoSQL crowd: fast, scalable and simple, it was just what I wanted for the next version of the jonometer. I replaced the Counter objects with a single Database, which stores all of the tweets in CouchDB. This was incredibly simple to do, because python-twitter provides an .AsDict() method which returns a tweet as a dictionary object, and CouchDB can store this type of data structure directly into the database. Easy!

Each time the jonometer is run, it downloads all of the new tweets since the previous run. In order to do this, it needs to keep track of the most recent tweet ID it has seen, so that it can pick up where it left off. I had originally planned to store a record in the database with the sync state, but after Stuart reminded me that Gwibber does much the same thing, I followed its example and instead calculated it using a view. Each row in the “maxid” view records the highest tweet ID seen for a particular user:

The maxid view
Key Value
jonobacon 10743678774

…so although the jonometer is currently Jono-specific, it could be extended easily.

For the core functionality, I created a view called “matches” to count how many tweets match each pattern. For each key (username and pattern), there is a row in this view which records how many tweets from that user matched that pattern:

The matches view
Key Value
["jonobacon", null] 100
["jonobacon", "Awesome"] 6
["jonobacon", "Jam"] 10
["jonobacon", "Rock"] 5

The null pattern is used to keep a count of the total number of tweets for that user.

Once the data is loaded, the runtime for the CouchDB version is only about 0.3 seconds, including the Python interpreter startup as well as checking Twitter to see if there are new tweets. I doubled the size of the database up to 200 (which was about all Twitter would give me in one batch), and this didn’t change measurably. If I’ve done all of this right, it should scale easily up to thousands of tweets. Awesome!

Adding or changing a pattern currently requires manually deleting the view so that it can be re-created. There is probably an established pattern for dealing with this, but I don’t know what it is yet.

Here’s the code for version 2:


# python-twitter
# python-desktopcouch

import sys
import twitter
import re
from desktopcouch.records.server import CouchDatabase
from desktopcouch.records.record import Record

username = 'jonobacon'
# title string : JavaScript regex
patterns = { 'Rock' : 'rock',
        'Awesome' : 'awesome',
        'Jam' : 'jam' }

class Database(CouchDatabase):
    design_doc = "jonometer"
    database_name = "jonometer"

    def __init__(self, patterns):
        """patterns is a dictionary of (title string, JavaScript regex)"""

        CouchDatabase.__init__(self, self.database_name, create=True)
        self.patterns = patterns.copy()

        # set up maxid view
        if not self.view_exists("maxid", self.design_doc):
            mapfn = '''function(doc) { emit(doc.user.screen_name, doc.id); }'''
            viewfn = '''function(key, values, rereduce) {
    return Math.max.apply(Math, values);
}'''
            self.add_view("maxid", mapfn, viewfn, self.design_doc)

        # set up a view to count occurrences of each pattern
        if not self.view_exists("matches", self.design_doc):

            mapfn = '''
function(doc) {
    emit([doc.user.screen_name, null], 1);

    var pattern = null;
    var pattern_name = null;
'''

            mapfn += ''.join(['''
    pattern = "%s";
    pattern_name = "%s";
    if (new RegExp(pattern, "i").exec(doc.text)) {
        emit([doc.user.screen_name, pattern_name], 1);
    }
''' % (pattern, pattern_name)
       for pattern_name, pattern in self.patterns.items()])

            mapfn += '}'

            viewfn = '''function(key, values, rereduce) { return sum(values); }'''
            self.add_view("matches", mapfn, viewfn, self.design_doc)

    def maxid(self, username):
        """Return the highest known tweet ID for the specified user"""

        view = self.execute_view("maxid", self.design_doc)
        result = view[username].rows
        if len(result) > 0:
            return result[0].value
        return None

    def count_matches(self, username, pattern_name=None):
        """Return the number of tweets from username which match 
        the specified pattern.

        If no pattern is specified, count all tweets."""

        assert pattern_name is None or pattern_name in self.patterns
        view = self.execute_view("matches", self.design_doc)
        result = view[[username, pattern_name]].rows
        if len(result) > 0:
            return result[0].value
        return 0

def main():
    client = twitter.Api()
    db = Database(patterns)

    maxid = db.maxid(username)
    if maxid:
        timeline = client.GetUserTimeline(username, since_id=maxid)
    else:
        timeline = client.GetUserTimeline(username, count=100)

    for tweet in timeline:
        print "new:", tweet.GetText()
        record = Record(tweet.AsDict(),
                        record_type="http://example.org/tweet")  # desktopcouch needs a record_type URI; placeholder value
        record_id = db.put_record(record)

    for pattern in patterns:
        print pattern, db.count_matches(username, pattern)
    print "total", db.count_matches(username)

if __name__ == '__main__':
    main()

Written by Matt Zimmerman

March 19, 2010 at 23:41

Quick hack: GPT partitions without kernel support

I have a couple of USB hard disks which each have a single GPT partition on them. I recently moved them to an embedded server, and discovered that its Linux kernel lacked support for GPT.

For various reasons, it isn’t practical for me to replace its kernel right now, but I still wanted to be able to use the disks, and to have them automount by UUID.

…some time later…

A set of udev rules:

# Import variables from devkit-disks-part-id on the *parent* device
# devkit-disks-part-id looks at $DEVPATH regardless of the argument passed to
# it, so we need to override that
ATTR{partition}=="1", IMPORT{program}="/usr/bin/env DEVPATH=%p/.. /lib/udev/devkit-disks-part-id /dev/%P"

# If this partition is on a disk using GPT, fake it
ATTR{partition}=="1", ENV{DKD_PARTITION_TABLE_SCHEME}=="gpt", RUN+="/sbin/losetup -o 16896 -f /dev/%k"

This code uses a tool from devicekit-disks to detect when a GPT partition table is present. If so, it sets up a loop device at the appropriate (hardcoded) offset corresponding to the GPT partition.
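The hardcoded 16896 is just the partition’s starting LBA times the sector size (33 × 512 here). If you’d rather compute it than hardcode it, a rough Python sketch (my own, not part of the udev rules above; assumes 512-byte logical sectors) can read the offset straight out of the GPT:

```python
import struct

SECTOR = 512  # assuming 512-byte logical sectors

def first_partition_offset(path):
    """Read the GPT on the given device/image and return the byte
    offset of the first partition, suitable for `losetup -o`."""
    with open(path, 'rb') as f:
        f.seek(1 * SECTOR)               # GPT header lives at LBA 1
        header = f.read(92)
        if header[:8] != b'EFI PART':
            raise ValueError('no GPT header found')
        (entries_lba,) = struct.unpack_from('<Q', header, 72)  # first LBA of the entry array
        (entry_size,) = struct.unpack_from('<I', header, 84)   # bytes per partition entry
        f.seek(entries_lba * SECTOR)
        entry = f.read(entry_size)
        (first_lba,) = struct.unpack_from('<Q', entry, 32)     # partition's first LBA
        return first_lba * SECTOR
```

Running it against the disk (e.g. `first_partition_offset('/dev/sdb')`) gives the number to pass to losetup -o.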

It only works for a single partition, and it’s not exactly pretty, but it solved my problem. The loop devices generate their own uevents, the generic udev rules detect the UUID, and everything works.

Written by Matt Zimmerman

December 22, 2009 at 16:59