Glider
"In het verleden behaalde resultaten bieden geen garanties voor de toekomst" ("Past results offer no guarantee for the future")


About this blog

These are the ramblings of Matthijs Kooijman, concerning the software he hacks on, hobbies he has and occasionally his personal life.

Most content on this site is licensed under the WTFPL, version 2 (details).

Bouncing packets: Kernel bridge bug or corner case?


While setting up Tika, I stumbled upon a fairly unlikely corner case in the Linux kernel networking code that prevented some of my packets from being delivered in the right place. After quite some digging through debug logs and kernel source code, I found the cause of this problem in the way the bridge module handles netfilter and iptables.

Just in case someone else finds himself in this situation and actually manages to find this blogpost, I'll detail my setup, the problem and its solution here.

See more ...


 
0 comments -:- permalink -:- 18:40
Debian Squeeze on an emulated MIPS machine

In my work as a Debian Maintainer for the OpenTTD and related packages, I occasionally come across platform-specific problems. That is, compiling and running OpenTTD works fine on my own x86 and amd64 systems, but when I upload my packages to Debian, it turns out there is some problem that only occurs on more obscure platforms like MIPS, S390 or GNU Hurd.

This morning, I saw that my new grfcodec package is not working on a bunch of architectures (it seems all of the failing architectures are big endian). To find out what's wrong, I'll need to have a machine running one of those architectures so I can debug.

In the past, I've requested access to Debian's "porter" machines, which are intended for these kinds of things. But that's always a hassle that requires other people's time to set up, so this time I'm using QEMU to set up a virtual machine running the MIPS architecture instead.

What follows is essentially an update of this excellent tutorial about running Debian Etch on QEMU/MIPS(EL) by Aurélien Jarno that I found. It's probably best to read that tutorial as well; I'll only give the short version, updated for Squeeze. I've also looked at this tutorial on running Squeeze on QEMU/PowerPC by Uwe Hermann.

Finally, note that Aurélien also has pre-built images available for download, for a whole bunch of platforms, including Squeeze on MIPS. I only noticed this after writing this tutorial; it might have saved me a bunch of work ;-p

Preparations

You'll need qemu. The version in Debian Squeeze is sufficient, so just install the qemu package:

$ aptitude install qemu

You'll need a virtual disk to install Debian Squeeze on:

$ qemu-img create -f qcow2 debian_mips.qcow2 2G

You'll need a debian-installer kernel and initrd to boot from:

$ wget http://ftp.de.debian.org/debian/dists/squeeze/main/installer-mips/current/images/malta/netboot/initrd.gz
$ wget http://ftp.de.debian.org/debian/dists/squeeze/main/installer-mips/current/images/malta/netboot/vmlinux-2.6.32-5-4kc-malta

Note that in Aurélien's tutorial, he used a "qemu" flavoured installer. It seems this is no longer available in Squeeze; just a few others remain (malta, r4k-ip22, r5k-ip32, sb1-bcm91250a). I just picked the first one and apparently that one works on QEMU.

Also, note that Uwe's PowerPC tutorial suggests downloading an ISO CD image and booting from that. I tried that, but QEMU has no BIOS available for MIPS, so this approach didn't work. Instead, you should tell QEMU about the kernel and initrd and let it load them directly.

Booting the installer

You just run QEMU, pointing it at the installer kernel and initrd and passing some extra kernel options to keep it in text mode:

$ qemu-system-mips -hda debian_mips.qcow2 -kernel vmlinux-2.6.32-5-4kc-malta -initrd initrd.gz -append "root=/dev/ram console=ttyS0" -nographic

Now, you get a Debian installer, which you should complete normally.

As Aurélien also noted, you can ignore the error about a missing boot loader, since QEMU will be directly loading the kernel anyway.

After installation is completed and the virtual system is rebooting, terminate QEMU:

$ killall qemu-system-mips

(I haven't found another way of terminating a -nographic QEMU...)

Booting the system

Booting the system is very similar to booting the installer, but we leave out the initrd and point the kernel to the real root filesystem instead.

Note that this boots using the installer kernel. If you later upgrade the kernel inside the system, you'll need to copy the kernel out from /boot in the virtual system into the host system and use that to boot. QEMU will not look inside the virtual disk for a kernel to boot automagically.

$ qemu-system-mips -hda debian_mips.qcow2 -kernel vmlinux-2.6.32-5-4kc-malta -append "root=/dev/sda1 console=ttyS0" -nographic
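If you do later need to copy a newer kernel out of the virtual disk, you don't have to boot the old kernel to do it. One way is to attach the qcow2 image on the host with qemu-nbd; this is a sketch that assumes the nbd kernel module is available and uses /dev/nbd0 and /mnt purely as examples:

```
# Attach the qcow2 image to a host block device (as root)
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 debian_mips.qcow2

# Mount the root filesystem and copy out the kernel
mount /dev/nbd0p1 /mnt
cp /mnt/boot/vmlinux-* .

# Clean up
umount /mnt
qemu-nbd --disconnect /dev/nbd0
```

Make sure the virtual machine is shut down before doing this; mounting an image that is in use by QEMU will corrupt the filesystem.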

More features

Be sure to check Aurélien's tutorial for some more features, options and details.

 
0 comments -:- permalink -:- 12:18
dconf-editor is the new gconf-editor

Gnome

As I previously mentioned, Gnome3 is migrating away from the gconf settings storage to the new GSettings API (along with its default dconf settings storage backend).

So where you previously used the gconf-editor program to browse and edit Gnome settings, you can now use dconf-editor to browse and edit settings.

I do wonder if the name actually implies that dconf-editor is editing the dconf storage directly, instead of using the fancy new GSettings API? :-S
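Incidentally, for quick lookups you don't even need a graphical browser: GNOME 3 also ships a gsettings command-line tool that goes through the GSettings API. The schema and key below are just an example:

```
# Read a single key through the GSettings API
gsettings get org.gnome.desktop.background picture-uri

# Write a key (the path is illustrative)
gsettings set org.gnome.desktop.background picture-uri 'file:///tmp/example.jpg'
```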

 
0 comments -:- permalink -:- 17:07
CrashPlan: Cheap cloud backup that runs on Linux

For some time, I've been looking for a decent backup solution. Such a solution should:

  • be completely unattended
  • do off-site backups (and possibly on-site as well)
  • be affordable (say, €5 per month max)
  • run on Linux (both desktops and headless servers)
  • offer plenty of space (a couple of hundred gigabytes)

Up until now I haven't found anything that met my demands. Most backup solutions don't run on (headless) Linux and most generic cloud storage providers are way too expensive (because they offer high-availability, high-performance storage, which I don't really need).

Backblaze seemed interesting when they launched a few years ago. They took enormous piles of COTS hard disks and crammed a couple dozen of them into a custom-designed case, to get a lot of cheap storage. They offered an unlimited backup plan for only a few euros per month. Ideal, but it only works with their own backup client (no normal FTP/DAV/whatever supported), which (still) does not run on Linux.

Crashplan

Crashplan logo

Recently, I had another look around and found CrashPlan, which offers an unlimited backup plan for only $5 per month (note that they advertise $3 per month, but that requires paying four years of subscription in advance, which is a bit much. Given that you still get a refund for any remaining months if you cancel early, paying up front might still be a good idea, though). They also offer a family pack, which allows you to run CrashPlan on up to 10 computers for just over twice the price of a single license. I'll probably get one of these, to back up my laptop, Brenda's laptop and my colocated server.

The best part is that the CrashPlan software runs on Linux, and even on a headless Linux server (which is not officially supported, but CrashPlan does document the setup needed). The headless setup is possible because CrashPlan runs a daemon (as root) that takes care of all the actual work, while the GUI connects to the daemon through a TCP port. I still need to double-check what this means for security though (especially on a multi-user system, I don't want every user with localhost TCP access to be able to administer my backups), but it seems that CrashPlan can be configured to require the account password when the GUI connects to the daemon.
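On my system the daemon listens on TCP port 4243 (the same port the wrapper script further down this post connects to). Assuming the daemon is running, a quick way to check which addresses it is bound to is:

```
# List listening TCP sockets and the owning process; if the daemon is
# only bound to 127.0.0.1, other machines at least cannot reach it.
netstat -tlnp | grep 4243
```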

The CrashPlan software itself is free and allows you to do local backups and backups to other computers running CrashPlan (either running under your own account, or computers of friends running on separate accounts). Another cool feature is that it keeps multiple snapshots of each file in the backup, so you can even get back a previous version of a file you messed up. This is entirely configurable, but by default it keeps up to one snapshot every 15 minutes for recent changes, reduced to one snapshot per month for snapshots over a year old.

When you pay for a subscription, the software transforms into CrashPlan+ (no reinstall required) and you get extra features such as multiple backup sets, automatic software upgrades and most notably, access to the CrashPlan Central cloud storage.

I've been running the CrashPlan software for a few days now (it comes with a 30-day free trial of the unlimited subscription) and so far, I'm quite content with it. It's been backing up my homedir to a local USB disk and into the cloud automatically, without me having to check up on it every time.

CrashPlan runs on Java, which doesn't usually make me particularly enthusiastic. However, the software seems to run fast and reliably so far, so I'm not complaining. Regarding the software itself, it does seem to me that it's not intended for micromanaging. For example, when my external USB disk is not mounted, the interface shows "Destination unavailable". When I then power on and mount the external disk, it takes some time for CrashPlan to find out about this, and in the meanwhile there's no button in the interface to convince CrashPlan to recheck the disk. Also, I can add a list of filenames/path patterns to ignore, but there's not really any way to test these patterns.

Having said that, the software seems to do its job nicely if you just let it run in the background. One piece of micromanagement which I do like is that you can manually pause and resume the backups. If you pause the backups, they'll be automatically resumed after 24 hours, which is useful if the backups are somehow bothering you, without the risk that you forget to turn them back on.

Backing up only when docked

Of course, sending away backups is nice when I am at home and have 50Mbit fiber available, but when I'm on the road, running on some wifi or even 3G connection, I really don't want to load my connection with the sending of backup data.

Of course I can manually pause the backups, but I don't want to be doing that every time when I pick up my laptop and get moving. Since I'm using a docking station, it makes sense to simply pause backups whenever I undock and resume them when I dock again.

The obvious way to implement this would be to simply stop the CrashPlan daemon when undocking, but when I do that, the CrashPlanDesktop GUI becomes unresponsive (and does not recover when the daemon is started again).

So, I had a look at the "admin console", which offers "command line" commands, such as pause and resume. However, this command line seems to be available only inside the GUI, which is a bit hard to script (also note that not all of the commands seem to work for me: sleep and help seem to be unknown commands, which cause the console to close without an error message, just like when I type something random).

It seems that these console commands are really just sent verbatim to the CrashPlan daemon. Googling around a bit more, I found a small script for CrashPlan PRO (the business version of their software), which allows sending commands to the daemon through a shell script. I made some modifications to this script to make it useful for me:

  • don't depend on the current working dir, hardcode /usr/local/crashplan in the script instead
  • fixed a bashism (== vs =)
  • removed -XstartOnFirstThread argument from java (MacOS only?)
  • don't store the commands to send in a separate $command variable, but instead pass "$@" to java directly. The latter prevents bash from splitting arguments with spaces in them into multiple arguments, which caused the command "pause 9999" to be interpreted as two commands instead of one command with an argument.

I have this script under /usr/local/bin/CrashPlanCommand:

#!/bin/sh
BASE_DIR=/usr/local/crashplan

if [ "x$*" = "x" ] ; then
  echo "Usage: $0 <command> [<command>...]"
  exit
fi

hostPort=localhost:4243
echo "Connecting to $hostPort"

echo "Executing $@"

CP=.
for f in "$BASE_DIR"/lib/*.jar; do
    CP=${CP}:$f
done

java -classpath $CP com.backup42.service.ui.client.ConsoleApp $hostPort "$@"

Now I can run CrashPlanCommand 'pause 9999' and CrashPlanCommand resume to pause and resume the backups (9999 is the number of minutes to pause, which is about a week, since I might be undocked for more than 24 hours, which is the default pause time).

To make this run automatically on undock, I created a simple udev rules file as /etc/udev/rules.d/10-local-crashplan-dock.rules:

ACTION=="change", ATTR{docked}=="0", ATTR{type}=="dock_station", RUN+="/usr/local/bin/CrashPlanCommand 'pause 9999'"
ACTION=="change", ATTR{docked}=="1", ATTR{type}=="dock_station", RUN+="/usr/local/bin/CrashPlanCommand resume"

And voilà! Automatic pausing and resuming on undocking/docking of my laptop!
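If rules like these don't seem to fire, udev can show which rules match a given device. A hedged example (the sysfs path of the dock station differs per machine, so look yours up under /sys first):

```
# Reload the rules after editing, then dry-run rule processing for
# the dock device to see which rules match and what RUN would execute
udevadm control --reload-rules
udevadm test /sys/devices/platform/dock.0
```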

 
1 comment -:- permalink -:- 17:05
Changing the gdm3 (login screen) background in Gnome3

Gnome

I upgraded to Gnome3 this week, and after half a day of debugging I got my (quite non-standard) setup working completely again. One of the things that got broken was my custom wallpaper on the gdm3 login screen. This used to be configured in /etc/gdm3/greeter.gconf.defaults, but apparently Gnome3 replaced gconf by this new "gsettings" thingy.

Anyway, to change the desktop background in gdm, add the following lines to /etc/gdm3/greeter.gsettings:

[org.gnome.desktop.background]
picture-uri='file:///etc/gdm3/thinkpad.jpg'

For reference, I also found some other method, which looks a lot more complicated. I suspect it also doesn't work in Debian, which runs gdm as root, not as a separate "gdm" user. Systems that do use such a user might need the more complicated method, I guess (which probably ends up storing the settings somewhere in the homedir of the gdm user...).

 
0 comments -:- permalink -:- 12:19
Opening attachments on another machine from within mutt

For a fair number of years now, I've been using Mutt as my primary email client. It's a very nice text-based email client that is permanently running on my server (named drsnuggles). This allows me to connect to my server from anywhere, attach to the running Screen and always get exactly the same, highly customized mail interface (some people will say that their webmail interfaces allow for exactly the same, but in my experience webmail is always clumsy and slow compared to a decent, well-customized text-based client when processing a couple of hundred emails per day).

Attachment troubles

So I like my mutt / screen setup. However, there has been one particular thing that didn't work quite as efficiently: attachments. Whenever I wanted to open an email attachment, I needed to save the attachment within mutt to some place that was shared through HTTP, make the file world-readable (mutt insists on not making your saved attachments world-readable), browse to some URL on the local machine and open the attachment. Not efficient at all.

Yesterday evening I was finally fed up with all this stuff and decided to hack up a fix. It took a bit of fiddling to get it right (and I had nearly started spending the day coding a patch for mutt when the folks in #mutt pointed out an easier, albeit less elegant "solution"), but it works now: I can select an attachment in mutt, press "x" and it gets opened on my laptop. Coolness.

How does it work?

Just in case anyone else is interested in this solution, I'll document how it works. The big picture is as follows: when I press "x", a mutt macro is invoked that copies the attachment to my laptop and opens it there. There's a bunch of different steps involved, which I'll detail below.
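The full version follows after the break, but the rough shape (hedged: the helper script name here is hypothetical, and the real setup involves more steps) is an attach-menu macro in .muttrc that pipes the selected attachment to a script, which then copies it to the laptop and opens it there:

```
# .muttrc: bind "x" in the attachment menu (helper name is hypothetical)
macro attach x "<pipe-entry>open-on-laptop<enter>" "open attachment on laptop"
```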

See more ...

 
0 comments -:- permalink -:- 22:38
Adobe dropped 64 bit Linux support in Flash again

Only recently, Adobe (finally) started supporting 64 bit Linux with its Flash plugin. I could finally watch YouTube movies (and, more importantly, do some Flash development work for Brevidius).

However, this month Adobe announced that it is dropping support for 64 bit Linux again. Apparently they "are making significant architectural changes to the 64-bit Linux Flash Player and additional security enhancements", and they can't do that while keeping the old architecture around for stable releases.

This is particularly nasty, because the latest 10.0 version (which still has amd64 support) has a couple dozen (!) security vulnerabilities which are only fixed in the 10.1 version (which no longer has Linux amd64 support).

So Adobe is effectively encouraging people on amd64 Linux to either not use their product, or use a version with critical security flaws. Right.

 
0 comments -:- permalink -:- 09:51
URL-encoding in Flash: Be careful of plus signs!

Adobe Flash PHP

Recently I have been doing some Flash debugging for my work at Brevidius. In a video player we have been developing (based on work done by Jeroen Wijering) we needed to escape some URL parameters, since our Flash code could not be certain what would be in the value (and characters like & and = could cause problems). The obvious way to do this is of course the escape function in ActionScript. This function promises to escape all "non-alphanumeric characters", which would solve all our problems.

However, after implementing this, we found spaces magically appearing in our GET parameters. Upon investigation, it turned out that there were plus signs in our actual values (it's Base64 encoded data, which uses the plus sign). The escape function apparently considers a plus sign alphanumeric, since it does not escape it (note that the Flash 10 documentation documents this fact). That shouldn't be a problem, since a plus sign isn't special in a URL according to RFC 1738:

Thus, only alphanumerics, the special characters "$-_.+!*'(),", and reserved characters used for their reserved purposes may be used unencoded within a URL.

(Note that RFC3986 does recommend escaping plus signs, since they might be used to separate variables, but that's not the case here).

However, the URLs we generate in Flash point to PHP scripts and thus pass their variables to PHP. Unfortunately, PHP does not adhere strictly to the RFCs: it interprets plus signs in a URL as spaces. Historically, spaces in a URL were replaced by plus signs, while spaces should really be encoded as %20 nowadays. There is of course a simple way to get Flash (or any other URL-generating piece of code) to work properly with PHP: simply encode plus signs in your data as %2B (which is the "official" way). This makes sure you get a real plus in your $_GET array in PHP, and the problem is resolved.
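The same trick applies to any code that builds query strings by hand. As an illustration (this is not the Flash code from this post, just a sketch), a minimal POSIX shell encoder that does escape the plus sign could look like this:

```shell
# Percent-encode everything except RFC 3986 "unreserved" characters,
# so a '+' in a value becomes %2B and survives PHP's query parsing.
urlencode() {
    string="$1"
    out=""
    while [ -n "$string" ]; do
        c="${string%"${string#?}"}"   # take the first character
        string="${string#?}"          # and drop it from the input
        case "$c" in
            [a-zA-Z0-9._~-]) out="$out$c" ;;
            *) out="$out$(printf '%%%02X' "'$c")" ;;
        esac
    done
    printf '%s\n' "$out"
}

urlencode 'dGVzdA+/=='   # prints dGVzdA%2B%2F%3D%3D
```

The `printf '%%%02X' "'$c"` trick relies on POSIX printf treating a leading quote as "give me the character's numeric value".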

After some searching, and asking around in ##swf on Freenode, I found the encodeURIComponent function, which is similar to escape, but does encode the plus sign. If we use this function, we can again send data containing plus signs to PHP! And since encoding more than needed is still fine according to the specs, there are no downsides (except that you need Flash >= 9.0).

So, if you're developing in Flash, please stop using escape, and use encodeURIComponent instead.

 
0 comments -:- permalink -:- 00:25
Using fcsh as a drop-in replacement for mxmlc

Adobe Flash

I've recently been hacking a bit with Flash (or rather, with Adobe's Flex compiler). This is a freely available commandline compiler, which actually works on Linux as well.

However, out of the box the commandline compiler, mxmlc, is obnoxiously slow. It takes nearly 10 seconds to compile a simple Hello, world! example, and compiling the video player I'm working on takes over 30 seconds. Not quite productive.

This is a documented "flaw" in the mxmlc compiler, caused by the fact that it has to start up a big Java program every time, loading thousands of classes, and because it always recompiles the entire source.

The official solution to get faster compile times is to use the Flex Compiler Shell (fcsh), which is included with the Flex SDK. Basically, it's a caching version of the mxmlc compiler, that keeps running in the background and caches compiled files.

fcsh is intended to be used by IDEs. The IDE starts fcsh and then communicates with it through its stdin/stdout. This means fcsh is really a simple thing, without any support for listening on sockets or properly daemonizing.

Using fcsh speeds up the build time from nearly 40 seconds to just a few seconds (depending on how many changes were made). The first compilation run is still slow, of course.

I've been trying to use fcsh with the ant build system, which makes things a bit tricky. Since there is no long-running process (like an IDE) to keep fcsh running, I need some way to run fcsh in the background, connected to some fifo or network socket, so I can start it on the first compilation run and then connect to it in subsequent compilation runs.

A quick google shows that there are already a few fcsh wrappers to do this, some of which are intended to work with ant as well. At a quick glance, flex-shell-script-daemonizer seems the most useful one. It is written in Java and runs as a daemon (unlike the alternatives I saw, which were Windows-only due to some useless GUI). It has two modes: in server mode it starts fcsh and connects it to a TCP network socket, and in client mode it takes a command to run and passes it over the network to the running server.

There is also some stuff to make ant use fcsh for compiling actionscript files. However, this requires changing the build methods and stuff in build.xml, which I don't want to do (I'm trying to minimize my changes to the sources, since I'm modifying someone else's project).

So, I created a small shell script that can be used as a drop-in replacement for mxmlc. It simply joins all its arguments into a single command and passes that to the flex-shell-script-daemonizer client. If the fcsh daemon is not yet running, it automatically starts it (and does some half-baked daemonization, since neither fcsh nor flex-shell-script-daemonizer does that...)

While writing the script, I also found out that the fcsh "shell" doesn't have any way to quote spaces in arguments in a command (at least, I couldn't find any, and there is no documentation about it). This means that, AFAICS, there is no way to support spaces in paths. How braindead is that... I guess FlexBuilder, the official IDE from Adobe, doesn't use fcsh after all and instead just includes the compiler classes directly...

Of course, just as I finished the script, I encountered flexcompile, which basically does the same thing (and includes the network features of flex-shell-script-daemonizer as well). It's written in Python. However, it does require the path to flex and the mxmlc part of the command to be passed in as arguments, so it's not a 100% drop-in replacement (which I needed due to the way the build system I was using was defined). Perhaps it might be useful to you.

Anyway, here's my script. Point the JAR variable at the jar generated by flex-shell-script-daemonizer and FCSH to the fcsh binary included with the Flex SDK.

#!/bin/sh

# (Another) wrapper around FlexShellScript.jar to provide a mxmlc-like
# interface (so we can just replace the call to mxmlc with this script).

# For this, we'll have to call FlexShellScript.jar with the "client"
# argument, and then the entire mxmlc command as a single argument
# (whereas we will be called with a bunch of separate arguments)

JAR=/home/matthijs/docs/src/upstream/fcsh-daemonizer/FlexShellScript.jar
FCSH=/usr/local/flex/bin/fcsh
PORT=53000

# Check if fcsh is running
if ! nc -z localhost "$PORT"; then
    echo "fcsh not running, trying to start..."
    # fcsh not running yet? Start it (and do some poor-man's
    # daemonizing, since the jar doesn't do that..)
    java -jar "$JAR" server "$FCSH" "$PORT" > /dev/null 2> /dev/null < /dev/null &

    # Wait for the server to have started
    while ! nc -z localhost "$PORT"; do
    echo "Waiting for fcsh to start..."
    sleep 1
    done
fi

# Run the client. Note that this does no quoting of parameters, since I
# can't find any documentation that fcsh actually supports that. Seems
# like spaces in path just won't work...
# We use $* instead of $@ for building the command, since that will
# expand to a single word, instead of multiple words.
java -jar "$JAR" client "mxmlc $*" "$PORT"

 
0 comments -:- permalink -:- 00:25
XDebug javascript snippet for Firefox (and whitespace fix for vim)

I've been toying around with XDebug this week, which allows me to debug PHP scripts interactively, by setting breakpoints, watching variables and stepping through the code from within Vim (it's a bit rudimentary, but it works). Marijn tipped me about this and I used his useful howto to set it up.

I'm still battling with Python and Vim to properly support spaces in filenames; I'll post my solution once I find it (update: I found it, see below). If you want to toy around with XDebug and Vim, you should definitely check out this comment about other (non-space) special characters in urls.

Anyway, I started out this post because I made something pretty and wanted to share it. To use the XDebug extension, you need to pass XDEBUG_SESSION_START=id with the request you want to debug. Because I kept forgetting what this parameter was called exactly, and because it is annoying to insert it at the end of some long URL, especially when the URL also uses # to jump to an anchor, I created a nifty bookmark.


var tmp, anchor, url, args;

tmp = document.location.href.split('#');
if(tmp[1]) 
    anchor = "#" + tmp[1];
else 
    anchor = "";
tmp = tmp[0].split('?');
url = tmp[0];
if (tmp[1]) 
    args = "?" + tmp[1] + "&";
else 
    args = "?";
args += "XDEBUG_SESSION_START=id";
document.location.href = url + args + anchor;

This script splits the current location into url, args and anchor, adds the argument, and puts the location back together. Simple as that. You can create a bookmark from this script by right-clicking this link and picking "Bookmark this link..." or whatever your browser calls it. You can also use the link to test the script (watch the address bar).

I've tested this script in Firefox, but it should work in most browsers, since it doesn't do anything really complex.

Update: I've managed to convince the XDebug vim script to handle spaces in filenames properly. I've put up a fixed version and a diff with the fixes. Both include the above mentioned special characters fix.

 
0 comments -:- permalink -:- 16:21
Copyright by Matthijs Kooijman