All posts by Daniel

Twonky DLNA with video thumbnails

Update: download thumbnails and movie posters from the internet automatically using tw-video-scraper for Twonky.

For streaming videos to a DLNA compatible television, you can use TwonkyServer. It is not the only DLNA software around, but I like it because it runs on a lot of platforms, it is fast, and it works on low-powered machines. The installation instructions for Twonky on Linux are not very complete, and for thumbnails the instructions are missing entirely. You will need some Linux knowledge to set everything up. Luckily, it is not that difficult.

First, download Twonky. If you only visit the main web site, it is easy to miss that Twonky is available for a lot of platforms: only Windows, OS X and Linux i386 are advertised there. On the forum, you can find the NAS versions. If you want to run it on an ARM architecture machine, try the Kirkwood build. Unlike the other ARM builds, it runs on newer Debian versions (the other ARM builds fail on Debian due to library issues).
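Assuming you have downloaded the archive (the filename below is an example; use the file you actually got from the forum), unpacking it into place looks like:

```
mkdir -p /usr/local/twonkymedia
cd /usr/local/twonkymedia
unzip ~/twonky-kirkwood.zip
```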

Extract the zip file to the directory /usr/local/twonkymedia. Next, set the correct permissions on the files by running

chmod 700 twonkym* cgi-bin/* plugins/*

Now let’s make sure that Twonky starts when the system boots. A boot script is supplied, but it is not optimised for Debian. Open the file twonkymedia.sh in a text editor and change the lines

# Default-Start:  3 5
# Default-Stop:   0 1 2 6

to

# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6

Then, make a symbolic link to this script in the /etc/init.d directory, and make sure that the script is executed at boot time. This can be done with the following two commands:

ln -s /usr/local/twonkymedia/twonkymedia.sh /etc/init.d/twonkymedia
update-rc.d twonkymedia defaults

That is it for the standard set-up of Twonky. Next, let’s enable thumbnail support. For this, some extra software is needed. This can be installed with

apt-get install dtach ffmpeg

Edit the file /usr/local/twonkymedia/cgi-bin/ffmpeg.location and replace its content with

/usr/bin

Next, edit the file /usr/local/twonkymedia/cgi-bin/ffmpeg-video-thumb.desc and replace its content with

# generate video thumbs
#(c) 2010 by PacketVideo
exec: ffmpeg -itsoffset -30 -i $infile -vcodec mjpeg -vframes 1 -an -f rawvideo -s 284x160 $outfile
# capabilities
from=video/*
to=image/jpeg
videothumb
synchronous
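You can check that ffmpeg is able to produce a thumbnail by running the same command from the .desc file by hand (the input and output paths are examples):

```
/usr/bin/ffmpeg -itsoffset -30 -i /path/to/video.avi -vcodec mjpeg -vframes 1 -an -f rawvideo -s 284x160 /tmp/thumb.jpg
```

If this produces a valid JPEG, Twonky’s thumbnail generation should work as well.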

Finally, make some edits (again) to the file /usr/local/twonkymedia/twonkymedia.sh. Find the line that says

"$TWONKYSRV" -D

and replace this with

dtach -n /tmp/twonkymedia "$TWONKYSRV"

Now you should be able to start Twonky with the command

/etc/init.d/twonkymedia start

Open a web browser and go to the URL http://nas-ip-address:9000/config. Click on the Transcoding link, select JPEG, and click Save Changes.
Now you can add the folders you wish to make available via Twonky. Click on the Sharing link and add the folder that holds your media. Click Save Changes again, and Twonky will scan your folders for media. With the link Server Status, it is possible to see how far the scan has progressed. Note that this only covers the scanning of the files; the generation of thumbnails is not shown on the status page. The only way to follow it is on the server itself, with the command

dtach -a /tmp/twonkymedia

Use the key combination Ctrl+\ to go back to the terminal. Thumbnail generation will continue in the background.

Squeezebox server

The SheevaPlug is now also running Squeezebox Server, because I bought a Squeezebox Touch. It’s a very nice device for playing music. TwonkyMedia Server was already running on the SheevaPlug, and allowed me to play video and music using a DLNA compatible television. But this set-up is not ideal for music; the television must always be on. The Squeezebox is a dedicated device just for playing music. Unfortunately, Squeezebox uses its own server software, and does not work with standard DLNA. The software that is needed is, however, open source, and can run on a lot of different devices, including Linux. All it needs is Perl and MySQL. Easy deb packages are available for download.

The speed is not great, but certainly workable. It first needs to index all music, which takes around 40 minutes for 6000 files. When browsing using the web interface, it takes a few seconds to load a page. When using the Squeezebox itself, the speed is more than enough.

Another advantage of the Squeezebox is that a lot of it is open source. The server software is open source and written in Perl, and the Touch runs a small version of Linux that you can connect to using SSH. The interface of the Touch is written in Lua, and can be hacked completely. I already wrote my first hack.

Two servers: SheevaPlug and Amazon

After my SheevaPlug died, I was in search of a new (temporary) host. I decided on Amazon EC2, because it allowed me to set up my own server, use the software I want, and configure it the way I need to. The SheevaPlug itself was quickly repaired, but I decided to keep Amazon. Why? Because the server there is actually a bit faster than the SheevaPlug. Although the SheevaPlug could run dynamic web pages (such as this blog), it was not ideal, and pages loaded slowly. Most web pages were served statically.

But running on Amazon EC2 also has its disadvantages, especially with mail. The service offered by Amazon is great for spammers: you can set up a server, send all your spam, and cancel the account. For this reason, Amazon limits the number of mails you can send from an Amazon account. Even worse, many mail providers block all mail from Amazon. Hotmail is one such provider; if you try to send mail to Hotmail, you quickly get a message back that the mail was refused.

To solve this, I had to “split” the server. Web pages are still hosted at Amazon, but the mail server is hosted on the SheevaPlug. It was a bit complicated to set up, but it all works beautifully now. If a web page (on Amazon) now tries to send a mail, it uses Postfix. In Postfix, the relay SMTP host is set to the web address of the SheevaPlug. So all mail is first sent to the SheevaPlug, and only then to the addressee. The web mail is still hosted at Amazon, but it reads the mail from the SheevaPlug. Newly received mail is delivered directly to the SheevaPlug. This can be achieved by setting the domain’s MX DNS record.
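Delivering incoming mail directly to the SheevaPlug comes down to pointing the domain’s MX record at the SheevaPlug’s hostname. In BIND zone-file syntax (the hostname below is an example, not my real one) that looks roughly like:

```
; route incoming mail for the domain to the machine running the mail server
vijge.net.    IN  MX  10  plug.vijge.net.
```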

The SheevaPlug died…

This Friday my SheevaPlug suddenly died. All that was left was a device with a green blinking LED. Nothing worked, including the serial interface, which should always work. Searching the web, it turns out this is a problem with the power supply (PSU), and apparently it happens more often. My SheevaPlug was still under warranty, so NewIT will replace it. Unfortunately, the new PSU is not available until January. So what to do with all the sites that I run on my SheevaPlug’s web server?

I decided to use Amazon EC2. You can place your own virtual server in Amazon’s cloud, and when you sign up, you get one micro server for free. Such a micro server is even a little bit faster than the SheevaPlug, so that is an added bonus. Because you run your own server, you can choose whatever OS you want (I chose Debian; only Lenny was available though), and you can set it up the way you want it. So it was an easy process of copying all the data and configuration files to the new server. Within a couple of hours, the web server was up and running again. There is some cost for data transfer out of the server (so for every web request), but it is a ridiculously low $0.01 per GB. After two days I had used 7 MB, so it will take a while before I have to pay 2 cents.

It’s not all bad, because when I had to set up a new server, I could use my own articles. I could update some articles to include extra information or correct some errors.

Update 1: First problem found. Hotmail refuses all mail sent from Amazon EC2 servers, because the IP range is listed in Spamhaus’ Policy Block List. So you would need to use an external SMTP relay host.

Update 2 (28 December): I received a new PSU from NewIT today. Installing this was easy. The SheevaPlug is now working again. Very quick response from NewIT.

See unread mail on SSH login

In the previous post, I explained how to set up your mail server. Now an extra tip on how to read your mail. If you have your own server, you will probably log in via SSH to administer it. Would it not be nice to be greeted with a message if you have new mail? That’s quite easy. Edit the files /etc/pam.d/login and /etc/pam.d/sshd and find the line with pam_mail.so on it. Make it look like

session    optional   pam_mail.so standard dir=~/.mail

If you have another directory where Dovecot stores your mail, change it. To read your mail, type

mail

to read it. It might be necessary to first install the package heirloom-mailx by typing

aptitude install heirloom-mailx

Setting up a mail server

One of the most difficult things to set up – in my experience – was a working mail server. It’s not that easy to understand the mail transfer agents, sendmail applications, SMTP servers, IMAP servers, etc.

I’ll try to describe all the steps to get a working server. This is largely based on The Perfect Debian Setup.

The first thing is to install and configure Postfix. Postfix is a mail transfer agent (MTA). It is the program that can send and receive mail from your server to any other server. However, it does not provide a way to access (read) that mail. It does provide the SMTP server for sending mail.

First, you need to install a bunch of stuff:

aptitude install postfix libsasl2-2 sasl2-bin libsasl2-modules

When asked about the type of configuration, choose ‘Internet site’, and type your server’s name as the system mail name. After the installation is finished, you can do the actual configuration. Open the file /etc/postfix/main.cf and change the following settings (add the option if it is not already there):

smtpd_sasl_auth_enable = yes
smtpd_sasl_local_domain = vijge.net
smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,reject_unauth_destination
smtpd_sasl_security_options = noanonymous
broken_sasl_auth_clients = yes
myhostname = server.vijge.net
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = vijge.net, server.vijge.net, localhost.vijge.net, localhost
relayhost = [external SMTP server]
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_command = /usr/lib/dovecot/deliver
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all

This sets how you can access the SMTP server, how the server should behave, and what it should do with mail it receives. The one thing you want to avoid is an open relay server. This is a server that anyone can access and use to send mail. Spammers use these, and if your server is an open relay, some ISPs might block it. So pay attention to security. The setting smtpd_recipient_restrictions controls this.

  • permit_mynetworks means that anyone with an IP address listed in mynetworks can use the SMTP server without any login.
  • permit_sasl_authenticated covers the case where you are outside your own network but still want to use the SMTP server. That is allowed, but only if the user logs in first; I’m using a secure (SASL) login for that.
  • reject_unauth_destination rejects everything else, unless the mail is addressed to a local user. So anyone can deliver mail to users on this server without authenticating, but no one can use it as a relay to other destinations.

I set the option relayhost to the SMTP server of my ISP. All outgoing mail is first sent to this server. You can also run without a relay server; the SMTP server will then try to contact the receiving mail server directly. Just leave the option blank if you don’t want to use it.

Next, you have to start SASL, which takes care of the secure login. Edit the file /etc/default/saslauthd and change START=no to START=yes.

Start saslauthd with

/etc/init.d/saslauthd restart

and restart postfix with

/etc/init.d/postfix restart

You can test the connection with the command:

telnet localhost 25
ehlo localhost
quit

That should output a list of the capabilities of the mail server. If you see 250-STARTTLS, the secure login is available. If you see 250-AUTH PLAIN LOGIN, the password-less login is also available (for local users).
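To verify that the TLS side actually works, you can open an encrypted SMTP session with openssl’s s_client (replace localhost with your server’s address):

```
openssl s_client -starttls smtp -connect localhost:25
```

If the server’s certificate details are printed, STARTTLS is working.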

Postfix is now configured, and you should be able to send mail. But of course you want to read your mail too. You use an e-mail program for that (such as Outlook or Thunderbird), but that program needs a way to access the mail on your server. There are several options for that: the local mail spool, POP, or IMAP. I am using IMAP. It is supported by all (web) mail programs, and leaves the mail on the server (it doesn’t download the mail locally). Because the SheevaPlug is small, I wanted a lightweight mail server, so I chose Dovecot. You can install the IMAP part of Dovecot by running:

aptitude install dovecot-imapd

Your server can receive e-mails (through Postfix), but it does not yet know what to do with them. It needs to store them somewhere. This is the job of the local delivery agent (LDA). Dovecot includes an LDA, but it must be enabled in the configuration. You need to tell Dovecot where it must store new mail, and where it can find the mail when you make an IMAP connection to the server. Open the file /etc/dovecot/dovecot.conf and edit the following options:

  • protocols: a list of protocols that are enabled. Separate them with a space. I enabled IMAP, secure IMAP, and sieve, which can be used to easily create message filters (more on that later).
    protocols = imap imaps managesieve
  • ssl: set ssl to yes, to enable encrypted login through secure IMAP.
    ssl = yes
  • Certificates and keys: a secure login needs certificates and keys. Certificates are made automatically during the installation of Dovecot.
    ssl_cert_file = /etc/ssl/certs/dovecot.pem
    ssl_key_file = /etc/ssl/private/dovecot.pem
  • Mail location: here you can configure where to store your mail, and in what format. Dovecot can use different formats, such as maildir and mbox. mbox uses a single file, whilst maildir stores every message as a separate file in a directory. With Dovecot, maildir is supposed to be faster, so I have used that. I store the mail in the user’s home directory, under a hidden folder, but you can store it anywhere you like, and even make use of replacement variables.
    mail_location = maildir:~/.mail
  • LDA: by default, Dovecot’s LDA is not enabled, so you need to uncomment that section. Just remove the # before the line
    protocol lda {

    Find the closing } and remove the # before that too. I enabled the sieve plugin, so the LDA can filter messages:

    mail_plugin = sieve
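As a small taste of what sieve makes possible (more on that in a later post), a filter file – by default ~/.dovecot.sieve; the address and folder name below are just examples – could file mailing list traffic into its own IMAP folder:

```
require ["fileinto"];

# move mailing list traffic into a separate folder
if address :is "to" "list@example.org" {
    fileinto "Lists";
}
```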

Now you can restart Dovecot with

/etc/init.d/dovecot restart

Now you need to create a mailbox directory for each user. You can do this with the command

maildirmake.dovecot .mail

This assumes you are in the user’s home directory and want to create a mailbox directory named .mail.

Hopefully everything works now. You can log in using a regular system user and access your mail via IMAP. You should also be able to send mail through your SMTP server.

In the next part of this tutorial, I’ll explain how to install some extra stuff, such as webmail, and an option to display new messages when logging in to the SSH shell.

Installing a new kernel on Debian

Update: If you install the kernel using another method, it can simply be updated with the standard APT update tool. However, there is a bug in the kernel update script. After the update, the USB drive is no longer recognised, and fails with the error sd_mod: Unknown symbol scsi_verify_blk_ioctl. To fix this, you must flash the kernel to the internal memory with the command below (it might be necessary to install flash-kernel first):

sudo flash-kernel

Original post: I installed my SheevaPlug with Debian on an SD card using the alternative Debian install method. This works great, with one minor drawback: the kernel is never updated by Debian package management. Luckily, it’s not that complicated to install a new kernel by hand. Sheeva with Linux publishes kernels for the SheevaPlug, and there is also a script to update the kernel. On my SheevaPlug, however, I could not use this script: the kernel is saved on a small boot partition of the SD card, and this partition is too small to copy a new kernel to. Doing all the steps manually is straightforward too.

  1. By default, the boot partition of the SD card is not mounted, so mount it to /boot:
    mount /dev/mmcblk0p1 /boot
  2. Make backups of the files that will be overwritten. This includes the file /boot/uImage and the folder /dev/firmware.
  3. Go to Sheeva with Linux and choose a kernel; the latest will be fine. At the time I did this, that was 2.6.35.2. Download the files sheeva-*-uImage, sheeva-*-System.map, and sheeva-*-Modules.tar.gz, and save them in your home directory.
  4. Execute the following commands to extract the new modules and install the symbol table (replace {ver} with the kernel version you downloaded, e.g. 2.6.35.2):
    tar x -C / --overwrite -zf sheeva-{ver}-Modules.tar.gz
    depmod -eF /boot/sheeva-{ver}-System.map {ver}
  5. Remove the old kernel and copy the new kernel:
    rm /boot/uImage
    mv sheeva-*-uImage /boot/uImage
  6. Reboot with the command
    reboot

The SheevaPlug should now reboot. After a minute or so, you should be able to log in again. Use the command

uname -a

to check whether the new kernel is running. If something goes wrong, you can remove the SD card from the plug, put it in another computer, and restore the backup files. I did everything from an SSH session. If you want more information (e.g. the error messages during boot), you can do the upgrade over a USB terminal connection.

AWstats statistics with sub domains on lighttpd

One of the best tools to analyse web visits is AWstats. This tool gives very detailed information about the visitors of your web site. It analyses the log file of the web server and turns it into very nice statistics. In the default configuration, it has only limited support for lighttpd: you can set the log type to Apache, and you will get statistics of some kind, but this lumps all sites into one. If you have sub domains, some tweaking is needed to get separate statistics for each sub domain.

I installed AWstats using the Debian package manager. This installs the configuration files in /etc/awstats. There are two files there: awstats.conf contains the basic settings, and awstats.conf.local is imported by awstats.conf and can be used to override certain settings.

For each of the sub domains, a separate configuration file is needed, in a specific format: awstats.sub.domain.net.conf. For example, the config file for this site would be awstats.server.vijge.net.conf. The awstats.conf file will gather statistics for all sub domains together, and the awstats.conf.local file will contain all the general settings.

First, delete the current awstats.conf.local file, and copy the current awstats.conf file to awstats.conf.local. Next, make some changes to the awstats.conf.local file:
1. Change the LogFile variable to point to the lighttpd log file. In the default Debian configuration, this would be

LogFile="/var/log/lighttpd/access.log"

2. Change the LogFormat to the log settings for lighttpd.

LogFormat="%host %virtualname %logname %time1 %methodurl %code %bytesd %refererquot %uaquot"

3. Place a comment character (#) before SiteDomain and HostAliases.

#SiteDomain=""
#HostAliases=""

4. At the end of the file, there is an include of the awstats.conf.local file. Because this already is the awstats.conf.local file, the include has to be removed, to make sure there is no infinite include loop.

#Include "/etc/awstats/awstats.conf.local"
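The LogFormat in step 2 matches lighttpd’s default access log layout, which includes the virtual host name. If you have customised accesslog.format in lighttpd.conf, it needs to look roughly like this (a best-guess sketch of the default; check your own configuration) for the fields to line up:

```
accesslog.format = "%h %V %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\""
```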

That is it for the general configuration. Now create the configuration for each sub domain, and for the site as a whole. Open the file awstats.conf and replace all its contents with

SiteDomain="domain.net"
HostAliases="localhost 127.0.0.1 REGEX[domain\.net$]"

Include "/etc/awstats/awstats.conf.local"

Add your own domain name in the regex expression, so all sub domains are matched. For each sub domain you want to track, make a new file with the name awstats.sub.domain.net.conf and place the following code in it

SiteDomain="sub.domain.net"
HostAliases="sub.domain.net"

Include "/etc/awstats/awstats.conf.local"
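If you track many sub domains, these per-domain files can be generated in one go with a small shell loop (the sub domain and domain names below are examples):

```shell
# write an AWstats config file for each sub domain in the list
for sub in www blog photos; do
  cat > "awstats.$sub.domain.net.conf" <<EOF
SiteDomain="$sub.domain.net"
HostAliases="$sub.domain.net"

Include "/etc/awstats/awstats.conf.local"
EOF
done
```

Run it inside /etc/awstats so the files end up next to the main configuration.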

The configuration is now done. Now you need to make sure your web server knows about AWstats. First, create a separate directory in your www folder (e.g. /var/www/awstats). In this directory, make a symbolic link to the awstats executable:

ln -s /usr/lib/cgi-bin/awstats.pl

Edit the file /etc/lighttpd/lighttpd.conf to include

alias.url += ( "/awstats-icon" => "/usr/share/awstats/icon" )
alias.url += ( "/awstats" => "/var/www/awstats" )
$HTTP["url"] =~ "^/awstats($|/)" {
    cgi.assign = (
        ".pl" => "/usr/bin/perl",
        ".cgi" => "/usr/bin/perl"
    )
}

If you visit the AWstats page, it will display the statistics for the current sub domain. For example, if you view AWstats from the domain server.vijge.net, the statistics for that domain are displayed. To switch to a different domain, simply append an argument to the URL, e.g.

awstats.pl?config=daniel.vijge.net

The part behind config= corresponds to the name given to each config file, between awstats. and .conf. To view the statistics for all domains together (as configured in awstats.conf), supply an invalid config parameter, e.g.

awstats.pl?config=all

I created a simple page with links to the statistics for each sub domain, so I can easily browse to them.

If you installed AWstats under Debian Squeeze, it runs as a cron job, so all the statistics are updated every 10 minutes. Under older versions of Debian this script is not included, but you can download it here. Save it as /usr/share/awstats/tools/update.sh, make it executable, and change the file /etc/cron.d/awstats to read

*/10 * * * * www-data [ -x /usr/share/awstats/tools/update.sh ] && /usr/share/awstats/tools/update.sh

Convert myGallery to NextGen Gallery

A lot of web logs use photo galleries. The web logs on this server used to run myGallery, but this WordPress plugin wasn’t updated for a long time. (Recently there was an update for WordPress 3; I haven’t tested this one yet.) So I needed a new gallery plugin. The NextGen Gallery plugin has all the features that I need, and best of all, it’s reasonably compatible with myGallery. So I decided to write a script that converts all myGallery galleries to NextGen galleries. It also updates all the pages that display a gallery. Finally, it converts the thumbnail directory, so NextGen Gallery can reuse the existing thumbnails. Of course, you can also let NextGen Gallery generate new thumbnails, but since my server isn’t that fast, reusing existing thumbnails is a good idea.

If you are running myGallery and want to upgrade to NextGen Gallery, these are the steps to perform. First, download NextGen Gallery. Disable myGallery, and enable NextGen Gallery. Next, copy and paste the script below into a PHP file. Modify the configuration and run the script. This will output a series of SQL statements, which you can run on the MySQL database. For example, you can use phpMyAdmin to run the queries easily.

// CONFIG

// the prefix for your WordPress tables (default: wp_)
$WP_PREFIX = 'wp_';

// the directory on the server where your photos are stored
$GALLERYBASEPATH = 'wp-content/myfotos/daniel';

// username for the database
$USERNAME = 'username';

// password for the database
$PASSWORD = 'password';

// name of the database containing WordPress
$DATABASE = 'wordpress';

// host for the database (99% chance it will be localhost)
$HOST = 'localhost';

// END OF CONFIG

if (substr($GALLERYBASEPATH,-1)!='/' && $GALLERYBASEPATH!='')
{
        $GALLERYBASEPATH = $GALLERYBASEPATH . '/';
}

$link = mysql_connect($HOST,$USERNAME,$PASSWORD) or die('Cannot connect to database '.$HOST);
$x = mysql_select_db($DATABASE,$link) or die('Cannot select database '.$DATABASE);

// convert the gallery
$result = mysql_query('SELECT * FROM '.$WP_PREFIX.'mygallery') or die('Cannot execute queries. Is the prefix correct?');
while ($row = mysql_fetch_assoc($result))
{
        $row['longname'] = addslashes($row['longname']);
        $row['galdescrip'] = addslashes($row['galdescrip']);
        echo 'INSERT INTO '.$WP_PREFIX.'ngg_gallery (gid,name,path,title,galdesc,pageid,previewpic,author) VALUES ('.$row['id'].',\''.$row['name'].'\',\''.$GALLERYBASEPATH.$row['name'].'\',\''.$row['longname'].'\',\''.$row['galdescrip'].'\',\''.$row['pageid'].'\',\''.$row['previewpic'].'\',0);' . '<br />';
}

// convert the images
$result = mysql_query('SELECT * FROM '.$WP_PREFIX.'mypictures INNER JOIN '.$WP_PREFIX.'mygprelation ON '.$WP_PREFIX.'mypictures.id='.$WP_PREFIX.'mygprelation.pid INNER JOIN '.$WP_PREFIX.'mygallery ON '.$WP_PREFIX.'mygallery.id='.$WP_PREFIX.'mygprelation.gid');

while ($row = mysql_fetch_assoc($result))
{
        $row['description'] = addslashes($row['description']);
        echo 'INSERT INTO '.$WP_PREFIX.'ngg_pictures (pid,galleryid,filename,description) VALUES ('.$row['pid'].','.$row['gid'].',\''.$row['picturepath'].'\',\''.$row['description'].'\');' . '<br />';
}

// convert pages
$result = mysql_query('SELECT id,pageid FROM '.$WP_PREFIX.'mygallery');
while ($row = mysql_fetch_assoc($result))
{
        echo 'UPDATE '.$WP_PREFIX.'posts SET post_content=\'[nggallery id='.$row['id'].']\' WHERE ID='.$row['pageid'] . ';<br>';
}

// convert page with master gallery
$result = mysql_query('SELECT * FROM '.$WP_PREFIX.'posts WHERE post_content=\'[mygallistgal]\'');
while ($row = mysql_fetch_assoc($result))
{
        echo 'UPDATE '.$WP_PREFIX.'posts SET post_content=\'[album template=extend]\' WHERE ID='.$row['ID'] . ';<br>';
}
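The script is meant to be viewed in a browser, which is why it separates the statements with <br />. If you would rather run it with the PHP command-line interpreter (the PHP filename below is an example), the HTML line breaks can be stripped with sed to leave plain SQL:

```shell
# run the conversion script on the command line and strip the HTML line breaks:
#   php convert-mygallery.php | sed 's#<br */\{0,1\}>##g' > convert.sql
# the sed expression on its own, applied to one example line of script output:
printf 'INSERT INTO wp_ngg_gallery (gid) VALUES (1);<br />\n' | sed 's#<br */\{0,1\}>##g'
```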

There are a number of caveats you should be aware of:

  • Gallery pages are updated so that they contain the new gallery tag. If there is any other text on a page, that text is lost.
  • The script looks for the master gallery page (the page with links to all galleries). It will only find it if this page does not contain any other text. If the last code block does not give any results, you have to change this page manually.

If you visit your gallery pages now, they should work; only the thumbnails are not displayed. To fix this, I’ve made a second script.

error_reporting(E_ALL);
$prefix = '/home/www/wordpress_mu/wp-content/myfotos/daniel';
$dirs = scandir($prefix);


foreach ($dirs as $dirname) {
        if ($dirname=='.' || $dirname=='..') continue;
        $dir = $prefix.'/'.$dirname;
        if (is_dir($dir)) {

                if ($dh = opendir($dir)) {
                echo $dir . " open<br>";

                        if (is_dir($dir.'/tumbs')) {
                                if ($dh2 = opendir($dir.'/tumbs')) {
                               
                                        while (($file = readdir($dh2)) !== false) {
                                                if (substr($file,0,3)=='tmb') {
                                                        $newname = str_replace('tmb','thumbs',$file);
                                                        $r = rename($dir.'/tumbs/'.$file,$dir.'/tumbs/'.$newname);
                                                        echo "filename: $file to $newname : rename: $r<br>";
                                                }
                                        }
                                        // rename tumbs dir
                                        $r = rename($dir.'/tumbs',$dir.'/thumbs');
                                        echo $dir . " rename " .$r."<br>";
                                }
                                closedir($dh2);
                               
                        }
                        closedir($dh);
                }
        }
}

Just change the prefix variable to point to the path where all galleries are located. This script will rename all thumbnail directories immediately.

A script to fix Geo Mashup update

On one of my web logs, I use a map to display the route of my travels. Each post is geo-tagged, so places can be connected with lines. It’s a very nice feature for a travel blog, thanks to the Geo Mashup plugin for WordPress. I recently updated the Geo Mashup plugin to a new version. For this version, the locations had to be converted to the new format. The result of this upgrade was that my route was no longer correct: posts seemed randomly connected to each other.

After some investigation, I found that Geo Mashup uses a special table to link posts and locations to each other. Each link also has a date and time connected to it, and the results are ordered by this date. During the conversion, each link was given the date of the conversion. Therefore, the results were displayed in random order.

I wrote a quick PHP script to fix this problem. This script looks at the time the original post was made, and updates the Geo Mashup table.

// CONFIG

// username for the database
$USERNAME = 'username';

// password for the database
$PASSWORD = 'password';

// name of the database containing WordPress
$DATABASE = 'wordpress';

// host for the database (99% chance it will be localhost)
$HOST = 'localhost';

// END OF CONFIG

$link = mysql_connect($HOST,$USERNAME,$PASSWORD) or die('Cannot connect to database '.$HOST);
$x = mysql_select_db($DATABASE,$link) or die('Cannot select database '.$DATABASE);

$result = mysql_query("SELECT * FROM wp_geo_mashup_location_relationships");
while ($row = mysql_fetch_assoc($result))
{
        // look up the date of the original post
        $result2 = mysql_query("SELECT post_date FROM wp_posts WHERE id=".$row['object_id']);
        $row2 = mysql_fetch_assoc($result2);
        // copy that date into the relationship table, for this post only
        mysql_query("UPDATE wp_geo_mashup_location_relationships SET geo_date='" . $row2['post_date'] . "' WHERE object_id=" . $row['object_id']);
}

After running this script, the problem was fixed, and my map displayed the correct route.