Three days ago, my iPod shuffle started acting weird. Newly synced files would not show up, and scanning for stale and orphaned tracks turned up lots of hits. Unfortunately, I was in a hurry at the time, so I deleted the iPod_Control folder on the shuffle and asked Amarok to re-initialize it. That only made things worse – nothing would play after that. The shuffle’s indicator lights showed there were no tracks on the device. I finally got some free time this afternoon and tracked down the problem.

First, I synced some songs with iTunes in my Windows XP VM and the shuffle worked just fine, so I concluded that the problem was with how Amarok was touching the shuffle’s database. A little googling later, I found out libgpod (the library that teaches Amarok how to talk to iPods) was to blame. I’d done an update earlier in the week that must have introduced some regressions, so I downgraded libgpod from 0.7.93-0ubuntu1 to 0.7.2-1ubuntu1 and the problem was solved.
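In case anyone needs to do the same, you can install a specific older version on Ubuntu straight from the command line and then hold it. A minimal sketch, assuming the binary package is called libgpod4 (check the actual name with apt-cache policy first; that part is my guess):

$ apt-cache policy libgpod4   # confirm the package name and the available versions
$ sudo apt-get install libgpod4=0.7.2-1ubuntu1
$ sudo aptitude hold libgpod4   # stop updates from pulling the broken version back in

The hold can be undone with sudo aptitude unhold libgpod4 once a fixed build lands.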


DNS Crash Course
The Domain Name System (DNS) resolves domain names like www.wordpress.com into numeric IP addresses (74.200.247.60) that computers can understand. Your browser typically hands website names over to a DNS server and receives IP addresses in return. Most Internet Service Providers run a DNS server for their customers to help speed up browsing and downloads.
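You can watch one of these lookups happen with dig (from the dnsutils package on Ubuntu); the address after the @ is just an example resolver, OpenDNS in this case:

$ dig www.wordpress.com @208.67.222.222   # drop the @server part to use your default resolver

The “;; Query time:” line near the bottom of the output shows how long the lookup took, which is exactly the number Namebench measures, over and over.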

In Comes Namebench
Namebench is a DNS benchmarking application available for Linux, Windows and Mac OS X. It uses either your web browser’s history or a standardized test data set to find out which DNS service returns the fastest results for your location.

Installing Namebench
Download and run Namebench from the Google Code repository here.

Ubuntu Users
The people at GetDeb have packaged a deb for Namebench. You can add their repository here.

Using Namebench
Close all internet-aware applications before you start Namebench. We don’t want those applications competing with Namebench for your bandwidth and distorting the results. Launch Namebench (Internet -> Namebench for Ubuntu users) and you’ll see an interface like this:

Namebench Application Window

The nameservers listed are the DNS servers you are currently using. You can add other nameservers to this list (separated by commas or spaces). The default settings are fine for most people, so just click Start Benchmark. Google has a more detailed explanation of the settings here. The test takes 10 – 20 minutes, so you can take a sandwich break or something. 🙂

Namebench Results
After the test completes, your web browser starts up to show you the results.

Namebench Results
As you can see, my primary DNS server’s performance is pretty sweet. That’s to be expected though… it’s a local server, so some cached queries must have been involved. On the right, Namebench recommends the optimum nameserver setup for my machine. It seems I’ll have to switch my fall-back nameservers from OpenDNS to one in the Netherlands and another in Kenya.

This table shows the DNS servers that were used in the test, their response times, and any notes and errors. I’ve got some tweaking to do, it seems.

Moving on…

Average and Fastest Responses

This graph shows the average and fastest response times for the top 10 nameservers.

Response Distribution Chart (First 200ms)

This one shows the percentage of times a response was received from a server within the first 200 milliseconds.

Response Distribution Chart (Full)
This last graph shows the percentage of times a response was received from a server for the entire test duration.

Making Changes
There’s a great article here on how to change your DNS servers in Ubuntu; use the fastest servers from your Namebench test. Windows and Mac users can take a look here to learn how to change DNS settings.
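For the impatient, here’s a minimal sketch of the Ubuntu change (a caveat: if NetworkManager or DHCP manages your connection, it may overwrite this file, so the article above is the safer route):

sudo cp /etc/resolv.conf /etc/resolv.conf.backup   # keep a backup, just in case
gksudo gedit /etc/resolv.conf

Then list your servers in order of preference, one per line. These OpenDNS addresses are placeholders; use the winners from your own test:

nameserver 208.67.222.222
nameserver 208.67.220.220

Have fun. 🙂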

Posted by: Odzangba | February 20, 2010

How To Free Reserved Space On EXT4 Partitions

This one came in handy when I bought a 1TB hard drive last week. Most Linux distributions reserve 5% of a new partition for the root user and system services. The idea is that even when you run out of disk space, the root user should still be able to log in and system services should keep running… neither of which is possible if the root partition is completely full. This policy may have been appropriate in the 90s when hard disk capacities were relatively low, but this is 2010 and one can get a 1TB hard drive for a couple of hundred Ghana Cedis. 5% of that is about 51GB, and those system services need only a couple of hundred megabytes.

So I decided to reclaim all that disk real estate with this command:

sudo tune2fs -m 0 /dev/sdb1

This sets the reserved blocks to 0%. Since this is an additional storage drive, I have no need to reserve disk space for system services on it. You can verify that this actually worked with:

sudo tune2fs -l /dev/sdb1 | grep 'Reserved block count'
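If you’re curious how much space the change reclaims, grab the block size and the old reserved block count (run this before the -m 0 command, since afterwards the count will read 0) and multiply the two:

sudo tune2fs -l /dev/sdb1 | grep -E 'Block size|Reserved block count'

For example, with made-up numbers: 12,207,031 reserved blocks at 4,096 bytes each is roughly 50GB.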

As usual, modify /dev/sdb1 to suit your partition setup. Have fun. 😀

UPDATE: Thanks to Msz Junk for pointing out the typos in the file paths.

What I really should have done was simply link to khattam’s article, because he did a pretty good job of describing the solution to this error; but for my own archives, here goes… I upgraded my box to Karmic Koala this evening and for some reason, ubiquity-frontend-kde flipped out and borked the package management system. When I tried to open Synaptic, I got this:

Screenshot of Synaptic’s error message

So I tried

sudo dpkg --configure -a

and

sudo apt-get install -f

and even tried messing with these:

/var/lib/dpkg/info/dbconfig-common.postinst

/var/lib/dpkg/info/dbconfig-common.postrm

but the system wouldn’t budge. Then I found khattam’s article and realized I was looking in the wrong files. To solve this error, close all package management software, and back up and edit the /var/lib/dpkg/status file with the following commands:

sudo cp /var/lib/dpkg/status /var/lib/dpkg/status.old

gksudo gedit /var/lib/dpkg/status

Here comes the dicey part. Search for the package causing all this brouhaha and delete its entry. Please be very careful here and make sure a single blank line still separates the package entries above and below the one you deleted. Here are screenshots of my file before and after selecting the appropriate package entry.

Before / After
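For reference, an entry in the status file looks roughly like this (a made-up example; a broken package’s Status line will usually read something other than “install ok installed”):

Package: some-broken-package
Status: install ok half-configured
Priority: optional
Section: admin
Architecture: i386
Version: 1.2.3-0ubuntu1
Description: a hypothetical entry for illustration
 continuation lines in a field start with a single space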

If you did this right, you should be able to open Synaptic and remove the offending package (if you don’t want it any more) or re-install it. I don’t understand why the developers couldn’t cook up a more graceful way for dpkg to show its displeasure.

For weeks now, my Jaunty box has been locking up unexpectedly, and only a hard reset would bring it back to life. Since it did not happen often, I just brushed it off… to be completely honest, I was too lazy to track down the problem. 😀 But my box locked up again a few minutes ago as I was waiting on a very important download, and after I’d exhausted my vocabulary of swear words (and seriously contemplated throwing my monitor through the window), I decided I’d had enough. I examined my logs and noticed these errors around the time the lock-up kicked in:

compcache: Error allocating memory for compressed page: 37691, size=28
compcache: Error allocating memory for compressed page: 126848, size=233
compcache: Error allocating memory for compressed page: 106315, size=40

So I googled compcache and found out that it wasn’t supposed to be active on permanent installations like mine. Basically, it creates a compressed swap device in RAM, which helps computers with low memory comfortably load a livecd session. The important thing is, it should only kick in during a livecd session. It’s also quite unstable. Read more about compcache here.

To find out if compcache is active on your system, do:

sudo swapon -s
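On an affected box the output includes a ramzswap line. Mine is clean now, so this is a mock-up of roughly what it looks like:

Filename                Type        Size     Used   Priority
/dev/sda5               partition   2096440  0      -1
/dev/ramzswap0          partition   255992   0      100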

If you see /dev/ramzswap, compcache is plotting to lock up your box when you least expect it. To permanently disable compcache, do:

sudo rm -f /usr/share/initramfs-tools/conf.d/compcache && sudo update-initramfs -u

Then either reboot, or turn the swap device off manually with swapoff, appending the device number you saw in the swapon output. For example:

sudo swapoff /dev/ramzswap1

The moral of the story is, don’t be lazy… it took me about three minutes to track down the problem, fix it and get on with my life. 🙂 Now I have to restart this 700MB download. 😦

Posted by: Odzangba | September 12, 2009

Fix Dolphin Thumbnail Previews

Dolphin, the KDE 4 file manager, needs a little help in order to display video thumbnails. It uses mplayerthumbs to generate them, but unfortunately mplayerthumbs is not pulled in as a dependency when you install Dolphin. I don’t know what the developers were thinking; video thumbnails are integral to any modern desktop, and it doesn’t make sense to ask users to manually install an extra package to enjoy this feature. Anyway, do a quick

sudo aptitude install mplayerthumbs

on the terminal, or search for and install mplayerthumbs in the Synaptic package manager, and Dolphin will be able to generate thumbnails for your video collection.
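One caveat: if Dolphin already tried (and failed) to generate previews before mplayerthumbs was installed, it may show stale cached thumbnails. As far as I know, KDE 4 follows the freedesktop.org thumbnail spec, so clearing the cache forces previews to be regenerated:

rm -rf ~/.thumbnails/normal ~/.thumbnails/large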

Posted by: Odzangba | September 1, 2009

Using Google Talk With Kopete

I got bored over the weekend and did a fresh install of Jaunty, in part because I wanted to try out backing up and restoring application settings and other data. It worked out pretty well. I last used Kopete in 2006 and I was a little curious, so I installed it and fired it up. Adding a Yahoo Messenger account worked flawlessly, but Google Talk choked on some weird SSL error. As it turned out after some googling, one needs a package called qca-tls (on Ubuntu) to get Kopete to play nice with Google Talk. Other distributions have slightly different names for this package:

Gentoo       app-crypt/qca-tls
Mandriva     libqca1-tls
openSUSE     qca

On Ubuntu, a quick

sudo aptitude install qca-tls

on the terminal will do the trick. Or you can search for qca-tls in Synaptic.

To add a Google Talk account:

  • Settings -> Configure
  • Accounts -> Add Account
  • Select Jabber
  • Next
  • On the Basic Setup tab, your account information should look like this:

    Jabber ID:     xxxx@gmail.com (your gmail address)
    [ ] Remember password     (Ticking this makes it easier to login later)
    Password:     xxxxx (Enter your password)

  • The Connection tab should look like this:

[X] Use protocol encryption (SSL)
[X] Allow plain-text password authentication
[X] Override default server information
Server:  [talk.google.com] Port:  [5223]

You’re done. 🙂
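If Kopete still throws SSL errors after installing qca-tls, it’s worth ruling out a blocked port. This is just a quick connectivity test, nothing Kopete-specific:

telnet talk.google.com 5223

A “Connected to …” line means the server is reachable; press Ctrl+] and type quit to get back out.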

Posted by: Odzangba | May 25, 2009

Greetings from Ho

It’s a long weekend (thanks to African Unity Day) and I’m relaxing in sleepy Ho… it’s nice to get away from the constant hustle and bustle of Accra. Anyway, I’ve had a lot of hard disk trouble lately. First I ran out of space, then my system hard disk died. As if that wasn’t enough, the next day my spare hard disk died too… making my life doubly miserable. You see, I hadn’t backed up my data… I was out of space, after all, and the DVD shop is out of my way – I kept putting it off. So I had to raid my younger brother’s piggy bank for a new hard disk. I’d like to think that “I was not attached to those hard disks” but it really grinds my gears the way they both failed in rapid succession. Now if I had my way, somebody at Seagate would be in a lot of pain right now. How is it that the world’s largest hard disk manufacturer has so many defective products on shelves? For what it’s worth, I’m never buying a Seagate hard drive again… even though Barracuda is such a cool name. Western Digital hard drives are – in my experience – much more reliable. But they really should do something about the name “Caviar.” 😀

I now have a lot more hard disk space and a fresh install of Ubuntu 9.04. Moral of the story… back up your data and don’t buy Seagate!

Posted by: Odzangba | March 25, 2009

GZIP vs. BZIP2 vs. LZMA

There’s no nicer way to say it… I’m running out of disk space. I have three options: buy a larger hard drive, delete some files to free up space, or compress some of the data. Buying a larger hard drive is the best option in the long term but “in the long term, we’re all dead” 😀 and deleting files is painful for me… I’m a serial pack rat. So I decided to explore compression as a way out of my disk space headaches. First, I had to find the most efficient compression algorithm, a task that, I soon found out, is not easy. I read several blogs and websites and everybody had something good to say about their favorite algorithm. But one thing was clear: the GZIP, BZIP2 and LZMA compression algorithms were leading the pack. To satisfy my own curiosity and determine for myself which was the most efficient, I decided to run some benchmarks. To be honest, I’d been hearing some good things about the LZMA compression algorithm, so I was hoping it would live up to the hype.

These benchmarks were conducted on a 2.53 GHz dual-core processor with 2GB RAM and a 5400 RPM Seagate Barracuda IDE hard disk. I also set each algorithm to maximum compression.

Version information:
gzip 1.3.12
bzip2 1.0.5
LZMA 4.32.0beta3
LZMA SDK 4.43

For starters, I threw a 1GiB file containing nothing but binary zeros at them.

$ dd if=/dev/zero of=test.zero bs=1024M count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 187.978 s, 5.7 MB/s

Now the fun starts.

GZIP
$ /usr/bin/time -f "%U seconds CPU %P" gzip -c9 test.zero > test.gz
12.36 seconds CPU 99%

BZIP2
$ /usr/bin/time -f "%U seconds CPU %P" bzip2 -c9 test.zero > test.bz2
32.07 seconds CPU 98%

LZMA
$ /usr/bin/time -f "%U seconds CPU %P" lzma -c9 test.zero > test.lzma
873.79 seconds CPU 96%
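Incidentally, if you want to reproduce these runs without typing each command out, a simple loop does the trick; this assumes GNU time at /usr/bin/time, same as above:

$ for tool in gzip bzip2 lzma; do /usr/bin/time -f "$tool: %U seconds CPU %P" $tool -c9 test.zero > test.zero.$tool; done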

So what kind of compression ratios are we talking about here?

$ ls -lh test.zero*
-rw-r--r-- 1 kafui kafui  1.0G 2009-03-25 12:01 test.zero
-rw-r--r-- 1 kafui kafui 1018K 2009-03-25 12:51 test.gz
-rw-r--r-- 1 kafui kafui  148K 2009-03-25 13:10 test.lzma
-rw-r--r-- 1 kafui kafui   785 2009-03-25 12:52 test.bz2

GZIP squeezed 1 gigabyte into about 1 megabyte in about 12 seconds… nice. LZMA’s compression ratio was very impressive; it squeezed 1 gigabyte into 148 kilobytes BUT took 873.79 seconds… that’s almost 15 minutes. BZIP2 was absolutely cool… 1GiB down to 785 bytes in 32 seconds! The clear winner here is BZIP2: it has the highest compression ratio with acceptable time requirements. Now on to tests with real data.

For the next test, I decided to compress the contents of my /opt folder. To simplify things, I created a tar archive of the folder first.

$ sudo tar -cf opt.tar /opt
[sudo] password for kafui:
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets

$ ls -lh opt.tar
-rw-r--r-- 1 root root 120M 2009-03-25 15:48 opt.tar

So we’re working with 120MB of data. On to the tests:

GZIP
$ /usr/bin/time -f "%U seconds CPU %P" gzip -c9 opt.tar > opt.tar.gz
19.42 seconds CPU 89%

BZIP2
$ /usr/bin/time -f "%U seconds CPU %P" bzip2 -c9 opt.tar > opt.tar.bz2
30.76 seconds CPU 93%

LZMA
$ /usr/bin/time -f "%U seconds CPU %P" lzma -c9 opt.tar > opt.tar.lzma
132.21 seconds CPU 92%

$ ls -lh opt.tar*
-rw-r--r-- 1 kafui kafui 120M 2009-03-25 15:48 opt.tar
-rw-r--r-- 1 kafui kafui  39M 2009-03-25 15:56 opt.tar.gz
-rw-r--r-- 1 kafui kafui  36M 2009-03-25 16:09 opt.tar.bz2
-rw-r--r-- 1 kafui kafui  25M 2009-03-25 16:16 opt.tar.lzma

Once again, GZIP was the fastest and got 120MB down to 39MB in 19.42 seconds. BZIP2 reduced 120MB to 36MB but took 11.34 seconds longer than GZIP. LZMA delivered the best compression with 25MB but took 132.21 seconds. It appears there are trade-offs with each compression method. GZIP is fast but its compression ratio is the lowest of the three. LZMA (depending on the data) delivers the most efficient compression ratio but takes too much time to do so. BZIP2 strikes a balance between efficient compression and speed… it’s way faster than LZMA and can actually deliver better compression. LZMA just does not live up to the hype.

Unfortunately, these benchmarks were of no use to me because about 140GiB of my data is made up of AVIs, PNGs and JPEGs. These formats are already compressed so there isn’t much room for further compression. But for what it’s worth, I gave the algorithms a spin anyway.

$ ls -lh The.Big.Bang.Theory.S01E10.avi
-rwxrwxrwx 1 kafui kafui 175M 2008-04-18 20:14 The.Big.Bang.Theory.S01E10.avi

GZIP
$ /usr/bin/time -f "%U seconds CPU %P" gzip -c9 The.Big.Bang.Theory.S01E10.avi > The.Big.Bang.Theory.S01E10.avi.gz
10.94 seconds CPU 78%

BZIP2
$ /usr/bin/time -f "%U seconds CPU %P" bzip2 -c9 The.Big.Bang.Theory.S01E10.avi > The.Big.Bang.Theory.S01E10.avi.bz2
55.15 seconds CPU 94%

LZMA
$ /usr/bin/time -f "%U seconds CPU %P" lzma -c9 The.Big.Bang.Theory.S01E10.avi > The.Big.Bang.Theory.S01E10.avi.lzma
138.74 seconds CPU 93%

$ ls -lh The.Big.Bang.Theory.S01E10.avi*
-rwxr-xr-x 1 kafui kafui 175M 2009-03-25 16:34 The.Big.Bang.Theory.S01E10.avi
-rw-r--r-- 1 kafui kafui 173M 2009-03-25 16:35 The.Big.Bang.Theory.S01E10.avi.gz
-rw-r--r-- 1 kafui kafui 173M 2009-03-25 16:39 The.Big.Bang.Theory.S01E10.avi.bz2
-rw-r--r-- 1 kafui kafui 174M 2009-03-25 16:43 The.Big.Bang.Theory.S01E10.avi.lzma

GZIP and BZIP2 both got the 175MB episode of The Big Bang Theory down to 173MB; BZIP2, of course, took 44.12 seconds longer. LZMA got it down by only 1MB, but in 138.74 seconds. As you can see, it doesn’t make much sense for me to compress my videos and pictures… not with those compression ratios. So it seems I’ll just have to cough up the cedis for a new hard drive. 😦

Posted by: Odzangba | February 28, 2009

Graphical Hardware Information Tools

Just a few months ago, I was not even using a graphical environment; videos, music, surfing the internet, instant messaging… all from the terminal. But my philosophy on software has been undergoing subtle changes ever since I got a faster computer. The thing is, I now default to graphical applications for most tasks. Where aptitude, mplayer, mpd, ncmpc, rtorrent, finch and elinks once ruled supreme, synaptic, amarok, smplayer, deluge, pidgin and firefox now have the upper hand. So this morning, I decided to find a GUI hardware information program to replace lspci, lshw and dmidecode… not really, I just needed a graphical frontend to these tools. It took me about 15 minutes to go through the top three: Hardinfo, Sysinfo and Lshw-gtk. Hardinfo was the most impressive of the lot. In addition to hardware information, it can perform benchmark tests and let you compare the results with those of others. My lean, mean and ridiculously affordable box did quite well in the comparison tests. 😀 Sysinfo was a little stingy on information, but it’s quite capable. Lshw-gtk, as the name implies, is really just a graphical frontend to lshw; it threw up some detailed information about my motherboard and CPU but very little else. I’m keeping only Hardinfo for the long term, however; the others don’t quite live up to expectations. Anyway, that is only my opinion… I’ll let the screenshots do the rest of the talking.
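If you want to run the same comparison yourself, all three are in the Ubuntu repositories; the package names below are what they’re called on Jaunty, to the best of my knowledge:

sudo aptitude install hardinfo sysinfo lshw-gtk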
