Sunday, 31 December 2006

New Year 2007 - The year of GNU/Linux

Today is the dawn of a new year, 2007. Every year we wish, hope and dream that this will be the year when GNU/Linux gains critical mass appeal - not that it has failed to significantly widen its base. For me, one of the most endearing aspects of GNU/Linux, over and above the ideological considerations, is its simplicity.

A couple of years back, before I was introduced to Linux, I remember facing many situations where my OS (Windows 98) had died on me for no apparent reason and left me staring at the blue screen of death. The outcome was invariably a clean re-install of Windows. From those experiences, I realized that Windows was a complex beast, especially when it came to troubleshooting problems. Compared to that, troubleshooting in GNU/Linux is a walk in the park.

The inherent strength of GNU/Linux lies in the fact that all the configuration pertaining to the OS is saved in liberally commented text files which reside in a specific location. And almost all actions executed by the OS are logged to the appropriate files, which are also plain text. For example, reading the file /var/log/messages will reveal a wealth of knowledge about the actions carried out by the OS and any errors encountered during boot-up. So once the initial learning curve is overcome, it becomes a joy to work in GNU/Linux.
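For instance - a minimal sketch, and note that the exact log file name varies between distributions - you can follow the system log in real time or search it for errors right from a terminal:

$ su -c 'tail -f /var/log/messages'   # follow new log entries as they are written
$ grep -i error /var/log/messages     # list any logged lines mentioning errors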


Some time in 2007, we can hope to see KDE 4.0 released. Already, when I compare KDE 3.5 with older versions, I have found a significant increase in the speed with which applications start up. KDE 4.0 is expected to be snappier still, as it is developed using the Qt 4 library, and it will contain a lot of additional features. Of course, this year Microsoft is also officially releasing its new OS, Vista. But many reviews indicate that there are lots of shortcomings in Microsoft's latest offering, and the general opinion is that it is not worth its price tag.

I am not trying to disparage Microsoft, but when you have a fabulous choice in GNU/Linux which comes at an unbeatable price (free), and you are able to do almost all your tasks in GNU/Linux barring, say, playing some of your favorite games, why would you consider paying hundreds of dollars for another OS? Moreover, if you are an avid gaming enthusiast, you should rather be buying a Sony PlayStation, a Nintendo Wii or even an Xbox - not an OS.

There was a time when I used to boot into Windows to carry out certain tasks. But over the past many months, I have realized that I am able to do all my tasks from within GNU/Linux itself, and it has been quite some time since I last booted into Windows.

Looking back, Linux - or rather GNU/Linux, the OS - has done quite well in 2006. With many popular distributions opting for a six-month release schedule, we get to try out at least two versions of many distributions each year, and we get the latest software too. Beyond that, 2006 also saw the open source release of the Java code by Sun Microsystems - a great victory for Free software enthusiasts. The LinuxBIOS project also got its share of publicity, with many hardware manufacturers evincing interest in it. So in many ways I look forward to an exciting year 2007 for GNU/Linux, Open Source and Free Software. And as always (let's hope), 2007 is going to be the Year of GNU/Linux.

On this positive note, I wish you all a very happy and prosperous New Year.

Thursday, 21 December 2006

A great collection of repositories for Open SuSE Linux

Whenever I try out a GNU/Linux distribution, the one thing which hounds me, at least in the initial stages, is the lack of awareness about additional repositories - particularly the ones containing the software packages necessary to make working in Linux a complete experience. So the first time I tried Red Hat, I had to scrounge the Net for the addresses of additional repositories, because the servers hosting the official Red Hat repositories were stretched to their limits and dead slow; moreover, they did not contain non-Free software.

When I tried Ubuntu, this travail was alleviated to some extent, partly because the switches for enabling the additional repositories containing non-Free software were made available in the distribution itself, and partly due to help from the strong, active community revolving around it.

Now Vichar Bhatt, a staunch supporter of SuSE Linux - more so for its robustness and superior features - has compiled a collection of repositories hosting packages meant for the SuSE Linux distribution, though he is quick to point out that installing software from unverified repositories carries a slight security risk. Nevertheless, his efforts are commendable. He has also provided a list of official SuSE repositories, which can be found here. Hopefully, the list will be updated as and when new repositories are made available.

Wednesday, 20 December 2006

KVM Virtualization solution to be tightly integrated with Linux kernel 2.6.20

There is good news on the horizon: Linus Torvalds has merged the KVM (Kernel-based Virtual Machine) code into the kernel source tree, which means it will ship with Linux kernel 2.6.20. This opens up a lot of avenues as far as Linux is concerned. Using KVM, it is possible to run multiple virtual machines running unmodified Linux or Windows images.

KVM is not the only virtualization technology around for Linux. But its advantage over other, similar technologies is that it is a part of Linux and uses the regular Linux scheduler and memory management, which in turn makes it much smaller and simpler to use. It uses slightly modified userland tools that come bundled with QEMU to manage virtual machines. But the similarity ends there, as QEMU inherently uses emulation, whereas KVM makes use of processor extensions for virtualization.

A normal Linux process has two modes of execution: kernel mode and user mode. When you use KVM, Linux gains an additional guest mode, which in turn has its own kernel and user modes (see figure below).

On the down side, for KVM to function, your computer needs an Intel or AMD processor which supports virtualization technology at the hardware level.
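A quick way to check whether your processor qualifies - the 'vmx' flag denotes Intel VT and 'svm' denotes AMD-V - and to load the corresponding kernel module:

$ egrep -c '(vmx|svm)' /proc/cpuinfo   # a count greater than zero means the extensions are present
# modprobe kvm-intel                   # on Intel hardware; use kvm-amd on AMD hardware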

Tuesday, 19 December 2006

25 Shortcomings of Microsoft Vista OS - A good reason to choose GNU/Linux ...

As a continuation of the previous post, here are 25 shortcomings found by Frank J. Ohlhorst when he reviewed the yet-to-be-formally-released Microsoft Vista OS. I have added my views, enclosed in parentheses, alongside the Vista shortcomings.
  • Vista introduces a new variant of the SMB protocol - (I wonder what is the future of Samba now...)
  • Needs significant hardware upgrades
  • No anti-virus bundled with Vista
  • Many third party applications still not supported
  • Your machine better have a truckload of memory - somewhere around 2 GB. (Linux works flawlessly with just 128 MB... even less).
  • Too many Vista editions.
  • Needs product activation. (Now that is something you will never see in Linux).
  • Vista OS will take over 10 GB of hard disk space. (With Linux you have a lot of flexibility with respect to the size of the distribution.).
  • Backing up the desktop will take up a lot of space. (Not so in Linux)
  • No must have reasons to buy Vista. (The fact that Linux is Free is reason enough to opt for it)
  • Is significantly different from Windows XP and so there is a learning curve. (Switching to Linux also involves some learning curve but then it is worth it as it doesn't cost you much and in the long run, you have a lot to gain).
  • You'd better come to terms with the cost of Vista - it is really exorbitant, running to over $300. (In price, Vista can't beat Linux, which is free as in beer and freedom).
  • Hardware vendors are taking their own time to provide support for Vista. (Nowadays, more and more hardware vendors are providing support for Linux).
  • Vista's backup application is more limited than Windows XP's. (Linux has a rich set of backup options and every one of them is free).
  • No VoIP or other communication applications built in. (Skype, Ekiga... the list goes on in Linux).
  • Lacks intelligence and forces users to approve the use of many native applications, such as a task scheduler or disk defragmenter. (Linux is flexible to a fault).
  • Buried controls - requiring half a dozen mouse clicks. (Some window managers in Linux also have this problem, but here you have a variety of choices to suit your tastes).
  • Installation can take hours, upgrades even more. (Installation of Linux will take at most 45 minutes; upgrades will take a little longer).
  • Little support for hybrid hard drives.
  • 50 Million lines of code - equates to countless undiscovered bugs. (True, true... It is high time you switch to Linux).
  • New volume-licensing technology limits installations or requires dedicated key-management servers to keep systems activated. (Linux users do not have this headache I believe).
  • Promises have remained just that - mere promises. A case in point being WinFS, virtual folders and so on. (Clever marketing, my friend, to keep you interested in their product).
  • Does not have support for IPX, Gopher, WebDAV, NetDDE and AppleTalk. (Linux has support for many protocols which Windows does not).
  • Wordpad's ability to open .doc files has been removed. (Now that is what I call extinguishing with style. OpenOffice.org, which is bundled with most Linux distributions, can open, read and write DOC files).

Sunday, 17 December 2006

Various ways of detecting rootkits in GNU/Linux

Consider this scenario... Your machine running GNU/Linux has been penetrated by a hacker without your knowledge, and he has swapped the passwd program which you use to change the user password with one of his own. His passwd program has the same name as the real one and works flawlessly in all respects, except that each time it is run, it also gathers data residing on your machine - such as user details - and transmits it to a remote location, or opens a back door giving outsiders easy root access; all the while, you remain unaware of its true intention. This is an example of your machine getting rooted - another way of saying your machine is compromised. And the passwd program which the hacker introduced into your machine is a trojaned component of a rootkit.

A rootkit is a collection of tools a hacker installs on a victim computer after gaining initial access. It generally consists of network sniffers, log-cleaning scripts, and trojaned replacements of core system utilities such as ps, netstat, ifconfig, and killall.

Hackers are not the only ones found introducing rootkits into machines. Recently Sony - a multi-billion-dollar company - was found guilty of surreptitiously installing a rootkit when a user played one of their music CDs on the Windows platform. This was designed *supposedly* to stop copyright infringement. Following a worldwide furore, they withdrew the CD from the market.

Detecting rootkits on your machine running GNU/Linux
I know of two programs which aid in detecting whether a rootkit has been installed on your machine. They are Rootkit Hunter and Chkrootkit.

Rootkit Hunter
This script checks for around 58 known rootkits plus a couple of sniffers and backdoors, and makes sure that your machine is not infected with them. It does this by running a series of tests which check for default files used by rootkits, wrong file permissions for binaries, suspicious kernel modules and so on. Rootkit Hunter is developed by Michael Boelen and has been released under the GPL license.

Installing Rootkit Hunter is easy: download and unpack the archive from its website, then run the installer.sh script logged in as the root user.
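A typical install session looks something like this (a sketch; the archive and directory names depend on the version you download):

$ tar xzvf rkhunter-<version>.tar.gz   # unpack the downloaded archive
$ cd rkhunter
$ su -c ./installer.sh                 # run the installer as root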

Fig: Rootkit Hunter checking for rootkits on a Linux machine.

Once installed, you can run rootkit hunter to check for any rootkits infecting your computer using the following command:
# rkhunter -c
The rkhunter binary is installed in the /usr/local/bin directory, and one needs to be logged in as root to run this program. Once executed, it conducts a series of tests as follows:
  • MD5 tests to check for any changes
  • Checks the binaries and system tools for any rootkits
  • Checks for trojan specific characteristics
  • Checks for any suspicious file properties of most commonly used programs
  • Carries out a couple of OS dependent tests - this is because rootkit hunter supports multiple OSes.
  • Scans for any promiscuous interfaces and checks frequently used backdoor ports.
  • Checks all the configuration files, such as those in the /etc/rc.d directory, the history files, any suspicious hidden files and so on. For example, on my system, it gave a warning to check the files /dev/.udev and /etc/.pwd.lock.
  • Does a version scan of applications which listen on any ports, such as the Apache web server, procmail and so on.
After all this, it outputs the results of the scan and lists the possible infected files, incorrect MD5 checksums and vulnerable applications if any.

Fig: Another screenshot of rootkit hunter conducting a series of tests.

On my machine, the scan took 175 seconds. By default, rkhunter conducts a known-good check of the system. But you can also insist on a known-bad check by passing the '--scan-knownbad-files' option as follows:
# rkhunter -c --scan-knownbad-files 
As rkhunter relies on a database of rootkit names to detect the vulnerability of the system, it is important to check for updates of the database. This is also achieved from the command line as follows:
# rkhunter --update
Ideally, it would be better to run the above command as a cron job, so that once you set it up, you can forget all about checking for updates - cron will do the task for you. For example, I entered the following line in my crontab file as the root user.
59 23 1 * * echo "Rkhunter update check in progress";/usr/local/bin/rkhunter --update
The above line will check for updates on the first of every month at exactly 11:59 PM, and cron will mail the result to my root account.
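For those new to cron: the five leading fields of the entry stand for minute, hour, day of month, month and day of week, and the line itself goes into root's crontab, which is opened in an editor with:

# crontab -e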

Chkrootkit
This is another very useful program, created by Nelson Murilo and Klaus Steding-Jessen, which aids in detecting rootkits on your machine. Unlike the Rootkit Hunter program, chkrootkit does not come with an installer; rather, you just unpack the archive and execute the program named chkrootkit, and it conducts a series of tests on a number of binary files. Just like the previous program, it checks all the important binary files, searches for telltale signs of log files left behind by an intruder, and performs many other tests. In fact, if you pass the option -l to this command, it will list all the tests it will conduct on your system.
# chkrootkit -l
And if you really want to see some interesting stuff scroll across your terminal, execute the chkrootkit tool with the following option:
# chkrootkit -x 
... which will run this tool in expert mode.
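Getting chkrootkit up and running is similarly brief (a sketch; the archive and directory names depend on the version, and 'sense' is the project's build target):

$ tar xzvf chkrootkit.tar.gz   # unpack the source archive
$ cd chkrootkit-<version>
$ make sense                   # compiles the few C helper programs the script relies on
$ su -c ./chkrootkit           # run the checks as root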

Rootkit Hunter and Chkrootkit together form a nice combination of tools in one's arsenal for detecting rootkits on a machine running Linux.

Update: One reader has kindly pointed out that Michael Boelen has handed over the Rootkit Hunter project to a group of eight like-minded developers. The new site is located at rkhunter.sourceforge.net

Friday, 15 December 2006

FSF starts campaign to enlighten computer users against Microsoft's Vista OS

When a multi-billion-dollar company famed for its extreme stand in favor of all things proprietary is on the verge of releasing its much-touted next generation OS named Vista, what does the Free Software Foundation, which shuns all things proprietary, do? That is right: it starts a campaign to enlighten computer users about the pitfalls of buying Vista, and to introduce them to the Free alternatives one can have in place of Microsoft's offering.

FSF has launched a new site named badvista.org which will focus on the danger posed by Treacherous Computing in Vista.

John Sullivan, the FSF program administrator, has aptly put it thus:
Vista is an upsell masquerading as an upgrade. It is an overall regression when you look at the most important aspect of owning and using a computer: your control over what it does. Obviously MS Windows is already proprietary and very restrictive, and well worth rejecting. But the new 'features' in Vista are a Trojan Horse to smuggle in even more restrictions. We'll be focusing attention on detailing how they work, how to resist them, and why people should care.
FSF invites all Freedom loving computer users to participate in the campaign at Badvista.org.

Thursday, 14 December 2006

RPM to be revitalized - courtesy of Fedora Project

The hot news right out of the oven is that RPM - the famous package manager at the base of all Red Hat based Linux distributions - is going to get a shot in the arm. The Fedora project has decided to create an active community around RPM. A wiki for RPM has already been set up, detailing the project goals.

My first foray into Linux was with Red Hat, and in the course of time I learnt to use RPM to install, upgrade and uninstall packages. But once I started using it, I realized that it was not as simple as it looked. For example, if package A depended on a library in package B and package B was not installed on the machine, then RPM refused to install package A. And if package B was in turn dependent on a library residing in package C, the problem repeated down the line. This came to be known popularly as dependency hell. I have always wondered why Red Hat did not bring changes to RPM to make users' lives easier, given that most packages for Red Hat are RPM based.
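To make the problem concrete, here is a sketch with hypothetical package names:

# rpm -ivh packageA.rpm                # fails with a "Failed dependencies" error if package B is absent
# rpm -ivh packageB.rpm packageA.rpm   # given together, RPM can resolve the dependency between them

Front-ends such as yum and apt were built precisely to automate this dependency chasing, but the underlying package manager is still RPM.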

Perhaps the need of the hour is for all Linux distributions to support a universal package format, with all packages residing in a central repository shared by all distributions alike. But this scenario looks bleak, with Debian having its own dpkg format and Red Hat based distributions having their own RPM based formats. At least there is going to be better interoperability between different RPM based Linux distributions in the future, as one of the goals of this new project is to work towards a shared code base between SuSE, Mandrake, Fedora and so on. At present, a lot of the work of creating packages and maintaining repositories is repeated over and over. Fedora's decision breathes new life into the future of RPM, and one can hope to see it morph into a more efficient, robust package manager with fewer bugs.

Some of the initial goals of the new project are as follows:
  • Give RPM a full technical review and work towards a shared base.
  • Make RPM a lot simpler.
  • Remove a lot of existing bugs in the RPM code base.
  • Make it more stable.
  • Enhance the RPM-Python bindings thus bringing greater interoperability between Python programs and RPM.

Sunday, 10 December 2006

Sun Microsystems - doing all it can to propagate its immense software wealth

A couple of weeks back, Sun Microsystems created a buzz in the tech world when it announced its decision to release its flagship language Java under the GPL, albeit GPLv2. But even though it surprised and gladdened Free Software fans the world over, it is clear that this was a well-calculated, deeply thought out decision aimed at the survival and further propagation of the Java language.

It is true that at its core, Sun is a hardware company, with the bulk of its revenue generated from selling high-end servers, workstations and storage solutions. But it has also invested heavily in developing robust software. And what is amusing is that it does not charge anything for most of the software it has developed, providing it free of cost - OpenOffice.org, NetBeans, Java and Solaris being cases in point.

At one time, Solaris was the most popular Unix operating system, enjoying a market share greater than even IBM AIX and HP-UX combined. Then Linux arrived on the horizon and slowly started chipping away at the market share of all the Unixes, including Solaris. With Linux gaining demigod status, it was inevitable that Sun take a deep look at itself. It realized that if it did not restructure its thinking, it would be reduced from its status as an IP creator to a mere hardware company selling boxes, like Dell. And it has shown enough foresight to change with the times. Instead of fighting Linux, it started bundling Linux - more specifically Red Hat Linux - with its servers alongside its own operating system, Solaris. And over a year back, it released the Solaris code under an open license and named it OpenSolaris.

Now Sun is going even further, hinting that it is seriously considering releasing Solaris under the GPL. A few years back, the PCs being sold did not meet the minimum requirements for running Solaris, which made running it as a desktop a difficult proposition. But with rapid advances in hardware, a drastic drop in hardware prices, and partly thanks to Microsoft upping the ante with regard to the minimum memory requirements for running Vista, it has suddenly become possible to look at Solaris as a viable desktop OS alternative, as it works smoothly with just 512 MB of RAM.

Fig: Get a Free DVD consisting of Solaris 10 and Sun Studio software

Taking all these events into consideration, Sun is doing everything in its power to ensure that the fruits of its hard work live on and gain in popularity. A few days back, when I visited Sun's website, I was surprised to see a link offering to send a free DVD media kit, consisting of the latest build of Solaris 10 and the Sun Studio 11 software, to the address of one's choice. I have always believed that one of the reasons Ubuntu gained so much popularity was its decision to ship free CDs of its OS. Perhaps taking a leaf out of Ubuntu's book, Sun has also started shipping free DVDs of the Solaris 10 OS to anybody who wants a copy - a sure way of expanding its community.

In the long run, the logical thing for Sun to do will be to release Solaris under the GPL. By doing so, Sun would gain the immense goodwill of Free Software fans the world over and ensure a permanent place in the history of computing. Unlike GNU/Linux, which is a loose amalgamation of scores of individual software pieces around the Linux kernel, Solaris is a whole product whose tools are tightly integrated with its kernel. So even if Solaris is released under the GPL, it may not see as many distributions as Linux does. And who is better qualified to provide services and support for Solaris than Sun itself?

Thursday, 7 December 2006

Travails of adding a second hard disk in a PC running Linux

Over the years, I have accumulated a few hard disks salvaged from my old computers: a Seagate 12 GB, a Samsung 2.1 GB and another Seagate 20 GB. These were just lying around without being put to any use, and recently I decided to add one of them to my present computer.

I opened up the case, inserted one of the hard disks in the drive bay, set up the connectors and turned on the machine, hoping to see it boot up as normal. It did get past the BIOS POST, and I got the GRUB boot loader screen. But when I chose to boot the Linux distribution, it gave an error saying it couldn't find the root partition. That was rather surprising, as I had not made any changes to the structure of the hard disk by re-installing Linux or modifying the GRUB menu. After some head scratching, I figured that the hard disks were perhaps being detected in a different order by the computer. To verify this, I booted using a Linux Live CD, and I was right: the original hard disk was detected by Linux as /dev/hdb instead of /dev/hda, which upset everything, since both the /etc/fstab file and the GRUB menu referred to /dev/hda.
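From the Live CD, the kernel's boot messages show the order in which the IDE drives were detected - a quick sketch:

$ dmesg | grep -i 'hd[a-d]'   # shows which drive the kernel registered as hda, hdb and so on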

The thing to remember is that hard disks - and I am talking about the IDE variety - have around 8 pins at the back which can be connected together via jumpers. Depending on the position in which you set the jumpers, the hard disks will be detected in different ways by the computer.

Usually, when you buy a new hard drive, it will have the jumper pins in the cable select position. This allows the drive to assume the proper role of master or slave based on the connector used on the cable. For the cable select setting to work properly, the cables you are using must support the cable select feature.

In my case, I had two hard disks connected to the same cable, and both had their jumper pins in the cable select position. This meant that when I booted the PC, it automatically selected one hard disk as the primary master and the other as the primary slave. Unfortunately, it selected the hard disk holding the Linux OSes as the primary slave, which is why Linux detected it as /dev/hdb instead of /dev/hda.

Fig: Hard disk jumper settings

Once I figured this out, the solution was simple. I re-opened my computer case and changed the jumper settings of the hard disk containing the Linux OS to the primary master and the jumper settings of the second hard disk to the slave position (See figure above). And I was able to boot into Linux without a problem.

One thing worth noting is that different IDE hard disks have different jumper positions for setting them as master and slave, and the positions are usually printed on top of the hard disk. So you should check the table printed on your hard disk before changing the jumper pins.

Now, if you are wondering what I did with the remaining two hard disks: I could very well have added them too, but you can connect only a total of four IDE devices this way - primary master, primary slave, secondary master and secondary slave. And if I had done that, there wouldn't have been a vacant slot left for the internal CD writer and the DVD drive. So I use those two hard disks for backing up data.

Monday, 4 December 2006

Humor - Get your ABC's of Linux right

Recently, one of my friends shared with me this rather funny ode to Linux, which was passed on to him by a friend of his, and which I am in turn sharing with you. So without much ado, here is the rhyming ode to Linux...

A is for awk, which runs like a snail, and
B is for biff, which reads all your mail.
C is for cc, as hackers recall, while
D is for dd, the command that does all.
E is for emacs, which rebinds your keys, and
F is for fsck, which rebuilds your trees.
G is for grep, a clever detective, while
H is for halt, which may seem defective.
I is for indent, which rarely amuses, and
J is for join, which nobody uses.
K is for kill, which makes you the boss, while
L is for lex, which is missing from DOS.
M is for more, from which less was begot, and
N is for nice, which it really is not.
O is for od, which prints out things nice, while
P is for passwd, which reads in strings twice.
Q is for quota, a Berkeley-type fable, and
R is for ranlib, for sorting ar table.
S is for spell, which attempts to belittle, while
T is for true, which does very little.
U is for uniq, which is used after sort, and
V is for vi, which is hard to abort.
W is for whoami, which tells you your name, while
X is, well, X, of dubious fame.
Y is for yes, which makes an impression, and
Z is for zcat, which handles compression.

I noticed one error in the third line of the poem, though: Linux does not use the cc compiler, but rather gcc. But apart from that, this is a nice compilation.

Friday, 1 December 2006

Trolltech's Qtopia Greenphone

We are moving towards an era where the line demarcating a computer from the rest of our electronic devices is at best hazy. Take mobile phones, for instance... Nowadays, the sheer power and the number of features available in some models of mobile phones rival those found in a low-end PC. Electronic devices are fast morphing into gadgets which are many things to different people.

Trolltech, the creators of the Qt library which is used to develop KDE, has released a Linux mobile development device - the rest of us can call it a mobile phone. What is unique about this phone is that it is powered by Linux and, more importantly, it is aimed at developers interested in creating applications using the Greenphone SDK; the phone allows developers to test their applications on it. The current model of the Greenphone was developed in close cooperation with a Chinese device manufacturer called Yuhua Teltech. Offered as part of the Greenphone SDK, Trolltech claims that this powerful GSM/GPRS device provides the perfect platform for the creation, testing and demonstration of new mobile technology services.

Fig: Trolltech's greenphone powered by Linux.

Nathan Willis, who spent a couple of weeks with a review unit, reveals his thoughts about this unique product from Trolltech. And even though he finds a couple of faults with the design of the phone, he concludes that it is nevertheless a small step in the right direction. He has also made available a slide show of pictures of the phone here.

Specifications of the Qtopia Greenphone
The software that powers this phone consists of Qtopia Phone Edition 4.1.4 and Linux kernel 2.4.19.

The hardware consists of the following:
  • Touch-screen and keypad UI
  • QVGA® LCD color screen
  • Marvell® PXA270 312 MHz application processor
  • 64MB RAM & 128MB Flash
  • Mini-SD card slot
  • Broadcom® BCM2121 GSM/GPRS baseband processor
  • Bluetooth® equipped
  • Mini-USB port
Minimum system requirements for the development environment are as follows:
  • 512 MB RAM
  • 2.2 GB HDD space and
  • 1 GHz processor
It may be worth noting that there are a number of embedded devices powered by Linux, Nokia's internet tablet being a prominent one. But what makes Trolltech's Greenphone unique is the open development environment provided, with the capability to reflash application memory - thus making it truly Open Source.

Wednesday, 29 November 2006

FizzBall - A well designed enjoyable game for Linux

Anybody who has played games on their PC will be familiar with the classic game Breakout, where you have to bounce a ball with a paddle and smash all the bricks. While the game in its original form does not sport any special features, it has helped spawn a number of Breakout clones which provide additional effects - such as power-ups that give the ball more power for a short while - and which make it far more entertaining and enjoyable to play. A couple of years back, I enjoyed playing a Breakout clone called DxBall. But most of these so-called Breakout clones are developed to run exclusively on Windows. And one of the standing grouses of Linux users is the dearth of quality professional games which run on Linux.

But that is bound to change, as more and more professional game developers are seriously considering Linux as a viable platform alongside Windows for releasing their games. One such company is Grubby Games - founded by Ryan Clark and Matt Parry - which has been developing games that entertain as well as educate the players.

FizzBall, one of the games developed by Grubby Games and released for Linux, bears a little similarity to the classic Breakout in that you have to bounce a bubble using a machine which serves the same function as the paddle in Breakout. But barring that, the game play is entirely different. The aim of the game is to collect all the animals from the wild by directing the bubble towards them. At the beginning of each level, the bubble is small and bounces off animals larger than itself. So you have to collect the food - apples, coconuts, acorns and so on - littering the area; as the bubble gobbles these up, it grows in size and becomes able to collect larger animals. The level is completed once all the animals are collected inside the bubble, at which point you are taken to the next level. There are over 180 levels in this game.

Fig: You have to break the crates to get to the animals inside.

Fig: Another game level.

What I really liked about the game is that the developers have kept a sharp eye for detail. The game is gorgeously animated and illustrated. For example, the animals do not remain stationary but move around. When the bubble bounces off an animal, the animal emits a sound - a cow moos, a lion roars and so on. And if the bubble, while still tiny, hits a skunk, the skunk releases a smell. The animals you collect in each level are kept in an animal sanctuary. All through the game, you get lots of money and power-ups, which you collect by directing the machine to them. The money you collect helps you hop from one island to another (there are seven of them) and also feed the animals residing in the sanctuary.

Fig: Animal sanctuary

And the power-ups provide additional powers to the bubble - the gravity bubble, energy shield, faster bubble and wacky weather, just to name a few. There are bonus levels after every few regular levels which allow you to gain additional points and money, and each island has offbeat paths that introduce a new animal. In some levels, you come face to face with an alien which shoots at you and the animals, and it is your duty to capture the alien by directing the bubble towards it.

Fig: View your trophies in the trophy room

The game has two modes - the regular mode and the kids mode. In the kids mode, you do not lose the bubble even if you miss it with the machine. And each new level in the kids mode is preceded by a fun quiz. Just to give you a taste, these are some of the questions I encountered in the fun quiz:
  • Which baby animal can be called a kid? Goat
  • A group of these animals can be called a Mob. - I forgot the answer ;-)
  • A group of these animals can be called a pride. Lions
  • Which baby animal can be called a gosling? Goose
  • Which animal's baby can be called a snakelet? Snake
  • A group of these animals can be called a Parliament. Owls
It is clear that the developers behind this game had a dual purpose in mind while creating it: to educate and to entertain. For instance, there are bonus levels in the game where the player has to break the numbered objects in the right order - a good way to teach little kids how to count.

Fig: Break the numbered crates in order

The story is good, the game play is simple but entertaining, and the graphical effects are outstanding - all of which makes this a very good game for adults and children alike.

FizzBall game features
  • Over 180 unique levels of game play.
  • The game stage is automatically saved once you exit the game and you can continue where you left off the next time you start playing.
  • Multiple users can be created and each user's game is saved separately.
  • There are two modes - Regular mode and Kids mode. The kids mode does not allow you to lose the balls and includes fun quizzes between levels.
  • If you lose all your bubbles, you can still continue with the game, though all your scores will be canceled.
  • Get trophies for achieving unique feats. For example, I received a trophy for capturing an alien without getting hit by a laser :-) .
Running FizzBall in GNU/Linux
The GNU/Linux version of this game is packaged as a gzipped archive. All you have to do is unpack the archive and run the script named run.sh, and the game will commence.
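In other words, something along these lines (the archive name here is illustrative):

$ tar xzvf fizzball.tar.gz   # unpack the downloaded archive
$ cd fizzball
$ sh run.sh                  # starts the game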

Pros of the game
  • Eye catching design and excellent graphics.
  • Is educative for little kids as well as entertaining for all ages.
  • Over 180 levels in both the regular and kids mode of the game.
Cons of the game
It is not released under the GPL, and the full version of the game costs USD 19.95. A time-limited demo version is available, though, for trying out before buying. But having played the full game, I would say that the money is well spent.

The good news is that professional game developers are seriously eyeing the Linux OS alongside Windows as a viable platform for releasing their games - FizzBall being a case in point.

Sunday, 26 November 2006

Richard M Stallman talks on GPL version 3 at the 5th International GPLv3 Conference in Japan

The fifth international GPLv3 conference was held on the 22nd and 23rd of November in Akihabara, Tokyo, Japan. A couple of months back, RMS had spoken at the 4th international GPLv3 conference, held in Bangalore, India. These conferences are part of a series of events organized by the Free Software Foundation to enlighten the public about the upcoming new version of the GPL - more specifically, to make them aware of how GPLv3 will better safeguard their freedom vis-a-vis the software they use.

In Tokyo too, RMS gave a talk concentrating on the upcoming GPLv3 and the major changes being considered for the license in its current form. fsfeurope.org is running a transcript of Mr Stallman's Tokyo talk, which is a must-read for any GNU/Linux enthusiast.

He dwelt in depth on a variety of topics: the differences between GPLv1 and GPLv2, and the changes aimed at in GPLv3, such as better support for internationalization, better license compatibility with the Apache and Eclipse licenses, preventing tivoization, fighting software patents by carrying an explicit patent license, and a few other things.

It is really simple when you look at the logic provided by RMS. He is not concerned about any particular OS or software; rather, his number one priority is to conserve the freedoms enjoyed by the people who use Free software, in such a way that nobody will be able to hold the Free Software movement to ransom. Today, Linux is the darling of many corporations, with many heavyweights jumping on the Linux bandwagon. For any business, the fundamental aim is to make money. And with Linux becoming a viable platform, businesses are slowly realizing the advantages of embracing it. The only irritant standing in their way is the GPL license, which they could do without. RMS and the Free Software Foundation are working towards safeguarding the GPL by plugging all its loopholes, so that it is not possible to circumvent it and thus compromise any of the freedoms guaranteed by the GPL.

Thursday, 23 November 2006

Making the right decisions while buying a PC

With the speed at which advances are made on the technological front, I sometimes wonder if buying an electronic product now is a good decision - especially since, if I choose to wait a couple more months, I could get an even better product with more features at more or less the same price.

This truism is especially valid when buying a PC. On the one hand, the applications being developed demand more and more processing power and memory to run at their optimal level; on the other, hardware prices are coming down at a steep rate. So if I go out to buy a PC, I have to make sure that it will meet my purposes for at least the next one and a half to two years... after which it will be time either to upgrade - if I was lucky enough to have bought a PC designed with expansion in mind - or to discard the PC and buy a new one.

So what do you need to watch out for if you are seriously considering buying a PC now? Thomas Soderstrom has written a very informative article which throws light on the components one should select for one's PC. He touches on the cases to be used - full towers, ATX, mini-ATX, shuttle form factor and so on - the best processor (CPU), the type of interface slots on the motherboard, the memory, the capacity of the hard drive and more.

The gist of his advice filters down to the following:
  • ATX tower case - capable of holding a full-size motherboard with space for several optical drives; ideal for home users and gaming enthusiasts.
  • CPU - As of now, the Intel Core Duo provides the best power-performance-price ratio. Enough applications have been optimized for dual-core chips that these should be considered for any moderate to heavy use, especially when multitasking.
  • Always go for motherboards that have PCI Express slots over the fast-becoming-outdated ordinary PCI slots.
  • With respect to memory (RAM), your best bet is at least DDR-400, though ideally DDR2-800 is recommended. And don't even think of a machine with less than 512 MB of RAM. The article strongly recommends 2 GB of memory if you can afford it, as near-future applications and OSes will demand that much.
  • On the storage front, if you are in the habit of archiving video or hoarding music on your hard disk, do consider a hard disk of at least 150 GB. The article recommends Western Digital's 150 GB Raptor drives if you are looking for better performance, and the Seagate Barracuda 750 GB for those after larger capacity. Both are costly, though.
  • And do go for a DVD writer over a CD-RW/DVD combo.
I remember reading an article on the best-value desktop PC in the most recent print edition of the PCWorld (Indian edition) magazine. They selected the "HCL Ezeebee Z991 Core2 Duo" branded PC as the best buy from among a number of other branded PCs. This PC sports an Intel Core 2 Duo E6300 processor, 512 MB of DDR2 RAM, an optical DVD-RW drive and a 160 GB SATA hard disk.

Something I have noticed is that in India, the PCs being advertised sport just enough memory for current needs; vendors habitually skimp on memory while selling a PC. Every day, I see at least 3 to 4 advertisements selling PCs with just 256 MB of memory, and in one or two cases a measly 128 MB. The rule of thumb to follow: the more memory, the better.

Wednesday, 22 November 2006

A peep into how Compact Discs are manufactured

Ever wondered how a CD, aka compact disc, is manufactured? There is a whole string of tasks involved in creating one. It starts with an original master disc made of glass. During the process, the glass disc is treated with two chemicals - a primer and a photoresist coating. The photoresist coating on the glass surface is then dried in an oven for 30 minutes. Next, the data that goes on the CD is etched into the coating, and the glass is electrocoated with a thin layer of nickel and vanadium. After a few further steps, what you have is a die - a master copy. The CDs that you hold in your hand are manufactured from this master copy; they are not made of glass, but of liquid polycarbonate which is injected into the mold.

One thing worth noting is that there are two different ways of creating a CD. One is the recordable CD, or blank CD, and the other is the pressed CD, in which the data is stamped directly onto the disc at the time of its creation. Examples of pressed CDs are the ones you get along with IT magazines.

I found this short video of CD manufacturing quite informative. The clip details the creation of a pressed CD.

Update (Feb 14th 2007): The Youtube video clip embedded here has been removed as I have been notified by its real owners that the video clip is copyrighted.

Sunday, 19 November 2006

Ifconfig - dissected and demystified

ifconfig - the ubiquitous command bundled with any Unix/Linux OS - is used to set up any or all of the network interfaces connected to your computer, such as ethernet, wireless, modem and so on. The ifconfig command provides a wealth of knowledge to any person who takes the time to look at its output. Commonly, the ifconfig command is used for the following tasks:

1) Configuring an interface - be it an ethernet card, wireless card, loopback interface or any other. For example, in its simplest form, to set up the IP address of your ethernet card, you pass the necessary options to the ifconfig command as follows:
# ifconfig eth0 192.168.0.1 netmask 255.255.255.0 broadcast 192.168.0.255 up
Here, 192.168.0.1 is the IP address of your machine (I have used a private IP address), 255.255.255.0 denotes the network mask which decides the potential size of your network, 192.168.0.255 denotes the broadcast address, and lastly, the 'up' keyword is the flag which activates the interface and makes it ready to receive and send data.

2) Gathering data related to the network of which your computer is a part.
When used without any parameters, the ifconfig command shows details of the network interfaces that are up and running on your computer. On my machine, which has a single ethernet card and a loopback interface, I get the following output:

eth0 Link encap:Ethernet HWaddr 00:70:40:42:8A:60
inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:1
RX packets:160889 errors:0 dropped:0 overruns:0 frame:0
TX packets:22345 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:33172704 (31.6 MiB) TX bytes:2709641 (2.5 MiB)
Interrupt:9 Base address:0xfc00

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:43 errors:0 dropped:0 overruns:0 frame:0
TX packets:43 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3176 (3.1 KiB) TX bytes:3176 (3.1 KiB)
As you can see, it throws up a lot of data, most of it providing one detail or another. Let's look at the data spewed out by the ifconfig command, field by field, for the ethernet device.
  • Link encap:Ethernet - This denotes that the interface is an ethernet related device.
  • HWaddr 00:70:40:42:8A:60 - This is the hardware address, or MAC address, which is unique to each ethernet card manufactured. Usually, the first half of this address is the manufacturer code, common to all ethernet cards from the same manufacturer, and the rest denotes the device ID, which should never be the same for any two devices from that manufacturer.
  • inet addr - indicates the machine IP address
  • Bcast - denotes the broadcast address
  • Mask - is the network mask which we passed using the netmask option (see above).
  • UP - This flag indicates that the interface has been activated and its driver loaded.
  • BROADCAST - Denotes that the ethernet device supports broadcasting - a necessary characteristic for obtaining an IP address via DHCP.
  • NOTRAILERS - Indicates that trailer encapsulation is disabled. Linux usually ignores trailer encapsulation, so this value has no effect at all.
  • RUNNING - The interface is ready to accept data.
  • MULTICAST - This indicates that the ethernet interface supports multicasting. Multicasting can best be understood by analogy with a radio station: multiple devices can capture the same signal, but only if they tune to a particular frequency. Multicast allows a source to send a packet to multiple machines, as long as the machines are watching out for it.
  • MTU - short for Maximum Transmission Unit - is the maximum size of each packet handled by the ethernet card. By default, the MTU of an ethernet device is set to 1500, though you can change the value by passing the necessary option to the ifconfig command. Setting it too high could risk packet fragmentation or buffer overflows. Do compare the MTU values of your ethernet device and the loopback device and see if they are the same or different; usually, the loopback device has a larger packet length.
  • Metric - This option can take a value of 0, 1, 2, 3..., with lower values carrying more leverage. The value of this property decides the priority of the device and has significance only while routing packets. For example, if you have two ethernet cards and you want to make your machine favor one card over the other when sending data, you can set the Metric value of the card you favor lower than that of the other. I am told that in Linux, setting this value using ifconfig has no effect on which card is chosen, as Linux uses the Metric value in its routing table to decide priority.
  • RX packets, TX packets - The next two lines show the total number of packets received and transmitted respectively. As you can see in the output, the total errors are 0, no packets are dropped and there are no overruns. If you find the errors or dropped values are greater than zero, it could mean that the ethernet device is failing or that there is congestion in your network.
  • collisions - The value of this field should ideally be 0. If it has a value greater than 0, it could mean that the packets are colliding while traversing your network - a sure sign of network congestion.
  • txqueuelen - This denotes the length of the transmit queue of the device. You usually set it to smaller values for slower devices with a high latency such as modem links and ISDN.
  • RX bytes, TX bytes - These indicate the total amount of data that has passed through the ethernet interface in each direction. Taking the above example, I can fairly assume that I have used up 31.6 MiB downloading and 2.5 MiB uploading, a total of about 34 MiB of bandwidth. As long as network traffic is being generated via the ethernet device, both the RX and TX byte counts keep increasing.
  • Interrupt - From the data, I learn that my network interface card is using interrupt number 9. This is usually set by the system.
The values of almost all the options listed above can be modified using the requisite ifconfig options. For example, you can pass the 'trailers' option to the ifconfig command to enable trailer encapsulation, or change the packet size by using the 'mtu' option along with the new value, and so on. But in the majority of cases, you simply accept the default values.
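A couple of illustrative invocations (the values are arbitrary, and the commands need root privileges):

# ifconfig eth0 mtu 1400   # set a smaller maximum packet size for eth0
# ifconfig eth0 down       # deactivate the interface
# ifconfig eth0 up         # and bring it back up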

Learning to use the right command is only a minuscule part of the job of a network administrator. The major part of the job is in analyzing the data returned by the command and arriving at the right conclusions.

Wednesday, 15 November 2006

LinuxBIOS - A truly GPLed Free Software BIOS

A few months back, I had posted an article related to the BIOS which described its functions. BIOS is an acronym for Basic Input/Output System, and it is the starting point of the boot process in your computer. But one of the disadvantages of the proprietary BIOSes embedded in most PCs is that they carry a good amount of code to support legacy operating systems such as DOS, and the end result is a longer time taken to boot up and pass control to the resident operating system.

This time can be significantly reduced if the code pertaining to legacy OSes is removed - especially if you intend to install and use a modern OS, which tends to do all the hardware probing and load its own hardware drivers anyway. So on a PC running a modern OS such as one of the BSDs, Linux or Windows, the BIOS does little but provide information, and much of the information it provides will not even be used. On such machines, all the BIOS really has to do is load the bootstrap loader, or bootloader, and pass control to the resident OS.

One project which intends to give BIOS makers such as Phoenix and Award a run for their money is the LinuxBIOS project. LinuxBIOS aims to replace the normal BIOS found on PCs, Alphas and other machines with a Linux kernel that can boot Linux from a cold start. The trick LinuxBIOS uses is to employ an embedded Linux kernel to load the main OS. Some of the benefits of LinuxBIOS over the more common BIOSes, as listed on their website, are as follows (and I quote):

  • 100% Free Software BIOS (GPL)
  • No royalties or license fees!
  • Fast boot times (3 seconds from power-on to Linux console)
  • Avoids the need for a slow, buggy, proprietary BIOS
  • Runs in 32-Bit protected mode almost from the start
  • Written in C, contains virtually no assembly code
  • Supports a wide variety of hardware and payloads
  • Further features: netboot, serial console, remote flashing, ...
The LinuxBIOS project has been making rapid inroads into general acceptance by many computer manufacturers. One of its major breakthroughs was being selected by the One Laptop per Child project for inclusion in its laptop meant for use by children. But the hot news fresh out is that Google - the search engine giant - has jumped into the fray by deciding to sponsor the LinuxBIOS project. As of now, LinuxBIOS supports a total of 121 motherboards from 58 vendors.

You can watch a video of LinuxBIOS booting Linux on a rev board below:


Sunday, 12 November 2006

Is Free Software the future of India? Steve Ballmer CEO of Microsoft answers...

The solemn occasion was a talk show hosted by NDTV 24x7 - a premier cable television news channel in India. The discussion centered on the topic "Bridging the digital divide between the urban rich and rural poor in India". The panel was composed of distinguished personalities including Steve Ballmer, CEO of Microsoft; N.R. Narayana Murthy, Chairman of Infosys Technologies; Ashok Jhunjhunwala, professor of Electrical Engineering at IIT Chennai; and Malvinder Mohan Singh, chief executive and MD of Ranbaxy Laboratories. The show was hosted by NDTV's Prannoy Roy. The very first question asked of Steve Ballmer was the following: Is Free Software the future of India?

Taking care not to use the words "Free software", Mr Ballmer conceded that a number of revenue streams, including those from selling hardware, internet connectivity and software, are important. He went on to say, "As rich and good as bridging the digital divide may be, software companies should look forward to three or four sources of income. Revenues for software companies will come not from any one thing but will include subscription fees, lower cost hardware, advertising and of course traditional transactions (read: proprietary software)". He does agree that "prices must come down", though it was plain to see him take care not to use the word "FREE" in his answer.

Another question posed to him was, "Is bridging the rural divide all about money?". Mr Ballmer answered by saying, "It is not only about money, but it is also not about short-term profits". In short, Microsoft is looking for long-term profits.

And when asked, "The American government spearheads democracy. Are American businesses in tune with that?", he answered as follows: "Any multinational should behave appropriately and lawfully in any country in which it does business. But our primary aim is to have a generally more helpful participation in the world economy". He went on to say, "You can do three things... you can stay in and do nothing, stay in and have a point of view, or stay out".

Watching the talk show, I could not help thinking that Microsoft is more or less resigned to the fact that Open Source and Free Software are here to stay - whatever one might do, you cannot easily wish them away. If you can't beat them, join them is the new mantra at Microsoft, the recent news of Microsoft's acquisition (sic) of (um... partnership with) Novell being a case in point. But I was left with the feeling that Microsoft needs to be more honest and forthright in acknowledging the very important part that Free Software and Linux play in the overall big picture of IT. Steve Ballmer was on a three-day visit to India; his itinerary included calling on the Indian Prime Minister, Dr Manmohan Singh, to discuss Microsoft's future plans for India.

Friday, 10 November 2006

Book Review: Ubuntu Hacks

Ubuntu Hacks: Tips & Tools for Exploring, Using, and Tuning Linux

I recently got hold of a very nice book on Ubuntu called Ubuntu Hacks, co-authored by Kyle Rankin, Jonathan Oxer and Bill Childers. This is the latest in the Hacks series of books published by O'Reilly. They made a rough-cut version of the book available online ahead of schedule - which is how I got hold of it - but as of now you can also buy the book in print. In a nutshell, this book is a collection of around 100 tips and tricks, which the authors choose to call hacks, explaining how to accomplish various tasks in Ubuntu Linux. The so-called hacks range from the downright ordinary to, at the other end of the spectrum, doing specialised things.

Tuesday, 7 November 2006

A list of Ubuntu/Kubuntu repositories

At the time I was using Red Hat (Fedora), one of my favourite repositories was that of Dag Wieers, not only because the official Red Hat repository was dead slow due to excess traffic but also because it contained a number of additional RPM packages which were missing from the official repositories, such as those with support for proprietary file formats. It was the culmination of my search for additional repositories to include in my Yum configuration file.

Nowadays, this is not a problem at all, especially when you are using Ubuntu, as the repositories have been demarcated into different sections such as Universe, Multiverse and so on, depending upon the type of packages available in each of them - for instance, whether a package is released under a free license or a proprietary one. It is then only a matter of enabling the desired repository and using apt-get to install the requisite package. Still, it doesn't hurt to have a number of additional repositories apart from the ones provided officially by Ubuntu. Trevino has compiled an exhaustive collection of repositories for Ubuntu and Kubuntu which you can include in your /etc/apt/sources.list file. A word of caution is in order though: since these are unofficial repositories, it is difficult to verify the integrity of the packages. So use them at your own risk.
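
For reference, an entry in /etc/apt/sources.list is a single line naming the repository URL, the release and the components. A minimal sketch, assuming the standard Ubuntu archive and the Edgy release (substitute whichever repository and release you actually use):

deb http://archive.ubuntu.com/ubuntu edgy universe multiverse

After adding or enabling a repository, run sudo apt-get update so that apt fetches the new package lists.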

Monday, 6 November 2006

Learning to use netcat - The TCP/IP Swiss army knife

NC - short for netcat - is a very useful tool available on all POSIX OSes which allows one to transfer data across the network via TCP/UDP with ease. The principle is simple... there is a server mode and a client mode. You run netcat as a server listening on a particular port on one machine, and you run netcat as a client on the other machine, connecting to that port; data piped in at one end comes out at the other. The basic syntax of netcat is as follows:

For the server :
nc -l <port number>
... where the -l option stands for "listen", and the client connects to the server machine as follows:
nc <server ip address> <port number>
So in which ways can you put it to use? For one:
  • You can transfer files between remote machines.
  • You can serve a file on a particular port of one machine and have multiple remote machines connect to that port to access the file.
  • You can create a partition image and send it to a remote machine on the fly.
  • You can compress critical files on the server machine and then have them pulled by a remote machine.
  • You can do all this securely using a combination of netcat and SSH.
  • You can even use it as a port scanner by means of the -z option.
To see how all the above tasks are accomplished, check out the very nice compilation by G. Notaras, who provides a couple of netcat examples; a couple of quick sketches also follow below. Just remember, the actual command is called 'nc' and not netcat.
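
As a quick illustration, here is a minimal sketch of a file transfer - the port number 1234 and the file name backup.tar.gz are arbitrary examples, and note that some builds of netcat want 'nc -l -p 1234' instead of 'nc -l 1234':

On the machine serving the file:
nc -l 1234 < backup.tar.gz
On the machine receiving the file:
nc <server ip address> 1234 > backup.tar.gz

And a quick scan to see which of the ports 20 through 100 are open on a host:
nc -z -v <server ip address> 20-100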

Sunday, 5 November 2006

AptonCD - Create a backup of all the packages you have installed using apt-get

Consider this scenario... You are interested in installing GNU/Linux on your machine. Assuming you already have the latest version burned onto a CD, it is a simple affair of popping the CD into your CD drive and starting the installation. But once the installation is done, you will most certainly want to install additional software apart from what is bundled on the CD. And if you are using a Debian based Linux distribution such as Ubuntu, you will be using the apt-get method. Over a period of time, you will have installed a good deal of additional software, including any packages needed to satisfy dependencies, and upgraded some of the software to the most recent versions.

The problem occurs when you decide to re-install Linux on your machine. You are forced to start all over again, downloading the additional software using apt-get. Personally, I have re-installed Debian or a Debian based Linux distribution umpteen times on my machine. And each time I have wished there were a simple way of backing up the packages I had previously downloaded and installed via apt-get.
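
For the record, the manual workaround is to back up the .deb files which apt-get caches in /var/cache/apt/archives and feed them back to dpkg after the re-install. A rough sketch, with /backup standing in for wherever you keep your backups:

cp /var/cache/apt/archives/*.deb /backup/
... and after the fresh installation:
sudo dpkg -i /backup/*.deb

This works, but it is tedious and error prone, which is exactly the itch the project below scratches.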

A good samaritan pointed me to a unique project named AptonCD which allows one to create a CD image (ISO) of all the packages downloaded via apt-get, or even of the packages in a given repository.

On Ubuntu for instance, you can install it using the command:
$ sudo apt-get install aptoncd
Once it is installed, you will find a Gnome menu entry at System -> Administration -> AptonCD. Clicking on it opens a GUI which aids in the creation of an ISO image of all the packages stored in the /var/cache/apt/archives directory, along with any other files which are needed. You can also start the AptonCD program by running aptoncd from the command line.

So how do you use the program ?

It is simple really. The GUI has two tabs, namely Create and Restore. The Create tab has a single button which, when clicked, copies all the necessary packages from the /var/cache/apt/archives directory and displays them in a pop-up dialog. Here you get to decide whether to add any additional packages stored in an alternate location or remove some of the already selected packages. There is also an option to set the target media as a CD or DVD and to choose the location where you want to save the resulting image.

Fig: The create tab has just a single button

Fig: Selectively include the packages using this dialog

Once the choices are made, the program creates the necessary CD/DVD image and saves it in the location you had chosen. Now you can either store it in a different location or burn it to a CD/DVD.

Fig: Restore tab allows you to restore from backup

The Restore tab of the AptonCD GUI contains three buttons, each catering to a specific purpose. This tab allows you to -
  • Restore all packages available from an AptonCD media (read it as CD or DVD) on to the computer.
  • Restore packages from an AptonCD ISO image previously generated and stored locally.
  • Add a CD/DVD you have created as a repository for apt-get, aptitude or synaptic. This means the program adds the necessary line to the /etc/apt/sources.list file, which will enable you to use apt-get or any similar program to install the software straight from the CD (see the illustrative line below).
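To give an idea of what that last option does behind the scenes, the line added to /etc/apt/sources.list looks something like the one below; the label inside the brackets is a made-up example and will depend on how the disc was created:
deb cdrom:[APTonCD for ubuntu edgy]/ /
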
I found this program really convenient to use not only when I re-install Ubuntu but also when I want to install the same set of programs on a different machine.

One thing it lacks is a way to automatically download packages from a remote repository and create a CD/DVD image from them. But then this software is still in its beta stage, and hopefully we will see more features built into it in the coming years.

The AptonCD project is the brainchild of Rafael, whose first language (I believe) is Portuguese. Not surprisingly, I found that the help files bundled with the project require a bit more work; in their present form, they are just placeholders for the required documentation. But he has done a remarkable job on the software itself, and in its current form it works flawlessly.

Friday, 3 November 2006

A talk with Jon Maddog Hall - the spokesman for the open source community and president of Linux International

Jon Hall, president of Linux® International, is a passionate spokesman for the open source community and its ideals. Over a 30-plus-year career, Hall has been a programmer, systems designer, systems administrator, product manager, technical marketing manager, author, consultant to local, state and national governments worldwide, and college educator. He is currently an industry consultant. While at Digital, he became interested in Linux and was instrumental in obtaining equipment and resources for Linus Torvalds to accomplish his first port, to Digital's Alpha platform. Hall serves on the boards of several companies and several nonprofit organizations. At the U.K. Linux and Open Source Awards 2006, he was honored with a Lifetime Recognition Award for his services to the open source community.

Scott Laningham interviews Jon Maddog Hall, quizzing him on the progress and challenges of open source and on the need to recapture a purer vision of education. It is fascinating to hear Maddog speak (mp3 - 11MB) about his experiences (mp3 - 7MB) and thoughts on Linux and the effort that goes into making businesses understand the concept of open source and free software. You can also read the full transcript of the talk here.

Wednesday, 1 November 2006

Drupal 5.0 beta scaling new heights in the Content Management Systems arena

There was a time when publishing content online required fairly good technical knowledge, even when the data resided in static HTML pages. Then the blogging revolution happened and the rest, as they say, is history. Nowadays it is possible to publish content online without an iota of knowledge of HTML. The person writing an article can free his mind to fully concentrate on what he is writing, as the technical aspects of publishing are taken care of for him. And catering to this new-found craze for publishing content online, a plethora of content management systems have sprung up - most of them released under a free license.

One of my hobbies is to try out different blogging and content management systems, and I have tried out a whole lot of them. In a previous article, I explained how to set up and configure a Wordpress blog on one's personal machine. Whereas Wordpress is purely a blogging tool, a content management system is much more than that. In fact, a blog forms only one part of a content management system (CMS). To qualify as a CMS, an application should support other features such as forums, fine-grained access control and a rich collection of modules which extend the functionality of the site.

One such content management system is Drupal. I have been an ardent fan of the Drupal project from the first time I tried it out, which was ver 4.6. As much as I liked Drupal, it did have a big drawback: it was quite difficult to install on a remote host without shell access. Mind you, the installation as such was a simple affair, and it was possible to install it painlessly. But without shell access, it became a bit complicated. So I found that while most web hosting providers were touting one-click installs of other CMSes and blogging tools, they were silent about their support for Drupal. The end result was that you had to do a lot of research before choosing a web hosting provider if you intended to use Drupal for your site.

Things have changed for the better, though, with the beta release of the latest version, Drupal 5.0. Even though it is still in beta - the formal release will happen once all the bugs are ironed out - I did not run into any problems while installing and using it on my machine.

One of the foremost features of Drupal 5.0 beta is its support for an entirely web-based installation. The Drupal developers have finally heeded the requests for this and many other features and have delivered on their promises. I am happy to say that I found the Drupal two-click installation as painless and simple as a Wordpress installation.

Once the installation is done, the first thing that will strike you is the default theme. Drupal now sports a polished new theme called Garland, which is a fluid theme; a fixed-width version of the same theme, named Minnelli, is also provided. One of the advantages of the new themes is the ease with which you can change the colors of various elements such as the header, content and links, all with the help of a color scheme dialog. Drupal offers 7 preset color schemes, but there is an option to create a custom color scheme too.

Fig: Set your own color scheme to your site

The administration section has been significantly overhauled and has seen major visual changes, all for the better. It is now possible to have one theme for the administration section and a separate one for the rest of the site, a la Joomla, which is a big step forward considering that some themes messed up the rendering of the administration section even while looking beautiful on the rest of the site. For example, to change the theme of the administration section, I can navigate to Administer -> Site configuration -> Administration theme and select the one of my choice from a drop-down list.

Fig: The administer page has seen a visual overhaul

It is now easier to navigate across the different settings in the administer section, as it is broadly divided into 5 intuitive sections: Content Management, Site Building, Site Management, User Management and Logs. The contents of the administer section can also be sorted by task or by module.

Drupal 5.0 beta makes extensive use of jQuery (a popular JavaScript library) to impart more functionality to many aspects of Drupal.

The default Drupal 5.0 setup does not ship with a WYSIWYG editor for content creation, but you can install the Tinymce module and use the popular Tinymce editor for the purpose, as sketched below. I think it would have been better if Drupal had made it available in the default setup; after all, the primary purpose of a CMS is to make it as easy as possible to publish content online without getting bogged down in technical vagaries.
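
Installing a contributed Drupal module like Tinymce generally boils down to unpacking it into the sites/all/modules directory and switching it on from the admin pages. A rough sketch, assuming Drupal lives in /var/www/drupal and using a made-up tarball name (download the real one from drupal.org):

cd /var/www/drupal/sites/all/modules
tar -zxvf tinymce-5.x.tar.gz

Then enable the module under Administer -> Site building -> Modules. Note that the Tinymce module expects the Tinymce editor itself to be downloaded separately and placed inside the module's directory; the module's README spells out the exact path.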

One feature I would like the Drupal developers to work on is keeping the CSS class and id names in the themes consistent across versions, even when adding new features in upcoming versions. The current problem is that if you have created a very beautiful theme for Drupal ver 4.7, you will have to do some major tweaking of the theme to get it to work well in Drupal 5.0 beta.

All in all, version 5.0 beta is a big step in the right direction for this very popular content management system. And notwithstanding the beta tag, I found Drupal ver 5.0 to be remarkably stable and a great improvement over the earlier versions.

Monday, 30 October 2006

MythTV : How it flags commercial advertisements

MythTV is a free, open source digital video recorder project distributed under the terms of the GNU GPL. In a previous post, I wrote about the features of MythTV and listed a number of MythTV sites which provide help in installing it and configuring it to work on one's machine. MythTV allows one to watch TV on one's computer, and it has the capability to intelligently detect the commercials in TV programs and skip through them.

The result is an advertisement-free program for your viewing pleasure. I have at times wondered about the techniques used by MythTV to accurately detect when the ads come up and successfully skip them. It seems MythTV has a variety of tricks up its sleeve.

MythtvPVR - an unofficial site dedicated to MythTV personal video recorders - has an interesting article which explains all the techniques used by MythTV to successfully flag commercial advertisements and skip through them, providing uninterrupted viewing of your favorite TV soaps.