Tuesday, 28 February 2006

apt-pinning - Configuring Debian to run the latest packages

The first time I installed and tried out the Debian Linux distribution, I was surprised by how differently it was configured - the placement of the configuration files, the commands used and so on. Coming from a Red Hat background and tuned to the Red Hat way of doing things, I had a learning curve to overcome.

But once I mastered how to configure things in Debian, I realised that I liked the Debian way of doing things much more than the Red Hat way. One thing that really put me off, though, was that Debian installed antiquated versions of the software I use on a daily basis, and I needed something more recent. I explored how to install the cutting edge of the software of my choice in Debian and did get quite a few suggestions from various quarters, including one to incorporate the backports repository into the distribution.

But nobody told me about apt-pinning - the process of mixing and matching the stable, unstable and testing repositories to get a stable Debian distribution which also runs the latest versions of one's software. And because I was largely unsuccessful in my endeavour of getting the latest versions of software running on Debian stable, I switched to Ubuntu.
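To give a flavour of what pinning involves, here is a minimal sketch; the mirror URL and the priority values are only illustrative, and the tutorial mentioned below covers the details. You list more than one release in /etc/apt/sources.list and then tell apt which one to prefer in /etc/apt/preferences:

#FILE: /etc/apt/sources.list
deb http://ftp.debian.org/debian stable main
deb http://ftp.debian.org/debian testing main

#FILE: /etc/apt/preferences
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=testing
Pin-Priority: 650

With something like this in place, the system tracks stable by default, and a newer package can be pulled in selectively with a command of the form 'apt-get install -t testing <package>'.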

I recently came across this lucid tutorial written by John H. Robinson called "Apt-Pinning for Beginners", which explains the process in very clear terms. Had I come across this tutorial earlier, I would still be using Debian on my PC.

Sunday, 26 February 2006

Deskbar Applet - Integrating Google Search on the Linux Desktop

Deskbar is an applet available to Gnome users which brings all the search functions together into one single utility. To start using this applet, you first have to download it from the repositories, which is as simple as running the command:

# apt-get install deskbar-applet
...if you are using a Debian based system. Once the applet is installed on the machine, you have to add it to the panel. This is because the applet does not work as a standalone application and needs a wrapper in order to run. I added the Deskbar applet to the panel by right clicking on the panel and then clicking the "Add to Panel..." menu selection.

I have been using this applet for some time now and am really impressed by the amount of search integration that is possible on the desktop. In fact, it wouldn't be far off the mark to compare Deskbar to its search counterpart on OS X - Spotlight.

Fig: Deskbar applet in action on the Gnome desktop

Some of the things I found really interesting about this nifty search tool are as follows:
  1. Excellent integration with Beagle to search for files containing a certain text.
  2. Open applications by just typing the name of the application in the Deskbar text box.
  3. Yahoo search engine integration: just type some text into the Deskbar input residing in your panel and, provided you are online, a series of web search results pops up on your desktop.
  4. Google web search on your desktop: I found this really interesting because Google has not yet released an equivalent search tool for the Linux platform, similar to the one for Windows. But to have an integrated Google search in Linux (using Deskbar of course), some additional work has to be done.
Steps to integrate Google web search with Deskbar applet

The first thing to do is go to the Google API page and create an account. If you have a Gmail account, that will be sufficient. Once the account is created, Google generates a Google Web API licence key and sends it to the email address specified while creating the account. The key has to be retrieved and placed in the file '~/.gnome2/deskbar-applet/Google.key'. This licence key has to be included with every call made to the Google Web API service, which is exactly what the Deskbar applet does. Google allows you to make up to 1,000 queries per day in this manner.

Fig: The licence key generated by Google

The next step is to download the Google developers kit (which is around 650 KB in size) from the same site, extract the file GoogleSearch.wsdl and store it in the location '~/.gnome2/deskbar-applet/GoogleSearch.wsdl'. That is it. Now if a query is typed in the Deskbar applet, a list pops up giving the results of the Google web search for the query. Double clicking one of the query results will open the corresponding web page in the default web browser.
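In shell terms, the two steps above boil down to something like the following; the name of the downloaded developer kit archive is only an assumption here, so use whatever file Google actually gives you:

$ mkdir -p ~/.gnome2/deskbar-applet
$ cat > ~/.gnome2/deskbar-applet/Google.key     # paste the licence key from the email, then press Ctrl-D
$ unzip googleapi.zip GoogleSearch.wsdl -d ~/.gnome2/deskbar-applet/   # pull just the WSDL file out of the kit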

Saturday, 25 February 2006

An In-depth Review of the Ext3 Journaling Filesystem

How many of us pause to think about the differences between the ext3 and ext2 filesystems? I would guess not many. It is generally acknowledged that ext3 is superior to ext2, and most Linux distributions default to formatting one's partitions as ext3 unless told otherwise. If you ask me, the fundamental difference between ext2 and ext3 is that ext3 has journaling support built into it whereas ext2 does not.

Now here is the interesting part: if you do a clean unmount of an ext3 file system, there is nothing to stop you from mounting it as an ext2 filesystem, and your system will recognise the partition as well as the data in it without any problem.
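For instance, something along these lines should work (the device name is just an example - substitute your own ext3 partition):

# umount /dev/hda2
# mount -t ext2 /dev/hda2 /mnt   # the journal is simply ignored and the data is accessible as plain ext2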

I came across this excellent analysis of the ext3 journaling filesystem by Dr. Stephen Tweedie which should be an informative read for people interested in the subject of filesystems. The article was written in 2000, but I believe what is explained there has not lost its relevance even today.

Friday, 24 February 2006

Password protect your website hosted on Apache web server

At times, when I am browsing the web, I click on a link such as this one and, instead of a web page, I get a dialog box asking me to enter my user name and password. Only after I have been authenticated do I get access to the website (or page). This kind of password protection is very simple to implement in the Apache web server.

Basically, the whole process of password authentication hinges on just two files:
  1. .htpasswd - This file contains the username-password combinations of the users who are allowed access to the website or page. It can reside anywhere other than the directory (or path) of your website, but is usually created in the Apache web server directory (/etc/apache2/.htpasswd). This is because the file should not be accessible to visitors to the site.
  2. .htaccess - This file defines the actual rules by which users are given or denied access to a particular section of the website, or to the whole of it. It should reside in the base directory of one's website. For example, if my website is located in the path '/var/www/mysite.com' and I want to add user authentication to the entire website, then I will store the file at '/var/www/mysite.com/.htaccess'.
If you are implementing this feature for the first time on your server, you have to create the .htpasswd file. I chose to create it in the Apache 2 configuration directory. After this, you can add any number of users and assign passwords to them. This is done with the htpasswd utility, which gets installed along with the apache2 web server.
# htpasswd -c /etc/apache2/.htpasswd ravi
New password: *****
Re-type new password: *****
Adding password for user ravi
#_
In the above command, the -c option asks the htpasswd utility to first create a .htpasswd file in the /etc/apache2 directory. At the same time, I have provided the name of the user (myself) and the utility asks me to type a password, which will be used to authenticate me before allowing access to the site.

Note: Any number of users and their passwords may be entered in the same .htpasswd file per website.
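For example, to add a second user to the file created above, run htpasswd again, this time without the -c option so that the existing file is appended to rather than overwritten (the user name here is of course just an example):

# htpasswd /etc/apache2/.htpasswd john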

Now I make this .htpasswd file readable by all the users as follows:
# chmod 644 /etc/apache2/.htpasswd
The next step is to create the .htaccess file which will protect all or part of the website located in /var/www/mysite.com. Since I am interested in password protecting the whole website, I create the file in the /var/www/mysite.com base directory. If I were interested in protecting only a sub-directory of the site (say one named 'personal'), I would create it in the '/var/www/mysite.com/personal' directory instead.
# touch /var/www/mysite.com/.htaccess
Now I enter the following lines in the .htaccess file :
AuthUserFile  /etc/apache2/.htpasswd
AuthGroupFile /dev/null
AuthName MySiteWeb
AuthType Basic
require user ravi
Here 'AuthUserFile' points to the place where I have stored the .htpasswd flat file which contains the user names and passwords.
AuthGroupFile points to the group file which contains the groups of users. Here I have opted not to have a group file and hence point it to /dev/null.
The AuthName directive sets the name of the authorization realm for this directory. This is a name shown to users so they know which username and password to send; it can contain spaces, in which case it must be enclosed in double quotes.
An AuthType value of Basic instructs Apache to accept basic, unencrypted passwords from the remote user's web browser.
The last line - require user ravi - tells Apache that only the user named 'ravi' should be allowed access, provided the right password is entered. If more than one user is to be allowed access, those user names can simply be appended to the line. Suppose I want another user to have access as well; I modify the line as follows:
require user ravi john
And if I want all the users listed in the .htpasswd file to be allowed access, the line is modified thus:
require valid-user
The .htaccess file also has to be provided the right file permissions.
# chmod 644 /var/www/mysite.com/.htaccess
One more step is needed; that is, to change a line in the apache2 configuration file. In a previous article titled "Host websites on your local machine using Apache webserver", I had dwelt upon the modular structure of the Apache 2 configuration files.

Following that structure, and assuming the configuration file for the website /var/www/mysite.com is stored in /etc/apache2/sites-available/mysite.com, I open the file in an editor and change the following lines:
<Directory /var/www/mysite.com/>
...
AllowOverride None
...
</Directory>
TO
<Directory /var/www/mysite.com/>
...
AllowOverride AuthConfig
...
</Directory>
... and save and exit the file. Now restart the Apache web server so that it re-reads the configuration file.
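On a Debian-style apache2 installation, that restart is just:

# /etc/init.d/apache2 restart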

Note: If you are using apache instead of apache2, then the file to be edited is /etc/httpd/conf/httpd.conf though the lines to be edited are the same.

That is it. From now on any user who visits the website will first have to enter the correct username and password before he is allowed access to the website.

Monday, 20 February 2006

A concise explanation of I-Nodes

At any given time, a Linux machine will have many thousands of files, both system and user files. Filesystems like ext2 or ext3 support file names up to 255 characters long and files that can grow to gigabytes in size. Managing these files and keeping track of which files contain what data could be a nightmare for any OS. To overcome this logistical nightmare, Linux uses what are called i-nodes to organise block allocations to files.

Each file in Linux, irrespective of its type, has a unique identity in the form of an i-node number associated with it. No two different files on the same filesystem can have the same i-node number.

So what are I-Nodes?
I-nodes are simply data structures which hold information about a file - information which is unique to the file and which helps the operating system differentiate it from other files.

For example, I have a file by name test.txt in my home directory. To know the i-node number of the file, I run the following command:

$ ls -il test.txt
148619 -rw-r--r-- 1 ravi ravi 125 2006-02-14 08:39 test.txt
As you can see, the inode number of the file is the first number in the output, which is 148619. On this filesystem, no other file will have this number, and the operating system identifies the file test.txt not by its name but by its inode number.

Suppose I create a hard link of the file test.txt as follows:

$ ln test.txt test_hardlink.txt
Will the two files test.txt and test_hardlink.txt have the same i-node number? Let's find out.

$ ls -i test.txt test_hardlink.txt
148619 test_hardlink.txt
148619 test.txt
As you can see, both files have the same i-node number and, as far as the OS is concerned, they are one and the same file. If I make any changes to one of the files, they are reflected in the other too. And the interesting thing is that if I move test_hardlink.txt to another location on the same filesystem, it will still point to the same file.

This brings up a security issue. Suppose you have a file which contains data that you do not want another person to read without your consent. A person with access to this file can create a hard link to it in another location and will automatically be able to read the file and even see the changes you make to it. And even if you delete the said file in your directory, the file is not actually deleted, because another directory entry still points to the same i-node.

So system administrators are sometimes known to search out all the names pointing to an i-node and delete every one of them to ensure that the file is indeed deleted. You can do it using a combination of the find and rm commands as follows:

# find / -samefile test.txt -exec rm -f {} \;
Or if you know the inode number of the file, then

# find / -inum 148619 -exec rm -f {} \;
... which will also do the same job.

Note: Every file on the system is allocated an i-node. And there can never be more files than i-nodes.

Typically, an i-node will contain the following data about the file:
  • The user ID (UID) and group ID (GID) of the owner of the file.
  • The type of file - is it a directory or another type of special file?
  • User access permissions - which users can do what with the file.
  • The number of hard links to the file - as explained above.
  • The size of the file.
  • The time the file was last modified.
  • The time the i-node was last changed - if the permissions or other information about the file change, the i-node is changed.
  • The addresses of the first few data blocks (12 direct block pointers in ext2/ext3) - data blocks are where the contents of the file are placed.
  • A single, a double and a triple indirect pointer for addressing the remaining blocks of larger files.
A point to note here is that the actual name of the file is not stored in the i-node. That is because file names are stored in directories, which are themselves files.
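Most of these fields can be inspected directly with the stat command. A sample run on the file above looks roughly like this (output abridged, and the exact layout and device numbers will vary between systems):

$ stat test.txt
  File: `test.txt'
  Size: 125   Blocks: 8   IO Block: 4096   regular file
Device: 305h/773d   Inode: 148619   Links: 2
Access: (0644/-rw-r--r--)  Uid: (1000/ravi)   Gid: (1000/ravi)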

Thursday, 16 February 2006

Fstab in Linux

Fstab is a file in Linux and Unix which lists the available disks and disk partitions on your machine and indicates how they should be mounted. The full path of the fstab file in Linux is /etc/fstab. Ever wonder how Linux automatically mounts any or all of your partitions at boot time? It does so by reading the parameters from the /etc/fstab file.

Syntax of fstab file


The following is the syntax of the /etc/fstab file in Linux.


[Device name | UUID] [Mount Point] [File system] [Options] [dump] [fsck order]

Device Name - Denotes the unique device node which identifies a particular device. In Linux and Unix, everything is considered to be a file - that includes hard disks, mice, keyboards et al. Possible values are /dev/hdaX, /dev/sdaX, /dev/fdX and so on, where 'X' stands for a partition number.

UUID - This is an acronym for Universally Unique IDentifier. It is a unique string assigned to your device. Some modern Linux distributions such as Ubuntu and Fedora use a UUID instead of a device name as the first field in the /etc/fstab file.

Mount Point - This denotes the full path at which you want to mount the specific device. Note: Before you provide a mount point, make sure the directory exists.

File System - Linux supports lots of filesystems. Some of the possible values are ext2, ext3, reiserfs, ntfs-3g (for mounting Windows NTFS partitions), proc (for the proc filesystem), udf, iso9660 and so on.

Options - This field accepts the largest number of values and gives the user the greatest flexibility in how the devices connected to the machine are mounted. Check the fstab man page for the full list of options; there are too many to cover here.

Dump - The dump field takes the values 0 and 1. If it is set to 0, the dump utility will not back up the file system; if set to 1, the file system will be backed up.

fsck order - This field takes the values 0, 1 and 2. A value of 0 means fsck does not check the file system at boot. A value of 1 is reserved for the root file system, which is checked first, while 2 is used for other file systems, which are checked after the root.
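Putting the six fields together, a typical entry reads like this (the device and mount point are only an illustration):

/dev/hda5   /data   ext3   defaults   1   2

This mounts /dev/hda5 on /data as an ext3 filesystem with the default options, includes it in dump backups, and has fsck check it after the root file system.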

Contents of a typical /etc/fstab file in Ubuntu Linux


The following are the contents of the /etc/fstab file on my Ubuntu machine.


proc /proc proc defaults 0 0
# /dev/sda1
UUID=dc2c5c36-3773-4627-8657-626f0ef8aa9e / ext3 relatime,errors=remount-ro 0 1
# /dev/sda2
UUID=57864470-a324-4c5c-ad49-ed1a05300b0d none swap sw 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0
/dev/fd0 /mnt/floppy ext2 noauto,user 0 0

From the above listing, one can gather that the proc file system is mounted on the /proc directory, and that UUIDs have been provided for /dev/sda1 and /dev/sda2. The fsck order for the root partition '/' has been set to 1, which means it is checked first at boot time.


For more details on the contents of the /etc/fstab file, check the man page of fstab in Linux.

Wednesday, 15 February 2006

Host websites on your local machine using Apache webserver

Apache is a popular web server which has grabbed a major slice of the web server market. What is interesting about Apache is its stability and scalability, which make it possible to serve even very high traffic web sites without a hitch. It also helps that it comes at an unbeatable price (free). It is bundled by default with most Linux distributions, and in cases where it is not included on the CD, it is a simple matter of downloading the package and installing it.

Here I will explain how you can set up the Apache web server to serve web pages from your own machine.

Packages to be installed
apache2, apache2-common, apache2-mpm-prefork
And optionally ...
apache2-utils, php4 (if you need PHP support), php4-common

Apache2 has a very modular structure. In fact, if you look at the directory structure of the configuration files (see figure below), you will realise that it is designed to avoid clutter. For serving web pages on our machine, we need only be concerned with two directories - /etc/apache2/sites-available/ and /etc/apache2/sites-enabled/.

Fig: Apache2 directory structure

In the sites-available directory you create the files (they can have any name) containing the configuration details of the sites you aim to host - one file per site. In the sites-enabled directory, you then create a soft link to each of those files.

Configuration details
Let us assume, for the sake of this tutorial, that I have decided to host two websites on my local machine. All the files related to the two websites have already been created and saved in two separate directories named websiteA and websiteB. The default location from which Apache serves files is usually /var/www, so I copy the two directories websiteA and websiteB to this location.
$ sudo cp -R -p websiteA /var/www/.
$ sudo cp -R -p websiteB /var/www/.
The -p option preserves the ownership of the files and directories while copying.

Next I move into the directory /etc/apache2/sites-available in order to configure the Apache web server to recognise my sites. In this directory there is a file called default which contains the configuration parameters. It is meant to be a skeleton file which can be used as a base for additional configuration.

I made two copies of this file in the same directory and renamed them websiteA and websiteB, roughly as sketched below. You can really give them any name, but for clarity it is prudent to use the name of your site.
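The copying itself amounts to nothing more than this (assuming the stock default file shipped by the apache2 package is present):

$ cd /etc/apache2/sites-available
$ sudo cp default websiteA
$ sudo cp default websiteB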

Now I opened the file /etc/apache2/sites-available/websiteA in my favourite editor (vi) and changed the following portions:
#FILE: /etc/apache2/sites-available/websiteA
NameVirtualHost websiteA:80

<VirtualHost websiteA:80>
ServerAdmin ravi@localhost
ServerName websiteA
DocumentRoot /var/www/websiteA/
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/websiteA/>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
...
</Directory>
...
...
</VirtualHost>
In the above listing, ServerName indicates the name (or alias) by which your site is to be recognised. Usually a webserver hosts multiple sites with different domain names on the same machine, and at such times the webserver distinguishes between the websites through the ServerName directive. Since this is a local machine, I have given it the name websiteA. But in actual hosting, where you want to make the website available on the net, you would give it your domain name, say www.mysite.com.
The DocumentRoot option denotes which files are to be served when the server name is entered in the web browser. This usually points to the place where you have stored your website files.
You can have multiple Directory tags but one of them should point to the location of the website files.
I made similar changes to the file websiteB, giving ServerName the unique value websiteB instead of websiteA as above. The Directory tag likewise contained the path /var/www/websiteB/ instead of websiteA.

Finally, because I was hosting this on my local machine and since I had not configured DNS, I had to edit the /etc/hosts file and include the name given in the ServerName portion of the configuration file. After the inclusion, my machine's /etc/hosts file looked as follows:
#FILE : /etc/hosts
127.0.0.1 localhost.localdomain localhost websiteA websiteB
That was it. Now, in order to enable the websites, all I had to do was create symbolic links to these files in the /etc/apache2/sites-enabled/ directory and restart the web server. It follows that to disable a website, all you need to do is remove its symlink from the sites-enabled directory.
$ cd /etc/apache2/sites-enabled
$ sudo ln -s /etc/apache2/sites-available/websiteA .
$ sudo ln -s /etc/apache2/sites-available/websiteB .
Restart the apache web server
$ sudo apache2ctl restart
OR
$ sudo /etc/init.d/apache2 restart
Now I tested it by opening the web browser and typing http://websiteA to get the first website and http://websiteB to get the second. Voila! Success!

Note: In previous versions of Apache, all these configuration parameters were placed in a single httpd.conf file. But I feel apache2's configuration file layout is much more intuitive and easier to manage.

Tuesday, 14 February 2006

Multi booting 100+ OSes on a single machine

When I wrote an article called Effective Partitioning - The how and why of it, many readers questioned why I had installed over 4 OSes on my machine. Now here is a person who has installed 100+ - I am not pulling your leg, the exact number is 110 OSes - on his machine.

Now I can understand someone running 4 OSes ;) on one's machine. But what is the use of running over a hundred different OSes? Perhaps he is a Linux enthusiast who has undertaken this project for his personal satisfaction. Whatever the case, reading about his experiences throws new insight into partitioning one's hard disk(s) to accommodate multiple OSes.

He has used 4 hard disks (two IDE and two SATA) totalling 900 GB. All this space across the hard disks is divided into 144 partitions (12 primary partitions and 4 extended partitions, with the rest being logical partitions). Not surprisingly, he has arrived at the conclusion that GRUB is the better boot loader and has installed it on its own partition for booting all these OSes.

These are his recommendations to users intending to multi-boot between OSes.
  • Partition your hard disk first before starting to install the OSes.
  • Keep personal data segregated from the OS files. Put another way, it is prudent to have a separate partition for /home.
  • He doesn't find any drawbacks in using one partition per Linux distribution.
  • Learn to boot the system manually which will give you a fair understanding of how the system is booted.
  • For all the Linux OSes, he has used one common swap partition.
Notwithstanding what I have described here, I strongly recommend reading the original article, which will throw new insight into partitioning one's hard disk. The knowledge gained will stand you in good stead the next time you decide to dual or multi-boot between OSes.

Additional Info
SATA stands for Serial Advanced Technology Attachment, which is touted as the next big advancement in hard disk technology. SATA drives use a much lower signalling voltage (just 250 mV), deliver increased data transfer rates of up to 300 MB/sec, and do away with master/slave configurations and jumper settings. It is also claimed that the technology allows for hot swapping the hard disk - that is, removing and replacing the disk while the computer is running. Because of these characteristics, Serial ATA is considered to be a good choice for implementing RAID.

Monday, 13 February 2006

Basic setup of MySQL in GNU/Linux

MySQL is a robust, lean database ideal for use on the web; in fact, many forums and blogs use this database to store and manage their content. Here I will explain how I set up the MySQL database on my machine. The main reason for documenting this is the difficulty I had connecting to the database in the first place - I was getting an error message when I tried connecting with my user name even after successfully creating the database. So this post is more of a reference point for me than something targeted at general reading.

I first downloaded and installed the following packages: mysql-server, mysql-client and mysql-admin. The last package is a GUI front-end for easy administration of the database.

Next I opened up a terminal and executed the following command:
$ sudo mysql
Password : *****
mysql>_
... and I was dropped into the mysql prompt. Here I granted full rights on the database to myself by using the following command:
mysql> GRANT ALL PRIVILEGES ON *.* TO ravi@localhost
    -> IDENTIFIED BY 'password'
    -> WITH GRANT OPTION;
mysql> quit
Since I was setting up the database for local use, I gave the machine name as localhost. If your machine is a part of the LAN and you have set up a domain, then change 'localhost' to your machine name as recognised by the domain.

Once I had given full rights to myself, I logged into the mysql database with my username as follows:
$ mysql -h localhost -u ravi -p
Enter Password: *****
mysql>_
The password is the one provided in the GRANT statement above and is different from my Linux account password. From now on, I can execute further commands from the mysql prompt to manipulate the database as well as create and modify tables.

Some basic SQL manipulations which I found useful

Creating a database
mysql> CREATE DATABASE mydatabase;
Deleting a database
mysql> DROP DATABASE mydatabase;
Listing the tables in the currently selected database
mysql> SHOW TABLES;
Switching to another database
mysql> USE my_other_database;
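To round these off, here is a small illustrative session creating a table and querying it; the table and column names are made up purely for the example:

mysql> USE mydatabase;
mysql> CREATE TABLE notes (id INT PRIMARY KEY, body VARCHAR(255));
mysql> INSERT INTO notes VALUES (1, 'hello world');
mysql> SELECT * FROM notes;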
MySQL Administrator - A very good GUI front-end for MySQL administration
Once I had completed the above tasks, I fired up this GUI (mysql-admin) and was presented with the login screen for accessing the database (see figure below).

Fig: mysql administrator GUI login interface

Fig: The administration interface after logging in

I entered my login name as well as the mysql password and was able to access the interface, from which I could manipulate the databases and tables according to the rights allotted to me.
Resources
Official MySQL documentation online.
Also read : MySQL Cheat Sheet

Saturday, 11 February 2006

Integrating SSH in GNU/Linux

SSH stands for Secure SHell and, as the name indicates, it is a secure way of connecting to a remote machine over a network. SSH uses strong encryption to protect data while it is transferred from one node to another, which means anybody who grabs the packets while they are in transit will not be able to read the data. I had covered this topic in earlier posts, namely "Setting up SSH Keys" (here) and "Secure SHell" (here).

Now Kimmo Suominen has written an excellent tutorial titled getting started with SSH in Linux which makes an interesting read. He takes the reader through the tasks of creating SSH keys, integrating SSH with X Windows (both local and remote displays), securely copying files between remote systems using scp, logging into remote systems using slogin and a lot more.
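For a taste of what the tutorial covers, the basic key setup and a secure copy look something like this (the host name and file are just placeholders):

$ ssh-keygen -t rsa                              # generate a key pair; choose a good passphrase
$ ssh-copy-id ravi@remote.example.com            # append the public key to ~/.ssh/authorized_keys on the remote host
$ scp report.txt ravi@remote.example.com:/tmp/   # copy a file over the encrypted channel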

Friday, 10 February 2006

XGL - An Xserver framework based on OpenGL

Nowadays you see a lot of excitement over XGL - an X server architecture layered on top of OpenGL. What this means is a flurry of activity on the desktop front, including special effects which would put even the upcoming Microsoft Vista OS to shame. What I find exciting about this project, started by David Reveman back in 2004, is the support and contribution provided by Novell. In fact, Novell has released a few video clips showcasing some of the special effects that are possible using XGL, and they are worth watching. For those of you who are constrained by bandwidth, I have included a few screenshots from the video clips below.

Fig: OSX Expose like functionality built-in

Fig: Transparency of windows

Fig: Special effects while changing virtual desktops

The project is still in the testing phase, so we ordinary users will have to wait with our fingers crossed until an XGL-based server has been integrated into the Linux distributions (I guess the first one will most probably be SuSE).

Well, I belong to the old school of thought when I say that it is better to have a spartan desktop if you want to do some serious work. In support of my view, try putting a kid in a room full of toys and making him do his homework. In 90% of cases, the kid will be distracted. Perhaps you will have to scold him, threaten him with consequences or even punish him, and he may eventually complete the work allotted to him. I can see the same happening at work too: for many of us, the fear of getting chewed out by the boss or an imminent deadline is the dominant factor that makes us put our mind where the work is. So while I am doing serious work on my computer, I prefer a desktop without any frills.

That doesn't mean that XGL and related projects are a waste of time. They do play a significant role in furthering the Linux cause. Some of them being:
  • Showcasing the power of OpenGL, which will grab the attention of the large number of game developers who will then hopefully consider developing cross-platform games using OpenGL instead of the Windows-centric games developed at present using DirectX.
  • You can show these special effects to your friends and family, and I will bet my shirt that they will ask you to help install Linux on their machines. This means a lot less effort in persuading people to embrace Linux over any proprietary OS.
  • If you are using your Linux machine for entertainment and fun, these special effects are a good way to pass the time.
  • I strongly feel that it is projects such as these that will be the tipping point in enabling Linux to grab a major slice of the OS market.
So what are the prerequisites for running an XGL-based X server on your machine?

Novell claims that your PC specifications need only be modest for this to work. But you will definitely need a graphics card, preferably one from NVIDIA - which in my opinion has the best support for Linux - because OpenGL relies heavily on the graphics card for rendering the effects.

Wednesday, 8 February 2006

A collection of books, howtos and documentation on GNU/Linux for offline use

If I am asked what is one of the most important strengths of GNU/Linux, I would tell you it is the documentation. That is right! What makes GNU/Linux such a pleasure to use is the excellent documentation included with each and every tool bundled with it. Just try learning to use iptables without reading the documentation even once, and you will get the idea. The documentation in Linux is available in a variety of formats - as man pages, info pages, HTML pages, PostScript and in some cases even PDF. These documents are so good that many times when I have asked questions in various forums and Linux chat rooms, I have got the now popular RTFM reply, which means Read The Fine Manual.

But not many people are aware that you can have additional documentation and even whole books available locally, making your GNU/Linux experience that much richer. Here are a few that have come to my notice.

Note: In each of the cases below, I have given the package name (in bold), and you first have to install it using the apt-get command if you are using Debian Linux, which means you run:
# apt-get install <package name>
Apt documentation
  • apt-doc - A detailed documentation on apt package management.
  • apt-howto - A HOW-TO on the popular subject of apt package management.
  • apt-dpkg-ref - An apt/dpkg quick reference sheet which forms a handy reference for those who find it difficult to memorise the commands.
Bash documentation
  • bash-doc - The complete documentation for bash in info format.
  • abs-guide - The Advanced Bash-Scripting Guide, which explores the art of shell scripting. It serves as a textbook, a manual for self-study, and a reference and source of knowledge on shell scripting techniques. The exercises and heavily-commented examples invite active reader participation, under the premise that the only way to really learn scripting is to write scripts.
Books on Debian
  • debian-installer-manual - This package contains the Debian installation manual in a variety of languages. It also includes the installation HOWTO and a variety of developer documentation for the Debian Installer. This is the right documentation to read if you are thinking of installing Debian on your machine.
  • debian-reference - This book covers many aspects of system administration through shell command examples. The book is hosted online at qref.sourceforge.net but by installing this package, you get to read the whole book offline.
HOW-TOs and FAQs
  • doc-linux-text and doc-linux-nonfree-html - Any long-time Linux user will know that in the past the HOWTOs and FAQs formed the lifeline of anybody hoping to install and configure Linux on their machine, especially when the machine had some obscure hardware. These packages download all the HOWTOs and FAQs hosted on the tldp.org site and make them available for offline use.
  • linux-doc - This package makes the Linux kernel documentation available. It might come in handy if you intend to compile a kernel yourself or want to dig deep into understanding the workings of the kernel.
PC Hardware
  • hwb - The Hardware Book, which contains miscellaneous technical information about computers and other electronic devices. Among other things, you will find the pinouts of most common and uncommon connectors, as well as information about how to build cables.
Linux books
  • rutebook - Rute User's Tutorial and Exposition is one of the finest books available for any aspiring, or even established, Linux user. It is actually a book available in print, but the author has released it online for free. You may read this book online too, but by installing this package you get to read it offline. I highly recommend it to anyone interested in using Linux.
  • grokking-the-gimp - Linux has a top-class graphics suite in the GIMP. I have been using the GIMP to manipulate the images published on this site as well as for other uses. This book, which is also available online, covers all aspects of working with the GIMP in detail, giving excellent practical examples interspersed with images. You may install this package if you wish to read the whole book offline. It is a 24 MB download though.
Now that you know which packages to download and install, you are faced with the big question: how does one find where the files of a package are installed? That is simple; just use the following dpkg command:
$ dpkg -L <package name>
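For example, to see where one of the packages above put its documentation files, you could run something like this (the package name is just one of those listed above, and the exact paths will differ):

$ dpkg -L apt-howto | grep /usr/share/doc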

Tuesday, 7 February 2006

Setting up a Mail Server on a GNU/Linux system

Have you ever wondered how you can convert your Linux machine into a mail server? A person who wants to set up a mail server is faced with choosing from a variety of mail server software such as Sendmail, Postfix and qmail. But setting up a Linux mail server is not just about installing this software - the real job is the configuration, where you need to know which options to enable and which to leave disabled. And of course there is the collection of software which helps the mail server do its job in a secure, safe and efficient way by providing additional features like on-the-fly virus checking, database integration, spam filtering, a user-friendly web-based client for checking mail, and cryptographic support.

Faced with all this work, you wish for a step-by-step document that implements a mail server in Linux from start to finish, right?

Then look no further, because Ivar Abrahamsen has written a lucid step-by-step tutorial on implementing a mail server on GNU/Linux which even a newbie can understand. He has used Postfix as the mail server, Courier IMAP, MySQL, SpamAssassin for spam filtering, ClamAV for virus checking, and SquirrelMail as the web-based mail client. A very good reference for anyone faced with the task of implementing a mail server in Linux.

Monday, 6 February 2006

Upgrading to Firefox 1.5 in Ubuntu Linux

I have a few OSes installed on my machine, but Ubuntu is the main distribution I use for most of my work, and Ubuntu Breezy ships with the Firefox 1.0.7 web browser. One grouse I have about Ubuntu is the quality of the Firefox build bundled with it - I am not talking about the version number. Using Firefox in Ubuntu has been a really bad experience: the browser is such a memory hog that at times it has crashed on me in the middle of sending an email, and it doesn't render the fonts properly on certain web pages.
So I decided to find a way to upgrade the web browser to the latest version, 1.5. Unfortunately, the Ubuntu repository does not yet have the latest build of the browser, so I had to look for other ways to upgrade.
That was when I came across this walkthrough at wiki.ubuntu.com. The basic steps outlined there for upgrading to Firefox 1.5 in Ubuntu Breezy are as follows:
  • Install the libstdc++5 library. Ubuntu Breezy ships a later version of this library (libstdc++6), whereas Firefox 1.5 requires the earlier one.
  • Download the Firefox 1.5 package from the official Firefox website and unpack it into the /opt directory. (Usually this is all that is required to get Firefox 1.5 running.) But for greater integration with other programs, and for importing your bookmarks from the earlier version of the browser, some additional work has to be done.
  • Link to your plug-ins
  • Rename your old profile in your home directory leaving it as a backup.
  • To ensure that firefox 1.5 is the default browser, modify the symbolic link using the dpkg-divert command.
Now you have the latest version of Firefox on Ubuntu. For people using other Linux distributions, after unpacking Firefox into the /opt directory you may add the path to your $PATH variable, roughly as sketched below. Earlier, when I was using Fedora, I used to unpack any software I wanted into the /opt directory and then include the path to the directory containing the binary in my .bashrc file.
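The generic /opt route boils down to something like this; the tarball name is only indicative, so use whatever file the Firefox site actually serves you:

$ sudo apt-get install libstdc++5
$ tar -xzf firefox-1.5.tar.gz           # the archive unpacks into a directory called 'firefox'
$ sudo mv firefox /opt/firefox-1.5
$ echo 'export PATH=/opt/firefox-1.5:$PATH' >> ~/.bashrc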

Sunday, 5 February 2006

Book Review: Linux Patch management - Keeping Linux systems up to date

Any system or network administrator knows the importance of applying patches to the various software running on their servers, be they bug fixes or upgrades. When you are maintaining just a single machine, this is a simple affair of downloading the patches and applying them. But what happens when you are managing multiple servers and hundreds of client machines? How do you keep all the machines under your control up to date with the latest bug fixes? Obviously, it is a waste of time and bandwidth to individually download all the patches and security fixes for each machine. This is where the book "Linux Patch Management - Keeping Linux Systems Up To Date", authored by Michael Jang, gains significance. The book, released under the Bruce Perens' Open Source Series, aims to address the topic of patch management in detail.


The book is divided into seven detailed chapters, each covering a specific topic related to patch management. In the first chapter the author starts by introducing the basic patch concepts and the various distribution-specific tools available to the user, including the Red Hat up2date agent, SUSE YaST Online Update, Debian apt-get and community-based sources like those in Fedora. What I found interesting was that instead of just listing the avenues the user has for patching a system, the author goes the extra mile to stress the need for maintaining a local patch management server and for supporting multiple repositories on it.

The second chapter deals exclusively with patch management on Red Hat and Fedora based Linux machines. Here the author walks the reader through creating a local Fedora repository. Maintaining a repository locally is not just about downloading all the packages to a directory on your machine and hosting that directory on the network; you have to deal with a number of issues, such as the hardware requirements, the partition layout to use, how much space to allocate to each partition, whether you need a proxy server, and more. In this chapter the author throws light on all these aspects in the course of creating the repositories. I particularly liked the section describing, in detail, the steps needed to configure a Red Hat Network proxy server.

The third chapter, SUSE's Update Systems and rsync Mirrors, describes in detail how one can manage patches with YaST - what up2date is to Red Hat, YaST is to SuSE. Around 34 pages are devoted to explaining every aspect of updating SuSE Linux by various methods, such as YaST Online Update and using rsync to configure a YaST patch management mirror for your LAN. The highlight of this chapter, though, is the explanation of Novell's way of managing the life cycle of Linux systems, which goes by the name ZENworks Linux Management (ZLM). Even though the author does not go into the fine details of ZLM, he gives a fair idea of this new topic, covering such basic tasks as installing the ZLM server, configuring the web interface, adding clients and so on.

Ask any Debian user what he feels is the most important and useful feature of the OS, and in 90 percent of cases you will get the answer that it is Debian's superior package management. The fourth chapter takes an in-depth look at the workings of apt. Usually a Debian user is exposed to just a few of the apt tools; in this chapter the author explains all the tools bundled with apt, which makes it a ready reference for anyone managing Debian-based systems.

If the fourth chapter concentrates on apt for Debian systems, the next chapter explores how the same apt package management utility can be used to maintain Red Hat based Linux distributions.

One of the biggest complaints from users of Red Hat based Linux distributions a few years back was the lack of a robust package management tool in the same league as apt. To address this need, a group of developers created an alternative called yum. The last two chapters of this book explore how one can use yum to keep a system up to date, as well as how to host one's own yum repository on the LAN.

Chapters at a glance
  1. Patch Management Systems
  2. Consolidating Patches on a Red Hat/Fedora Network
  3. SUSE's Update Systems and rsync Mirrors
  4. Making apt Work for You
  5. Configuring apt for RPM Distributions
  6. Configuring a yum Client
  7. Setting up a yum Repository
Meet the author
Michael Jang specializes in networks and operating systems. He has written books on four Linux certifications, and the one on the RHCE is very popular among students attempting to get Red Hat certified. He also holds a number of certifications himself, including RHCE, SAIR Linux Certified Professional, CompTIA Linux+ Professional and MCP.

Book Specifications
Name : Linux Patch Management - Keeping Linux Systems Up To Date
Author : Michael Jang
Publisher : Prentice Hall
Price : Check at Amazon.com
No of pages : 270
Additional Info : Includes 45 days' free access to the Safari online edition of this book
Ratings: 4 stars out of 5

Things I like about this book
  • Each chapter of the book explores a particular tool for achieving patch management in Linux, and the author gives an in-depth explanation of its usage.
  • All Linux users, irrespective of which distribution they use, will find this book very useful for hosting their own local repositories, because the author covers the tools of every major distribution barring Gentoo.
  • The book is peppered with examples and walkthroughs, which makes it an all-in-one reference on the subject of Linux patch management.
  • I especially liked the chapter on apt package management, which explores many useful commands that are seldom used or even known by most people.
Update: This review has been carried on slashdot.org. You can read the comments there.

Friday, 3 February 2006

PC-BSD: A user friendly BSD flavour geared for the desktop

If you ask me how many variants of Unix there are, my obvious answer would be quite a few. Some of them are IRIX, IBM AIX, Sun Solaris, HP-UX, SCO Unix and of course the BSD variants - FreeBSD, NetBSD and OpenBSD.

But over the years a relatively recent upstart called Linux has been successful in stealing the thunder from all the above Unix variants, mainly because Unix has a history of being newbie-unfriendly. It also helped that the GNU movement caught the public's fancy, and Linux being released under the GPL made it unbeatable on price. One of the most talked-about drawbacks of Unix is that it is very hard for a relative neophyte to install. Recently, though, interest has grown in making at least some of the Unix variants more user friendly, and many projects have come up which aim to create a better experience for the end user, both in installing and in using them.

One such project is PC-BSD. As the name indicates, it is a BSD variant and is based on FreeBSD. The aim of the PC-BSD developers is to make it more user friendly and fit for the desktop. Whereas FreeBSD is first and foremost a server operating system, PC-BSD is packaged to bring the legendary stability and security of FreeBSD to the desktop.

Recently I downloaded the PC-BSD image from their website with the intention of trying out this Unix distribution. I booted my machine using the PC-BSD installation CD, and within a short time I was presented with a very beautiful, clean GUI installer. I found that the PC-BSD developers have trodden the same path embraced by the Linux live CD creators: the CD first automatically detects and configures the interfaces and then puts the user into a fully usable desktop - in this case the developers have used Fluxbox (or is it Blackbox?). Immediately after that, the graphical installer starts and the user is led through the installation process which, apart from the partitioning, keyboard, locale and time configuration, is really just copying all the files into the partition of one's choice. At the end of the installation, I was placed in a fully configured KDE desktop.

Fig: PC-BSD Installation in progress

Fig: System configuration dialog

If you have read my past article named Effective Partitioning - The how and why of it, you will know that I already had a primary partition which housed the FreeBSD OS, so I was not faced with the prospect of creating a separate partition and chose to install PC-BSD there. I installed the OS on a Pentium IV 2.0 GHz machine with 256 MB of RAM and on-board sound, and all the hardware was detected automatically by the installer.

PC-BSD comes with the latest version of KDE - version 3.5. Underneath, it is the same FreeBSD operating system, but what makes PC-BSD stand out is the effort that has been put into making it user friendly. For one, the hard-working developers have provided GUI widgets for almost all common system administration tasks such as creating users, configuring the network, and installing and uninstalling programs.

Fig: Configure your network with ease

Fig: GUI for installing and uninstalling programs


Fig: Update manager checks for updates.

Secondly, for installing and uninstalling software, PC-BSD has developed its own method which is similar to that found in Windows. The software is packaged as a monolithic .pbi (PC-BSD installer) file which is downloaded from their repository site. Once the .pbi file is downloaded, double clicking on it opens a GUI which guides the user through the installation process. Uninstalling is a similarly simple affair.

Fig: User manager makes creation of users quite simple

Advantages of PC-BSD
  • A very easy-to-use, GUI-enabled installation process, unlike the text installer found in FreeBSD.
  • Is geared for the desktop user, but with all the power, stability and security of FreeBSD. By the way, FreeBSD's BSD lineage goes back to before Linux was even born, which should give you a fair idea about this OS.
  • Installing and uninstalling software is a point-and-click affair and will gladden the hearts of neophytes and Windows users alike.
  • Bundles two FreeBSD kernels - a single-processor kernel and an SMP-enabled kernel - and the user can easily switch between the two (see the system configuration dialog figure above).
Any drawbacks ?
  • One drawback I found was the limited amount of software available on their site. But I consider that a temporary phenomenon which is bound to change as they continuously add more software.
  • The GUI, even though very clean and intuitive at this stage, is not without its quirks. For example, I installed the bash shell by the point-and-click method, but when I tried to change my default shell to bash from the User Manager widget (see figure above), bash was not available for selection in the drop-down box, and I had to do it the command-line way. I am sure such minor matters will be sorted out as more and more people try out PC-BSD.
  • Some people also point to the obvious bloat in installing software using PC-BSD's click-and-install method, because the common libraries used by the various software packages get duplicated.
    But seriously, for a person who has dedicated a full 80 GB hard disk - or even half or a quarter of that space - to running PC-BSD, it is not a serious issue.
All in all, PC-BSD is an OS with a bright future on the desktop, provided the developers offer a greater variety of software, or at least something equivalent to what is found in the FreeBSD ports.

Thursday, 2 February 2006

Bruce Perens' forecasts for 2006

I have been a Bruce Perens fan for quite some time now, ever since a reader of one of my posts kindly pointed out to me the important part played by Mr Perens in the Open Source movement. For those in the dark (as I once was), Bruce Perens is credited with creating the Open Source Definition. But that is not all; he is also an active proponent of, supporter of and contributor to the open source movement.

So any forecasts coming from such an eminent person should be taken seriously. On his site, Bruce Perens lists five things that he believes will happen in the near future (by the end of 2006). In a nutshell, they are as follows:
  1. Java will be overshadowed by newer entries like Ruby on Rails in the enterprise arena.
  2. Native Linux API will play a dominant role in the embedded market - especially in creating applications that run on cell phones.
  3. Cellular carriers will lose significance and become just a means to an end, and customers will no longer be tied down to a particular cellular company.
  4. Feature phones will prosper only in cities where people heavily use mass transit, and nowhere else.
  5. And lastly, Mr Perens feels that PHP will die a slow death.
There you have it - all five of them. Predictions from no less than a stalwart of the Open Source field.

Wednesday, 1 February 2006

WINE vs Windows XP benchmarks

Wine is a recursive acronym for "Wine Is Not an Emulator", and it is designed to run, on Linux, software which runs natively only on Windows. Specifically, Wine implements the Win32 API (Application Programming Interface) library calls, so one can do away with the Windows OS as such. In the past I have successfully run Photoshop, MS Word and a few Windows games in Linux using Wine. At times I have noticed a remarkable performance improvement when running a particular piece of software under Wine compared with running it on Windows. Now Tom Wickline has posted the results of benchmark tests he carried out comparing Windows XP and Wine.
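For anyone who has not tried it, running a Windows program under Wine is usually no more involved than this (the installer path is just a placeholder):

$ wine /path/to/setup.exe    # run a Windows installer under Wine
$ wine notepad               # launch Wine's built-in notepad to verify the setup works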

The results are rather interesting. Out of a total of 148 tests conducted, Wine leads in 67, including texture rendering, OpenGL, virus scanning, the memory tests and the CPU test suite.

And not surprisingly, Wine lags behind Windows XP in the graphics test suite which uses DirectX instead of OpenGL.

If you are tech savvy and would like to see the finer details of the tests, you should read the benchmark details.