Tuesday 31 January 2006

Create a custom Linux distribution online

In the process of installing different Linux distributions, I have realized that none of them installs every piece of software I want, and in all cases they install a lot of unneeded software too, taking up excess disk space. Of course, I leave out the 'Linux from Scratch' project and 'Gentoo' from what I have said above, but installing those two is beyond the scope of neophytes in Linux.

Wouldn't it be nice if we could pick and choose which software to include in a Linux distribution and, once we commit our choice, an ISO image of the distribution was created on the fly and made available for download?

Well then, you have just got your wish. A website called instalinux aims to provide just that. On their site, you first decide on your choice of Linux distribution and then pick and choose the software that needs to be bundled with it, all from the confines of a web interface. Once you have finished, you are presented with an ISO image of the distribution of your choice for download, containing just the software you have selected. You can choose from Fedora, Debian, SUSE and Ubuntu Linux.

This site is worth a try for people who have a low-bandwidth Internet connection or those looking to re-master their favorite Linux distribution.

Monday 30 January 2006

Which software would you like to see ported to Linux?

I have always wondered about this, not because I am unhappy with the applications currently bundled with Linux, but because, purely from a business point of view, this is a pertinent question. A major reason businesses shy away from embracing Linux is that much of the software they use does not have a Linux equivalent. And some of these applications play a dominant role in their respective markets.

A few come to mind: Adobe Photoshop, Flash MX, Director, Quicken financial software, Shockwave and Adobe Illustrator, to name just some.

Now Novell - a big shot in the Linux distribution market - is running a survey to determine which applications users feel are critical for their enterprises and would like to see ported to Linux. So if there is any Windows-only software on your wish list, this is the right time to visit their site and make yourself heard.

Friday 27 January 2006

Effective Partitioning - The How and Why of it

A few days back, when my non-techie friend came to visit me at home, he was amazed to see me booting into multiple OSes (4 to be exact) on my machine. He then wanted to know how I accomplished this feat. I told him about creating partitions and how these partitions play a vital role in installing multiple operating systems on one's machine. But this conversation with my friend set me thinking: why is there so much fuss about creating partitions? I think the primary reason people face the issue of re-partitioning is that they do not think ahead about their future needs.

Even I have faced this problem of re-partitioning my hard disk many a time. And each time, it was a fine balancing act, and a time-consuming one, of shifting important data from one partition to another, sometimes taking backups, and at times wiping the disk clean and starting afresh.

At present, I run 4 OSes on my machine, namely Windows XP, FreeBSD, Ubuntu Breezy and Gentoo Linux. For the curious, my hard disk is partitioned as follows:

# fdisk -l
Disk /dev/hda: 40.0 GB, 40020664320 bytes
16 heads, 63 sectors/track, 77545 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes

Device Boot      Start       End    Blocks   Id  System
/dev/hda1   *        1     38745  19526976    7  HPFS/NTFS
/dev/hda2        38745     47461   4393336+  a5  FreeBSD
/dev/hda3        47462     48466    506047+  82  Linux swap
/dev/hda4        48466     77536  14651280    5  Extended
/dev/hda5        48467     65902   8787523+  83  Linux
/dev/hda6        65902     77536   5863693+  83  Linux
As you can see above, I have divided my entire 40 GB hard disk into three primary partitions and one extended partition, and the extended partition is further divided into two logical partitions (see figure below).

Fig: A view of my partitioned hard disk

You must be aware that one can create only 4 primary partitions per hard disk. The big question is: why just 4? Why can't we create more than that? The reason lies in the boot sector. On your hard disk, the first 512-byte sector is reserved as the Master Boot Record (MBR), which holds the boot loader along with the partition table. The partition table itself is only 64 bytes long, just enough for four 16-byte entries. So to allow more partitions, a workaround was devised: create an extended partition, whose address is stored in the partition table, and then create any number of sub-partitions (called logical partitions) inside it.

Creating partitions is also dependent on the type of file system and OS used
For example, way back when I used to install Windows 95/NT, I remember having to split the hard disk into partitions of 2 GB each. This was because the file system used at that time, FAT16, could not handle partitions larger than about 2 GB. The newer FAT32 and NTFS file systems do not have this limitation.

Similarly, OSes like Sun Solaris and FreeBSD can be installed only in primary partitions.

Deep thought should be given prior to creating partitions
The major work in creating partitions is deciding how many to create and how much space to allocate to each. If too little is allocated, all the space will fill up in no time. And if too much is allocated, a lot of space will simply sit unused and wasted.

Generally speaking, on Linux/Unix mail servers you put the /var directory in a separate partition. This is because users' mailboxes reside in this directory. If you are allowing the public to create email accounts on your machine, then in no time the /var directory will fill up, eat the remaining space on the hard disk and, in the process, bring the server down. If /var resides in its own partition, a full mail spool cannot starve the rest of the system.
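As a sketch of what this looks like in practice, /var would simply get its own line in /etc/fstab (the device name below is hypothetical; use whatever partition you set aside for it):

```
# /etc/fstab - give /var its own partition (device name is hypothetical)
/dev/hda7   /var   ext3   defaults   0 2
```

With this in place, a runaway mail spool can fill up /var, but it cannot touch the space the rest of the system lives on.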

Should /tmp reside in a separate partition?
A few people believe that the /tmp directory in Linux/Unix should reside in its own separate partition, and there is a good reason for it: the /tmp directory is world writable. If you do a long listing of the /tmp directory ...

$ ls -ld /tmp
drwxrwxrwt 14 root root 4096 2006-01-28 16:32 /tmp
... you will find that the directory has read, write and execute permissions for everyone. Now why is that a bad thing? It is not exactly bad and has its uses. But a hacker (if he knows your IP address and manages to get past your firewall) will be able to upload and execute programs in the /tmp directory, making your system vulnerable. For one, he can upload a tiny program to your /tmp directory which, when run, will create a core dump.

Core dumps are files which can be huge (some reach 1 GB or more); they are generated when a faulty program crashes on a Linux/Unix machine. A core dump can be used by a developer to find out what is wrong with his program, but that is beside the point. One can imagine what will happen if the hacker repeatedly executes his tiny program and creates a series of core dumps on your machine. In no time, all the space on the hard disk will be filled and your system will crash.

Note: I have found that in Ubuntu, the creation of core dumps is disabled by default. So people using Ubuntu can keep the /tmp directory in the same partition as '/'.
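You can check this for yourself: on Linux, core dump creation is governed by the shell's "core file size" resource limit, which the ulimit builtin reports and sets. A quick sketch:

```shell
# Show the current core file size limit.
# A value of 0 means core dumps are disabled.
ulimit -c

# Explicitly disable core dumps for this shell session
ulimit -c 0
```

The setting only applies to the current shell and the processes started from it; distributions set the default in their system-wide shell startup files.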

Another thing I feel positive about is creating a separate partition for the /home directory. This way, one can easily reinstall the operating system or switch distributions without worrying about backups. Moreover, if you are using multiple Linux distributions like I do (see figure above), you can share the same /home directory between them.
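For instance, two distributions can share a home partition simply by mounting the same device at /home in each of their /etc/fstab files (the device name here is hypothetical):

```
# The same line goes in each distribution's /etc/fstab
/dev/hda7   /home   ext3   defaults   0 2
```

One small caveat: different versions of the same application across the two distributions may occasionally disagree over the shared configuration dot files in your home directory.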

How do you create a partition?
There are a variety of tools available, from the ubiquitous fdisk to Ranish Partition Manager, QTParted and cfdisk, right up to commercial products like Partition Magic. But as a long time Linux user, I would recommend learning to use fdisk. Even though its commands are cryptic, once you master it, partitioning becomes a piece of cake. Also, in emergencies you will not be left high and dry for want of your favourite partitioning tool, because fdisk is bundled with all Linux distributions and Unixes alike.

Will partitions ever go out of vogue?
I think we will have to put up with partitioning for some years yet. But with the recent interest in virtualisation technologies (like VMware, Xen, User-mode Linux and QEMU), which allow one to run one operating system on top of another at the same time, and taking into account the increase in processing power and the availability of cheap memory, I feel that in 10 years or so people may opt out of partitioning their hard disks altogether.

Update:
I forgot to mention that the partition housing FreeBSD (see figure above) was originally a FAT32 partition which I was using to share data between Linux and Windows. Since I did not feel like re-partitioning, I opted to convert that FAT32 primary partition to UFS2 in order to install FreeBSD. If you are considering dual booting between Windows and Linux, it is beneficial to have at least a small FAT32 partition to facilitate sharing of data between the two OSes.
Also Read:
Creating and resizing a Logical Volume Manager in Linux

Thursday 26 January 2006

Learn how to use regular expressions the easy way

I am sure many of you will agree with me if I tell you that regular expressions are an integral part of all POSIX operating systems. So what exactly are regular expressions?

Regular expressions are patterns built from ordinary characters and special symbols which, used together, convey a special meaning to tools such as grep, sed and awk. Broadly speaking, there are a few frequently used characters which these tools interpret in a special way. They are as follows:

? - Question mark means the preceding character is repeated at most once; that is, 0 or 1 times.
* - Asterisk means the preceding character is repeated any number of times, including zero.
. - A dot signifies any single character.
^ - The beginning of the line.
$ - The end of the line.
\ - Used when you want to represent a special character literally. For example, to match '*' itself, I can use '\*', which tells the tool not to give the character its special meaning.

For example, the regular expression 'ap?le*' will match the following strings:
aplee
aple
aleee

but not 'apple' or 'appplee' because 'p' can occur only 0 or 1 times.
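You can verify this behaviour yourself with grep's extended syntax (the -E flag, needed here because '?' is an extended operator). The sketch below anchors the pattern with '^' and '$' so each whole line is tested:

```shell
# Feed candidate strings to grep, one per line;
# only the lines matching 'ap?le*' are printed
printf 'aple\naplee\naleee\napple\nappplee\n' | grep -E '^ap?le*$'
```

Running this prints aple, aplee and aleee; apple and appplee are filtered out, since 'p?' allows at most one 'p'.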

Usually, people are a bit confused the first time they have to use regular expressions. I was, the first time I used them.

For those who find learning regular expressions a real chore, there is a very useful utility called kregexpeditor, which is bundled with KDE on Linux. It can be used effectively to get up to speed with regular expressions. See figure below.

Fig: KRegExpEditor interface (Click on picture)

kregexpeditor has a very intuitive interface and contains an inbuilt graphical editor, a verification window and a line edit where one can try out different combinations of regular expressions. For example, try figuring out for yourself what the following regular expression matches ...
\b[A-Z0-9._%-]+@[A-Z0-9-]+\.[A-Z]{2,4}\b 
Hint: Open up kregexpeditor and enter the above regular expression into the line edit box labeled Ascii Syntax. You will get a graphical representation of what this pattern expands to in the upper portion of the editor.
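If you would rather experiment on the command line than in the GUI, the same pattern can be handed to grep. A quick sketch (-i makes the character classes case-insensitive, -o prints only the matching part, and the \b word boundaries are a GNU grep extension):

```shell
# Pull out anything that looks like an email address
# (the address below is, of course, made up)
echo "Contact me at someone@example.com for details" \
    | grep -Eio '\b[A-Z0-9._%-]+@[A-Z0-9-]+\.[A-Z]{2,4}\b'
```

As you may have guessed by now, it is a (simplified) pattern for email addresses.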

Wednesday 25 January 2006

Illustrating desktop Linux distributions through screencasting

If you are the curious type and would like to compare how other Linux distributions package their desktops, then you are in luck. Now you can get a feel (a tiny one, though) for different Linux distributions without installing and trying them out.

LinClips has put together a collection of Flash videos which give a peek into the desktops of different Linux distributions, including the menu layout and the software bundled with them. What is even better is that each clip runs for exactly 2 minutes, which brings it within the reach of people with low-bandwidth internet connections.

Monday 23 January 2006

Festival - A Text to Speech synthesis software in Linux

What is Festival?


Festival is a free text to speech synthesizer developed by the Center for Speech Technology Research at the University of Edinburgh.

It is shipped with most Linux distributions and has been released under an X11-type license allowing unrestricted commercial and non-commercial use alike.

Saturday 21 January 2006

Diverse experiences of a professional Unix/Linux Administrator

Unix/Linux administrators have great stories about their experiences at work, their day-to-day interactions with the Windows admins and the often hilarious situations they face. Here is an excellent post by a professional Unix/Linux administrator who has had many such light-hearted experiences during his interactions with his Windows counterparts.

What makes this article unique is that the author has desisted from showing the Windows administrators in a bad light while at the same time giving it a humorous tinge. A very good read for a light-hearted moment.

Friday 20 January 2006

Optimus keyboard - A path breaking achievement

At times I have got really frustrated trying to remember a shortcut key for a task. This is especially so when I am working in Gimp. I end up either using the mouse or checking the shortcut by hovering the mouse over the tool I intend to use. Things are going to change, though, because the designers at Optimus have developed a unique keyboard. But a keyboard is a keyboard is a keyboard, right? Not quite. What is unique about the Optimus keyboard is that each key is a display unit in its own right. And each key shows the characters appropriate to the application you are using at the time - be it a game, an editor or any other application.

Fig: Full view of the Optimus keyboard


Fig: Close up view

Fig: Optimus keyboard mirroring Photoshop shortcuts.

So in my case, if I am using Gimp, the keys on the Optimus keyboard should show the Gimp shortcuts instead of the normal characters. This makes life easier for people who have trouble remembering keyboard shortcuts, the end result being better utilization of one's working time.

Now the big question is: will it have support for Linux? We can only wait and see, because it is slated for release in February 2006. As for the price, expect it at the higher end, somewhere between $100 and $150.

Thursday 19 January 2006

My dalliance with Gentoo Linux

I am a self-confessed Linux addict. Seriously, I just love to try out new Linux distributions and see how they fare against one another. Now this addiction of mine has its own advantages. For example, it gives me a fair idea of the pros and cons of each distribution, its weaknesses and strengths. And this experience forms the basis on which I decide which should be my dominant distribution at any given time. Not surprisingly, I have reserved a 6 GB ext3 partition specifically for installing and trying out new Linux distributions on my machine.

For quite some time now, I had wanted to install and try out Gentoo. But the experiences of other people showed that I should start installing it only when I had at least a few days to myself. So when I got a few days off the weekend before last, it was Gentoo's turn to be installed on the partition I mentioned earlier.

I dutifully downloaded the ISO image for Gentoo. Actually, Gentoo is all about choice, and even when downloading it the user is faced with choices. Broadly speaking, I had the choice of downloading the Universal installation ISO, which includes all that is needed to get a workable Gentoo onto your hard disk: the stage 3 files (more on these later), the source code for extra applications, and the Gentoo Handbook, which is a must-read if you are to install Gentoo on your machine.

Alternatively, you could download the minimal installation ISO image, which will only configure your network and allow you to connect to the internet, from where you will have to download the additional files, including the stage 3 tarballs. The second option is better if you have a broadband connection of 512 kbps or more, because then you get the latest versions of the packages. But since my line was slower than 512 kbps, I chose to download the Universal installation ISO image and burn it to CD.

Next I booted my machine using the Gentoo Universal CD, and Gentoo automatically detected all my hardware - sound, PCI slots, USB and so on - and dropped me into a root shell.

From here it was a matter of opening the Gentoo Handbook in links (a text-mode web browser) on one terminal and carrying out the tasks listed in the book on another. I had around 5 terminals at my disposal and could easily switch between them using the [Alt+Fn] key combination.

A word about stages in Gentoo
There are three stages in Gentoo. They are as follows:
  1. Stage 1 : In this stage the system must be bootstrapped, and the compilers and essential libraries have to be compiled from source first.
  2. Stage 2 : In a stage 2 tarball, the system has already been bootstrapped for you, but you still have to compile all the packages from source.
  3. Stage 3 : A stage 3 tarball contains all the essential files, like compilers and libraries, in binary form. When you are installing Gentoo on your machine, it is recommended to use the stage 3 tarball unless you have a lot of time to kill. The advantage of stage 3 is that the user is saved the mundane task of starting from the very beginning.
Gentoo uses Portage to install software
Every Linux distribution out there has adopted some form of automation for installing packages on an end user's system. While Red Hat has adopted Yum, Debian-based distros swear by apt-get.

Gentoo too has developed its own way of automating the whole task: Portage. Portage is a Python-based package management system, similar to the ports system found in FreeBSD, which downloads the source of a package and compiles it on the end user's machine. So suppose I want to install AbiWord on my machine running Gentoo; I can achieve it with just one command:
# emerge abiword
The emerge script will download the source of the AbiWord package, along with the sources of any dependency packages, compile them using the parameters I have opted for and install them on my machine. It is as simple as that.

I will not go into the exact steps I took to install Gentoo on my machine. The best and right place for that is the official Gentoo Handbook, which does an excellent job of detailing, in a simple manner, the steps involved in installing this fine distribution. In any case, if you intend to install Gentoo on your machine, you will have to read the book at least once, so there is no point in repeating the steps here.

Gentoo makes heavy use of environment variables. One environment variable you can be sure of dealing with is the USE variable; I found it really unique and useful (no pun intended). Another variable I had to set was CFLAGS, which specifies the architecture and processor make of the machine.

For example, there are a lot of GPLed libraries like gtk, gtk2, qt and so on. If I do not intend to install KDE applications or KDE desktop on my machine, I can easily disable the support for Qt library in my programs at the time of compilation. This I accomplish using the USE variable.
# USE="-qt gnome gtk gtk2" emerge abiword
... will download the source and compile abiword on my machine without any support for Qt but with support included for gnome, gtk and gtk2 libraries.

The environment variables are usually set one per line in the /etc/make.conf file. But as far as the USE variable is concerned, you can also pass values at emerge time, as shown in the example above.
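As a sketch, the relevant section of /etc/make.conf might look like this (the CHOST and CFLAGS values shown are purely illustrative, for a Pentium 4 class machine; pick the ones that match your own CPU):

```
# /etc/make.conf (illustrative values)
CHOST="i686-pc-linux-gnu"
CFLAGS="-O2 -march=pentium4 -pipe"
CXXFLAGS="${CFLAGS}"
# Build without Qt/KDE support, with GNOME/GTK support
USE="-qt -kde gnome gtk gtk2"
```

Anything emerged after this picks the settings up automatically.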

Gentoo helps understand Linux better
One thing I found unique to Gentoo was the knowledge the user gains in the process of installing it on the hard disk. And if you are an experienced Linux user or administrator, it will be revision time well spent. Some of the things you will come to understand better are as follows:
  • Partitioning the hard disk using the fdisk utility.
  • Creating a file system of one's choice on a partition.
  • Turning on swap using the swapon command.
  • Mounting partitions onto a mount point (/mnt/gentoo in my case).
  • Chrooting into a directory or partition.
  • Configuring kernel parameters and compiling and installing the kernel the old-fashioned way (if you are not opting for the stock genkernel, that is).
Advantages of Gentoo
  • After all this trouble, what you get is a Linux distribution which is tailor-made for your machine, blazing fast, with just the packages you need and devoid of unnecessary baggage. In fact, if you liken other Linux distributions to Honda motorcycles, rolled out from a factory assembly line, all looking and working alike, then Gentoo is the Harley-Davidson of Linux distributions; no two installs are identical.
  • Another advantage of Gentoo is its very strong user community, with prompt help and support.
The down side of installing Gentoo
  • Unlike other distributions, Gentoo takes a long time to install. In my case, it took the better part of 4 days to finish installing Gentoo with X and the applications I needed. Coming back to our bike analogy: just as a Harley-Davidson is a fuel guzzler, Gentoo is a time guzzler.
  • To successfully install Gentoo on your machine, an internet connection is mandatory at some point.
  • Finally, you should stock up on a large dose of patience.
A suggestion to the developers of Gentoo
One thing which grabbed my attention was the inordinate time it took to download and install software on the machine. This is especially evident when there are dependency packages to be installed. For example, when I gave the command:
# emerge kde-base/kgoldrunner
It started downloading the first package, unpacking it in a directory, and compiling and installing it; then downloading the second package, unpacking, compiling and installing it; and so on. In effect, it was a download - compile - install cycle, repeated over and over.

The downside of this cycle is that your internet connection has to be maintained at all times until the installation of all the packages is complete.

I feel it would be better if the emerge script were tuned to download all the package sources, including dependencies, in a single stretch first and only then start the compile and install process. That way, once the download is complete, the internet connection no longer matters, and even if it goes down, it will not hamper the installation in any way.
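As a partial workaround that exists today, Portage does have a fetch-only mode which, as far as I can tell, lets you separate the two phases by hand:

```shell
# First fetch the sources for the package and all its dependencies...
emerge --fetchonly abiword

# ...then compile and install; the network is no longer needed
emerge abiword
```

It still takes two commands instead of one, but once the first finishes, a dropped connection can no longer interrupt the build.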

Tuesday 17 January 2006

Why should someone use Debian over another distribution?

"I think the reason I run Debian rather than anything else is because it's as true to the Unix philosophy as it can get: each of the system components does what it should, does that as well as it can, and otherwise keeps out of your way. Nothing on the system happens without your consent. And because everything is designed with little pieces building on top of each other, it's easy to keep an overview. This directly translates into manageability and security. In other words, you control the system, and not the other way around."

This was the reply given by Martin F. Krafft, the author of the best-selling book "The Debian System", in an interview with Sal Cangeloso when this question was posed to him. The interview also touched on various other aspects of the Debian system, like the importance of the Debian policy to the final product, the path the author would like to see Debian take in the future, and more.

The questions have been well thought out and the replies are quite insightful which makes reading the whole interview a worthwhile exercise.

Monday 16 January 2006

Print a large banner on your terminal

I still remember clearly those days when I was taking a short-term course in Unix - the flavour being SCO Unix ver 5.0. The first command the instructor introduced us to was 'banner'.

'banner' is a command which prints a large text banner on the system console; if you have a printer connected to your machine, you can also redirect the output to it. This utility is available on all Linux/Unix platforms.

For example, to print my name as a large banner, I give the command as follows:
$ banner -w 60 Ravi
The above command prints my name on the console with a width of 60 characters. If the -w option is omitted, the name is printed at the default width of 102 characters. The character used to draw the letters is '#'.

Fig: My name printed on the terminal
You can also redirect the output to a printer as follows:
$ banner -w 60 Ravi > /dev/lp0
We had great fun printing all kinds of text on the console using the banner command. In some ways, it is a pity that nowadays the first thing new Linux or Unix users do is check out the GUI or the games installed.

UWIN - Unix for Windows

There are times when you are forced to use a Windows machine and there is no way of getting your hands on a PC running Linux. This situation is common if your office PCs all run Windows and company policy forbids you from installing an alternate OS. And you feel your productivity is severely hampered because certain tasks - which could easily be accomplished using the plethora of command line tools in Linux - have no easy solution in Windows.

This is where UWIN comes into the picture. UWIN, or Unix for WINdows, is developed and released by AT&T Laboratories and David Korn, the creator of the Korn shell. UWIN basically consists of a set of tools and libraries which help application developers compile and run Unix applications natively on Windows. The tools include a complete shell (the Korn shell) for Windows, bundled with all the command line tools you find in Linux/Unix. UWIN is not a new development; in fact, it has been around for a long time, and AT&T even had a tie-up with Wipro Technologies (a foremost IT firm in India) to sell the package for commercial use. The tie-up has since been dissolved.

UWIN comes with its own Unix compiler, 'cc', but developers can also use Visual C++ or mingw (a Windows port of gcc) to compile Unix applications. Other software bundled with it includes development files such as libraries, groff, Perl and the X Window System libraries. One thing UWIN lacks, though, is an X server. So if you want to run GUI applications in UWIN, you may have to use a third party X server.

What I like most about UWIN is the Korn shell bundled with it. Considering the sub-standard DOS shell you get in Windows, having a Unix shell is a real time saver for people who know some Unix but are compelled to work in Windows.

Fig: korn shell on Windows

Installing UWIN
UWIN comes with its own installer, so installing it on Windows is similar to installing any other Windows software.

Fig: UWin installation in progress


Fig: Another screenshot of UWIN installer

To have a complete UWIN system, you have to download eight setup files, namely:
  • uwin-base - Which contains the UWIN runtime, korn shell, daemons, services and over 100 command line tools you find in Unix/Linux.
  • uwin-dev - Contains the compiler 'cc', make, as well as the libraries needed to compile the Unix applications for running natively on Windows.
  • uwin-groff
  • uwin-perl - Perl language packages so you can run perl programs.
  • uwin-terminfo
  • uwin-xbase - X windows base packages needed to run X applications (provided you have an X server running).
  • uwin-xdev - X windows libraries needed to develop X applications.
  • uwin-xfonts - All necessary fonts for X windows.
So altogether it will be about a 50 MB download. If you are only interested in the Korn shell, installing just the uwin-base package is sufficient.

The installation went without a glitch for me, and at the end the installer created a shortcut to the shell on my Windows desktop. Double-clicking it opened up the Korn shell, with all the Unix command line tools at my fingertips.

Salient features of UWIN
  • Access to almost all the command line tools of Unix on Windows - 245 command line tools, to be exact.
  • Comes bundled with the original Unix compiler 'cc' as well as a plethora of tools like 'make' and the necessary libraries, which allow Unix applications to be built and run on Windows machines with little or no change to the source code.
  • Option to use other compilers like Visual C++ or mingw to compile programs.
  • A full-fledged Perl package.
  • X Window System libraries for those who aspire to develop X applications on Windows. To run those applications, though, you need an X server, which is not bundled with UWIN.
  • UWIN comes with a control panel applet (accessed through 'Start->Settings->Control Panel->UWIN') which can be used to configure some of the UWIN system parameters.
Uses of UWIN
  • Run Unix applications natively on Windows at full speed.
  • Use the full power of Unix command line tools on Windows.
  • The korn shell bundled with UWIN makes a Unix user feel right at home in a Windows environment.
  • Develop and run UNIX applications on Windows.
  • Develop X applications on the Windows platform.
Drawbacks of UWIN
  • UWIN does not come bundled with an X server, so a user will not be able to run X applications on Windows out of the box. There are, however, third party commercial X servers available which can fill this gap.
  • UWIN is not released under the GPL, but it is free to download and use for educational and non-commercial purposes.

Saturday 14 January 2006

Book Review : Linux Quick Fix Notebook

I am always on the lookout for good books on Linux which cover system and network administration topics. So when I came across one of the books in Bruce Perens' Open Source Series called "Linux Quick Fix Notebook", authored by Peter Harrison, I gave it a shot. At first I was a bit skeptical, owing to my past experiences with books that claim to be all-rounders in their field but in reality cover the topic only in shallow depth. But after flipping through a few pages, I marvelled at the sheer amount and depth of coverage of system and network administration topics.


This book positions itself as a system/network administrator's guide to configuring and fixing various issues faced in Linux, and so is a bit short on theory, but it more than makes up for that by taking a hands-on approach to tackling these problems. It has everything from basic configuration and troubleshooting to advanced security and optimization techniques.

Would you like to set up your own wireless network? You can do it with ease by following the 13th chapter, titled "Linux Wireless Networking". Or how about creating a complete website using the Apache web server after setting up DNS, DHCP et al? The author takes you through all these topics and more in this well-structured book. The book is divided into three parts, namely:
  1. Linux File Server Project - gives a sound base in introductory networking topics: a complete chapter on subnetting IP addresses (which, interestingly, I have not seen covered in any other Linux book), setting up a NIC on Linux, simple network troubleshooting, three chapters on Samba, configuring a DHCP server and more. This part comprises 13 chapters.
  2. Linux Website Project - This is the second part of the book which contains 11 chapters and covers all the topics even remotely related to website hosting like setting up a firewall using iptables, setting up servers like FTP server, Apache web server, NTP server and mail server (sendmail), Monitoring server performance using MTRG and configuring DNS (both static and dynamic) and ...
  3. Advanced Topics - This forms the third part of this book and contains 11 chapters. This part deals exclusively with those topics in networking and system administration which are niche areas. These include a chapter on network based Linux installation, how to configure RAID (Redundant Array of Inexpensive Disks), managing disk usage through quotas, configuring NFS, NIS, LDAP and RADIUS, setting up a proxy server using squid, a chapter on Virtual Private Networking, modifying the kernel parameters to improve its performance and basic MySQL configuration details.
That is not all; the author has also included 4 appendices, which can be considered chapters in their own right. They cover topics that did not fit anywhere else in the book, such as using TCP wrappers to improve security; codes, scripts and configurations; examples of zone files for DNS configuration; and so on. The author has also included a short section on Cisco routers and PIX firewalls, explaining how to back up the configuration files of these hardware routers to a remote Linux server.

The Book at a glance
Because the book has over 35 chapters and 4 additional appendices, I will list the essential topics covered in this book instead of giving a chapter-by-chapter listing.
  • Software Installation
  • Network setup and troubleshooting
  • Samba for Windows files on Linux servers
  • Linux-to-Linux file sharing with NFS
  • Simple MySQL database administration
  • LDAP and NIS for centralized logins
  • FTP and SCP file transfers
  • Disk data redundancy with software RAID
  • Wireless Linux networks
  • MRTG server performance monitoring
  • Linux firewalls and VPNs
  • Squid for Web access control
  • Mail, Web, and DNS server setup
  • Websites on DHCP Internet links
  • Time synchronization with NTP
  • Error reporting with syslog
  • Restricting users' disk space usage with quotas
Book Specifications
Name : Linux Quick Fix Notebook
ISBN No: 0-13-186150-6
Author : Peter Harrison
Publisher : Prentice Hall (Professional Technical Reference)
No of Pages : Over 650
Price : Check at Amazon
Rating : Excellent

A word about the Author
Peter Harrison, the author of this book, is a principal network engineer for a major Web-hosting company with over 20 data centers throughout North America and Europe. He has worked extensively with Linux in mission-critical, high-availability environments used by a number of Fortune 500 companies. He was the founding president of PCJAM - Jamaica's first computer user group - and was the principal systems engineer responsible for computerizing the island's tax collection and social security systems. He is a Cisco Certified Internetwork Expert (CCIE).

Things I liked about this book
  • As the name indicates, this book is truly a hands-on reference for any system or network administrator dealing with Linux. Each topic is covered through practical, how-to style examples, and all you have to do is execute the commands exactly as they are given in the book.
  • The topics are explained in a simple and lucid manner keeping complex jargon to a minimum.
  • The chapters have a logical flow of information starting with concise backgrounders and ending with a troubleshooting section.
  • Each chapter is a complete entity in its own right, so it is not necessary to read the book from cover to cover. You can easily jump to the section you are interested in and carry on from there. For example, if you are interested in configuring RAID on your machine, you can flip directly to the chapter dealing with RAID (chapter 26) and start reading from there, saving a lot of time.
  • Configuring and setting up the various packages and services is explained using the command line, which I believe is a plus point as far as system and network administration in Linux is concerned.
  • This book addresses the real issues that systems administrators encounter when working with Linux.
The only thing I found a bit of a drawback was that the author uses the Red Hat way of configuring things throughout the book. So starting and stopping services is explained using the 'chkconfig' and 'service' scripts, installing packages is explained using 'rpm', and so on. A person using a non-Red Hat distribution will feel a bit disoriented at first.

But that does not bring down the value of the book at all. Taking into consideration the sheer number of topics covered and their depth, it is a book worthy of being a companion to any professional Linux system or network administrator.

Benchmark : Performance comparisons of various Filesystems under Linux

If you are asked to list the types of filesystems supported in Linux, I am sure you will be able to name quite a few of them. But when it comes to listing the pros and cons of these filesystems with respect to performance, one is not so sure, right? This may be because not many have benchmarked these filesystems and published their findings online.

Now things have changed, though, because Justin Piszcz has written a two-part series listing his findings on how each filesystem fared performance-wise under Linux. He has published his findings for Linux kernel 2.4 as well as 2.6.

The filesystems included in the benchmark were EXT2, EXT3, JFS, ReiserFS v3, ReiserFS v4 and XFS. To arrive at a conclusion, he performed 17 different tests, which included creating thousands of files on the filesystem and cat'ing a huge file (1 GB) to /dev/null, among other things.
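To give a rough idea of what tests of this kind look like, here is my own sketch (not Justin Piszcz's actual benchmark script) that times the creation of 1000 small files and the reading of a large file back to /dev/null. The file names and sizes are arbitrary and scaled down for illustration:

```shell
# Time the creation of 1000 small files in a scratch directory.
mkdir -p fs_bench && cd fs_bench
time sh -c 'for i in $(seq 1 1000); do echo "some data" > file_$i; done'

# Create a 100 MB file of zeros, then time reading it back into
# /dev/null - similar to the "cat a huge file to /dev/null" test.
dd if=/dev/zero of=bigfile bs=1M count=100 2> /dev/null
time cat bigfile > /dev/null
```

To benchmark a real filesystem you would run this on a freshly mounted partition of that type and clear the page cache between runs; otherwise the second read comes straight from RAM.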

So what is the verdict ?
The author has concluded that XFS is the best filesystem in terms of performance and scalability, though JFS has improved in some of the tests. And the surprise of it all: ReiserFS is the slowest in most of the tests. But I think it would be prudent to read the article first and arrive at your own conclusions.

Thursday 12 January 2006

Creating and compiling Qt projects on the command line

Anyone who has used KDE will know that most of KDE's user interface, as well as the KDE-specific applications, has been developed using the Qt toolkit. Qt consists of a set of C++ libraries which aid the developer in creating robust, well-designed GUI applications in the shortest possible time. Qt is developed by a Norwegian company called Trolltech and is released under two separate licences: a GPL licence for the Linux and Windows platforms, and a commercial licence that Trolltech sells to developers who want to use the library to create proprietary applications.

The advantages of Qt
  • Cross platform support
  • Free if you are creating GPLed applications.
  • Much lower complexity than other libraries such as MFC.
  • A unique way for user interface components to communicate, using the signal-slot mechanism.
  • A rich collection of ready made widgets which reduce the development time drastically.
There are good GPLed UI designers available for the Linux platform, like Qt Designer, KDevelop Designer, Kommander Editor and so on, which let you design applications visually, just as you would with Visual Basic on Windows.

Here is a simple method of creating a Qt project using command line:

The listing below shows a simple program which, when compiled and run, will display a label with the words "Linux is wonderful" inside its own window. If you want to try it out, copy the code shown below into a text editor and save it as test.cpp (you can give it any name).
#include <qapplication.h>
#include <qlabel.h>

int main(int argc, char *argv[])
{
    // Every Qt application needs exactly one QApplication object.
    QApplication app(argc, argv);

    // Create a label with no parent widget (0), make it the
    // application's main widget and show it.
    QLabel *label = new QLabel("Linux is wonderful", 0);
    app.setMainWidget(label);
    label->show();

    // Enter the event loop; exec() returns when the window is closed.
    return app.exec();
}

The .cpp extension tells us that it is a C++ file. For the uninitiated: you compile C++ programs using the g++ compiler, which is installed by default in most Linux distributions. But more commonly, you use a file named Makefile which directs the compiler to build your programs. Then all you have to do is move into the directory containing the Makefile and your program, and execute the command make.

$ make
So to compile the above program, you have to create a Makefile first. Qt has an easy way of generating a Makefile. This is how you do it.

First, move into the directory containing your code - in our case test.cpp.
While in this directory, create a Qt project as follows:
$ qmake -project
This will create a project file (named after the current directory - for example test.pro) and include our program test.cpp in it.

Now execute the following command to create a platform-specific Makefile:

$ qmake test.pro
At this stage, if you list the contents of the directory, you will find a file named Makefile.

To compile our program, it is now as simple as running the make command.

$ make
After running make, you will find an executable file named test in your directory which, when run, will display the label in a window.

$ ./test
This is one way of compiling a simple project, and it gives us an idea of how the compilation takes place. For complex projects, you usually use specialised IDEs like KDevelop which automate the creation of the project and the compilation.

Monday 9 January 2006

Xfce 4.2 - A lightweight window manager heavy in features

The first time I used Xfce was when I tried out the Belenix Live CD. Xfce was the only window manager bundled with it so I had no choice but to use it though my personal preference was Fluxbox. But after playing around in it for some time, I just couldn't stop admiring the usability and design of Xfce as well as the responsiveness of the applications when run in it.

So the first thing I did was install it in Linux and take it for a test drive. And the things I found out were really interesting. For one, Xfce is not just another window manager; it is a desktop environment in its own right. It comes bundled with applications like its own lightweight X terminal, a file manager, desktop configuration utilities, a lightweight mail client, a media player and optional utilities like a calendar - similar to those that pop up in KDE and Gnome when you click on the clock in the panel - and a very cool lightweight text editor.

Fig: Xfce Desktop - Note the highlighted menu and the calendar

But what sets Xfce apart from the more popular heavyweights like Gnome and KDE is its very low memory footprint. In fact, in the developers' own words, the aim of Xfce is to be a simple, light and efficient environment which is easy to use and configure, stable, fast and at the same time visually appealing - not to mention a clean desktop. Interestingly, I found out that the desktop is a separate utility which goes by the name xfdesktop, and the user has the option of not running it in Xfce if he so chooses.

Fig: You can configure most of the features from the Xfce settings manager

Another aspect which endeared this lightweight window manager cum desktop to me is that when you install or uninstall any software in Linux, the menus in Xfce are automatically updated to mirror the change - a convenient feature lacking in other lightweight window managers, including popular ones like Fluxbox.

In my opinion, it is a good idea to install another lightweight file manager called 'Rox' alongside Xfce, as I believe it integrates quite well with the Xfce desktop.

Fig: Rox file manager in action

If you are using a Debian-based Linux distribution, installing Rox is as simple as executing the command:
# apt-get install rox
I recommend using the Rox file manager with Xfce because it is quite easy to associate file types with the applications of your choice in Rox, and it is blazing fast.

Software bundled with Xfce
  • xffm - The lightweight file manager
  • xfdesktop - Enables menus to pop up when you right-click on the desktop.
  • xfce-settings-show - Xfce settings manager, where you configure the desktop settings, keyboard, mouse, xfpanel.
  • xfce4-panel - The Xfce4 panel.
  • xfce4-menueditor - A useful widget to edit the menu entries in Xfce4.
  • xfmail - A lightweight mail client.
  • xfmedia - A utility to play audio files.
  • xfcalendar - This is an optional widget which gives a compact monthly calendar similar to those found in KDE and Gnome.
  • xflock4 - Activates the screensaver and locks the display.
  • xfrun4 - Run application command dialog box. Can be activated by using key sequence [Alt+F2] .
  • xfhelp4 - Opens the Xfce4 documentation in the default web browser.
  • xfce4-terminal - This is actually a script which runs the ubiquitous xterm but with better configuration. See this link for more details.
  • mousepad - A lightweight text editor (has to be installed separately).

Fig: xfmail - a lightweight email client


Fig: A beautiful Xterminal

And if you usually boot into run level 2 or 3 and then start a window manager using the startx command, you can make Xfce the default window manager by creating a hidden file named .xinitrc in your home directory and entering the command startxfce4 in it, as follows:
$ touch .xinitrc
$ echo "startxfce4" > .xinitrc

$ startx
One thing that I found really annoying, though, is that when I start Nautilus (the Gnome file manager), it overlaps my Xfce desktop and I stop getting the Xfce menus when I right-click on the desktop. I found a workaround: starting Nautilus with the --no-desktop flag circumvents this problem.
$ nautilus --no-desktop
After using this lightweight window manager (version 4.2.2) for a week now, I am so impressed by it that I have made it my default window manager in Linux.

Friday 6 January 2006

Input/Output redirection made simple in Linux

Linux follows the philosophy that everything is a file. A keyboard, monitor, mouse, printer... you name it, and it is classified as a file in Linux. Each of these pieces of hardware has a unique device file associated with it. This nomenclature has its own advantages, the main one being that you can use all the common command line tools in Linux to send, receive or manipulate data with these devices.

For example, my mouse has the device file '/dev/input/mice' associated with it (yours may be different).

So if I want to see the output of the mouse on my screen, I just enter the command :
$ cat /dev/input/mice
... and then move the mouse to get characters on the terminal. Try it out yourselves.

Note: In some cases, running the above command will scramble your terminal display. If that happens, you can type the command:
$ reset
... to get it corrected.

Linux provides each program that runs on it access to three special files: standard input, standard output and standard error. These have the file descriptors 0, 1 and 2 respectively. In the previous example, the utility 'cat' uses standard output - which by default is the screen or console - to display its output.
  • Standard Input - 0
  • Standard Output - 1
  • Standard Error - 2
Redirecting output to other files

You can easily redirect input/output to a file other than the default one. This is achieved in Linux using the input and output redirection symbols, which are as follows:
> - Output redirection
< - Input redirection
Using a combination of these symbols and the standard file descriptors you can achieve complex redirection tasks quite easily.
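For instance (the file names here are just examples), prefixing the redirection symbol with a descriptor number lets you split a command's normal output and its error output into two different files:

```shell
# 'ls' is given one file that exists and one that does not.
# fd 1 (stdout) goes to out.txt, fd 2 (stderr) goes to err.txt.
touch exists.txt
ls exists.txt no_such_file.txt 1> out.txt 2> err.txt || true

cat out.txt   # the listing for exists.txt
cat err.txt   # the 'No such file or directory' message
```

The `|| true` is only there to keep the non-zero exit status of ls (it could not find one of the files) from aborting a script run under 'set -e'.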

Output Redirection
Suppose I want to redirect the output of 'ls' to a text file instead of the console. I achieve this using the output redirection symbol as follows:
$ ls -l myfile.txt > test.txt
When you execute the above command, the output is redirected to a file named test.txt. If the file 'test.txt' does not exist, it is automatically created and the output of the command 'ls -l' is written to it. This assumes that a file called myfile.txt exists in the current directory.

Now let's see what happens when we execute the same command after deleting the file myfile.txt.
$ rm myfile.txt
$ ls -l myfile.txt > test.txt
ls: myfile.txt: No such file or directory -- ERROR
What happens is that 'ls' does not find the file named myfile.txt and displays an error on the console. Now here is the fun part: you can redirect the error generated above to another file instead of displaying it on the console, by using a combination of the error file descriptor and the output redirection symbol as follows:

$ ls -l myfile.txt 2> test.txt
The thing to note in the above command is '2>' which can be read as - redirect the error (2) to the file test.txt.

Fig: Two open xterms can be used to practice output redirection.

I can give one practical use for this error redirection which I rely on regularly. When I am searching the whole hard disk for a file as a normal user, I get a lot of errors such as:
find: /file/path: Permission denied
In such situations I use the error redirection to weed out these error messages as follows:
$ find / -iname \* 2> /dev/null
Now all the error messages are redirected to the /dev/null device, and I get only the actual find results on the screen.
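Another combination worth knowing (using hypothetical file names again) sends both streams to one place: '2>&1' means "send stderr to wherever stdout currently points", so it must come after the '>' redirection:

```shell
# Capture both normal output and errors of a command in a single file.
# Note the order: '> all_output.txt' first, then '2>&1'.
touch exists.txt
ls exists.txt no_such_file.txt > all_output.txt 2>&1 || true

cat all_output.txt   # contains both the listing and the error message
```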

Note: /dev/null is a special kind of file in that its size is always zero, so whatever you write to it just disappears. The opposite of this file is /dev/zero, which acts as an infinite source of zero bytes. You can use /dev/zero, for example, to create a file of any size - such as when creating a swap file.
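As a sketch of that swap-file idea (the size and file name are arbitrary, and actually enabling the swap would additionally require mkswap and swapon as root):

```shell
# Pull 64 MB of zero bytes out of /dev/zero into a file.
dd if=/dev/zero of=swapfile bs=1M count=64

# The file is exactly 64 * 1048576 = 67108864 bytes.
ls -l swapfile

# To actually use it as swap (as root):
#   mkswap swapfile && swapon swapfile
```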

Suppose you have a line printer connected to your Linux machine, and let's say its device file is /dev/lp0. Then you can send any output to the printer using output redirection. For example, to print the contents of a text file, I do the following:

$ cat testfile.txt > /dev/lp0
Input Redirection
Input redirection uses the less-than symbol and is usually applied to a program which accepts user input from the keyboard. A legendary use of input redirection that I have come across is mailing the contents of a text file to another user.

$ mail ravi < mail_contents.txt
I say legendary because now, with the advances in GUIs and the availability of good email clients, this method is seldom used.

Suppose you want to find the number of lines, words and characters in a text file, and at the same time you want to write the result to another file. This is achieved using a combination of the input and output redirection symbols as follows:

$ wc < my_text_file.txt > output_file.txt
What happens above is that the contents of the file my_text_file.txt are passed to the command 'wc', whose output is in turn redirected to the file output_file.txt.

Appending data to a file
You can also use the >> symbol instead of the output redirection symbol to append data to a file. For example,
$ cat - >> test.txt
... will append whatever you type to the file test.txt (press Ctrl+D on a new line to finish).

Thursday 5 January 2006

BIOS - Basic Input Output System

You take any PC out there, and you can be sure that it houses a BIOS chip. BIOS is an acronym which stands for Basic Input Output System. Various companies like Award, Phoenix and the like manufacture BIOS chips, and your machine will most probably be using a BIOS chip from one of these companies. This ubiquitous chip is the starting point of the booting process in your computer, which eventually loads a fully functional operating system. If there is no BIOS chip, your machine will not boot - irrespective of the type of OS you are using (this might change in the future when flash memory gets cheaper and its use becomes more prevalent).

Knowing how to set the parameters in a BIOS becomes important when you want to tweak the performance of your CPU or change the boot sequence to boot from the CD ROM rather than the hard disk.

Andreas Winterer has written an in-depth article on the BIOS called "BIOS A to Z", which gives a well-rounded view of the BIOS chip. The article is divided into 4 broad parts, namely:
  • The Basics - Which introduces the reader to the different BIOS versions, how to access your BIOS menu, manipulating the BIOS settings, a brief overview of the various menus of a typical BIOS setup program and also how to come out of the BIOS session after you have made the changes to the settings.
  • Key Settings - This section deals with how you can change the boot priority order on your computer, start a desktop PC with a key press or a mouse click, activate support for USB 2.0, and handle problems with fans or hardware changes.
  • BIOS Tuning - I am sure you have heard stories of how some people overclock their processors. If you are curious about how they achieve it, this section throws light on the subject. Here the author explains how to speed up system boot to the maximum, accelerate graphics cards, make the fullest use of your CPU power, tune motherboard chipsets and squeeze more performance out of your RAM.
  • BIOS Update - From time to time, your BIOS chipset manufacturer will release newer versions of their BIOS programs. These updated programs will optimize existing hardware and may also introduce new functions. This section gives the steps to be followed in successfully updating your BIOS program to the latest version. This is popularly known as flashing your BIOS.
Finally, the author ends this very complete article by listing 5 golden rules to follow to help stay out of trouble while playing with BIOS settings.

This guide is a must read for all people interested in knowing more about the BIOS and how to tweak its settings to achieve better hardware performance.

Wednesday 4 January 2006

10 things you should know about every Linux installation

I still remember the first time I had to use a non-Windows operating system. I couldn't make head or tail of how to navigate the file hierarchy. It was so different from the workings of Windows that I had to overcome a steep learning curve. But once I mastered the Linux way of working and its file hierarchy, I felt that it was (and is) a robust operating system with very many advantages over Windows.

Jeffery G. Thomas has written an insightful article which details the ten things one should know about every Linux installation.