Friday 31 March 2006

Enabling and disabling services during start up in GNU/Linux

In any Linux distribution, some services are enabled to start at boot by default. For example, on my machine, pcmcia, the cron daemon and the postfix mail transport agent, just to name a few, start during boot-up. It is usually prudent to disable any services that are not needed, as they are potential security risks and needlessly consume hardware resources. For example, my machine does not have any pcmcia cards, so I can safely disable that service. The same goes for postfix, which is also not used.

So how do you disable these services so that they are not started at boot time?

The answer depends on the Linux distribution you are using. True, many Linux distributions, including Ubuntu, bundle a GUI front end that makes it easy to enable and disable system services. But there is no standard GUI utility common to all Linux distributions, and that makes it worthwhile to learn how to enable and disable services via the command line.

One thing, however, is common to all Linux distributions: the start-up scripts are stored in the /etc/init.d/ directory. So if you want to, say, enable the apache webserver in different run levels, there should be a script related to it in /etc/init.d/, usually created at the time the software is installed. On my machine (which runs Ubuntu), it is named apache2, whereas on Red Hat it is named httpd. Usually, the script has the same name as the process or daemon.

Here I will explain different ways of enabling and disabling the system services.

1) Red Hat Method

Red Hat and Red Hat-based Linux distributions use a script called chkconfig to enable and disable system services in Linux.

For example, to enable the apache webserver to start in certain run levels, you use the chkconfig script to add it and then turn it on in the desired run levels as follows:
# chkconfig --add httpd
# chkconfig --level 235 httpd on
This will enable the apache webserver to automatically start in the run levels 2, 3 and 5. You can check this by running the command:
# chkconfig --list httpd
One can also disable the service using the off flag, and remove it altogether, as shown below:
# chkconfig httpd off
# chkconfig --del httpd
Red Hat also has a useful script called service which can be used to start or stop any service. Taking the previous example, to start the apache webserver, you execute the command:
# service httpd start
and to stop the service...
# service httpd stop
The options, start, stop and restart, are self-explanatory.
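The service script is essentially a convenience wrapper around the scripts in /etc/init.d/, so on distributions that lack it you can invoke the init script directly. Below is a minimal sketch of the start/stop/restart interface such scripts implement; the stand-in script and its /tmp path are purely illustrative, not the real Red Hat httpd script:

```shell
# Create a toy init-style script that honours start/stop/restart,
# the same interface the scripts in /etc/init.d/ implement
cat > /tmp/init-demo <<'EOF'
#!/bin/sh
case "$1" in
    start)   echo "starting demo" ;;
    stop)    echo "stopping demo" ;;
    restart) "$0" stop; "$0" start ;;
    *)       echo "usage: $0 {start|stop|restart}" ;;
esac
EOF
chmod +x /tmp/init-demo

# Calling the script directly is the moral equivalent of "service demo restart"
/tmp/init-demo restart
```

On a real Red Hat system, running /etc/init.d/httpd restart achieves much the same thing as service httpd restart.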

2) Debian Method

Debian Linux has its own script for enabling and disabling services across runlevels, called update-rc.d. Going by the above example, you can enable the apache webserver as follows:
# update-rc.d apache2 defaults
... this will enable the apache webserver to start in the default run levels 2, 3, 4 and 5. Of course, you can also specify the run levels explicitly instead of using the "defaults" keyword, as follows:
# update-rc.d apache2 start 20 2 3 4 5 . stop 80 0 1 6 .
The above command creates the symlinks in the respective /etc/rcX.d directories to start or stop the service in the designated runlevels, where X is a value from 0 to 6 corresponding to the runlevel. Note the dot (.) that terminates each set; it is required. The numbers 20 and 80 are sequence codes which decide the order of precedence in which the scripts in the /etc/init.d/ directory are started or stopped.

And to disable the service in all the run levels, you execute the command:
# update-rc.d -f apache2 remove
Here the -f option, which stands for force, is mandatory when the script is still present in /etc/init.d/.

But if you want to enable the service only in runlevel 5, you do this instead:
# update-rc.d apache2 start 20 5 . stop 80 0 1 2 3 4 6 .

3) Gentoo Method

Gentoo also uses a script to enable or disable services during boot-up, called rc-update. Gentoo has three default runlevels, namely boot, default and nonetwork. Suppose I want to add the apache webserver to start in the default runlevel; then I run the command:
# rc-update add apache2 default
... and to remove the webserver, it is as simple as :
# rc-update del apache2
To see all the running applications at your runlevel and their status, similar to what is achieved by chkconfig --list, you use the rc-status command.
# rc-status --all

4) The old-fashioned way

I remember when I first started using Linux, there were no such scripts to aid the user in enabling or disabling services during start-up. You did it the old-fashioned way, which was creating or deleting symbolic links in the respective /etc/rcX.d/ directories, where X in rcX.d is a number which stands for the runlevel. There can be two kinds of symbolic links in the /etc/rcX.d/ directories. The first kind has a name starting with the character 'S', followed by a number between 0 and 99 denoting the priority, followed by the name of the service you want to enable. The second kind has a name starting with 'K', followed by a number and then the name of the service you want to disable. In any runlevel, at any given time, each service should have only one symlink of the 'S' or 'K' variety, but not both.

So taking the above example, suppose I want to enable apache webserver in the runlevel 5 but want to disable it in all other runlevels, I do the following:

First to enable the service for run level 5, I move into /etc/rc5.d/ directory and create a symlink to the apache service script residing in the /etc/init.d/ directory as follows:
# cd /etc/rc5.d/
# ln -s /etc/init.d/apache2 S20apache2
This creates a symbolic link in the /etc/rc5.d/ directory which the system interprets as: start (S) the apache service before all services that have a priority number greater than 20.

If you do a long listing of the directory /etc/rc5.d in your system, you can find a lot of symlinks similar to the one below.
lrwxrwxrwx  1 root root 17 Mar 31 13:02 S20apache2 -> ../init.d/apache2
Now if I start a service, I will want to stop it when rebooting, when moving to single-user mode, and so on. So in those runlevels I have to create symlinks starting with the character 'K'. Going back to the apache2 example, if I want the service stopped automatically when the system goes into runlevel 0, 1 or 6, I have to create the following symlink in each of the /etc/rc0.d/, /etc/rc1.d/ and /etc/rc6.d/ directories:
# ln -s /etc/init.d/apache2 K80apache2
One interesting aspect here is the priority: the lower the number, the higher the priority. Since apache2 has a starting priority of 20, meaning apache starts well ahead of other services during startup, we give it a stopping priority of 80. There is no hard-and-fast rule for this, but the usual convention is as follows:

If you have 'N' as the priority number for starting a service, you use the number (100-N) for the stopping priority number and vice versa.
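The whole convention can be captured in a few lines of shell. The sketch below computes the stop priority from a start priority with the 100-N rule and lays out the S/K links; the paths are deliberately under /tmp so nothing on the real system is touched, whereas on a live system you would work in the actual /etc/rcX.d/ directories:

```shell
#!/bin/sh
# Derive the stopping priority from the starting priority (100 - N),
# zero-padded to two digits as used in the link names
stop_priority() {
    printf '%02d' $((100 - $1))
}

SERVICE=apache2
START=20
STOP=$(stop_priority $START)    # yields 80

# Scratch copies of the runlevel directories (illustrative only)
ROOT=/tmp/rc-demo
mkdir -p $ROOT/init.d $ROOT/rc5.d $ROOT/rc0.d $ROOT/rc1.d $ROOT/rc6.d
: > $ROOT/init.d/$SERVICE       # stand-in for the real init script

# S link in runlevel 5; K links in the halt, single-user and reboot levels
ln -sf ../init.d/$SERVICE $ROOT/rc5.d/S${START}${SERVICE}
for level in 0 1 6; do
    ln -sf ../init.d/$SERVICE $ROOT/rc${level}.d/K${STOP}${SERVICE}
done

ls $ROOT/rc5.d $ROOT/rc0.d
```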

Wednesday 29 March 2006

Throwing Light on DRM

Digital Rights Management, popularly known as DRM, is a technology that can be used to restrict or control how, and through what medium, content can be viewed by a user. It forms the basis by which the media industry hopes to implement and safeguard its hold on the copyright of the content it generates, be it music, video or any piece of software. Not surprisingly, RMS and his ilk consider DRM a threat to the very ideology and freedom for which GNU stands. So you have the GPL v3 draft, which comes down heavily on anything even remotely related to DRM and hopes to incorporate safeguards against DRM in the new version of the licence itself.

DRM, closely related to trusted computing, can be used to create a way for companies to trust your computer with their copyrighted material. fsfe.org has an enlightening article which throws light on DRM and trusted computing, and on how they could adversely affect our lives. The author pursues the pros and cons of allowing DRM technologies on one's computer.

Monday 27 March 2006

40+ Suggestions for a better desktop in GNU/Linux

Usability and functionality go hand in hand as far as a desktop environment is concerned. The more functional the desktop, the more usable it becomes. And the more usable it is, the more people start using it. The Gnome developers have given this a great deal of thought, and the results are there for all to see in the latest versions of Gnome, which even a child can navigate. But does that mean we have reached the end of the road as far as desktop usability is concerned? Not at all. There is still scope for improvement. The challenge is in making the desktop as feature-rich and functional as possible without overwhelming the user.

Peter Chabada has listed over 40 suggestions which he feels could make the Linux desktop (particularly Gnome) more user friendly. His idea is to keep the Gnome desktop simple but make it wield more power through added features. Perhaps the Gnome developers could incorporate some of these features in the next release of Gnome.

Sunday 26 March 2006

Google Hires Vim Author

Vim logo
I am a great fan of Vim, a text editor created by Bram Moolenaar. I use Vim not just for coding but also for writing letters and documents.

A percentage of the money donated to the Vim project is used by Bram Moolenaar for the welfare of underprivileged children in Uganda through the ICCF. Thus, by using Vim, encouraging its use and donating some money to the project, one is actually helping the children in Uganda. Read more »

Wednesday 22 March 2006

Incorporating special effects on the Desktop - A brief analysis

Recently, I came across an article which said that a certain yet-to-be-released proprietary OS required top-of-the-line graphics cards in order to display, in full, the special effects being integrated into it. This set me thinking...

Why is the OS industry obsessed with providing richer, processor-intensive graphical effects? Shouldn't the stress be more on providing a functional desktop which runs on average hardware, rather than an obsession with eye candy? After some pondering, I realised that this trend has more to do with market dynamics. It is about staying ahead in the game. The proprietary OSes need to give their users a valid reason to upgrade to the next version. Improvements in the security and functioning of the OS are not enough; there should be something more tangible. And the answer is to shower users with more eye candy and special effects which, hopefully, can be made a selling point in persuading users to part with their money.

This trend has become so prominent that Linux corporations like Novell and Red Hat are compelled to take an active interest in the development of special effects in Linux too through projects like AIGLX and XGL which (they hope) would be an answer to the competition.

Now let's get this right; many of the software applications which run on the OS rightfully need to be feature-rich. For example, a game like Half-Life will be well received if it incorporates better graphics and modelling which utilise the extra processing power. But building more special effects into the OS itself robs the extra power and memory from the applications and games which rightfully require them.

There are other valid reasons too which prompt me to take the viewpoint that less eye candy is better for the OS. Experience tells me that it is futile to try to do productive work on a desktop with all the special effects enabled. The last time I tried it, I was severely distracted and fell short of completing my work. Is it just me, or are there others who have been through the same experience? To do productive work, it always helps to have a fully functional but spartan desktop.

Thankfully, Linux aims to provide a balance between both worlds. One example where this balance is evident is the Clearlooks theme on the Gnome desktop. I find this theme really pleasant and, at the same time, less distracting. And again, one is not compelled to upgrade the OS just to get the new features provided by the software developers.

But Windows users do not have this luxury. For example, a person using Windows 2000 will be forced to buy a copy of Vista if he needs the added security and extra features like better search. And to install Vista on his computer, he will most certainly have to embark on a spending spree to upgrade his PC to accommodate the extra special effects integrated into the OS. The alternative is to keep using the same old OS with fewer features and dwindling security updates.

The bottom line is that it is more beneficial for the users to have a choice when deciding on the features that the OS provides and Linux most certainly is strong in providing these choices.

Sunday 19 March 2006

Mondo Rescue - An easy to use disaster recovery software for GNU/Linux

Studies show that one of the principal points of hardware failure in a computer is the hard disk. This is especially true if you are in the habit of running the machine for lengthy durations between reboots, of the order of days and months, if not years. Mission-critical servers are usually run in this manner. But nowadays, with the advent of always-on internet access, more and more home users find it convenient to keep their machines turned on for days on end, even when they are not in use. In such a scenario, it is only a matter of time before the user is faced with a hard disk failure.

This is where disaster recovery software gains prominence. If you are in the habit of taking a snapshot of your computer's filesystem on a regular basis, then in the event of a hard disk failure you can not only save all your data but also get your system up and running again in a very short time.

There are quite a few ways of getting a snapshot of a live system in Linux, many of them requiring varying degrees of expertise. One piece of software that simplifies the whole act of backing up and restoring filesystems in Linux is Mondo Rescue. More specifically, Mondo Rescue is disaster recovery software developed by Hugo Rabson for GNU/Linux which allows one to effortlessly back up and interactively restore filesystems mounted in Linux, including even NTFS partitions. And what is more interesting is that you are given the choice of backing up to a variety of media such as CD-R, CD-RW, DVD, an NFS share, tape, and even another partition of the hard disk.

Fig: Mondo in action

To start using this software, you have to first download and install it as it is not bundled by default on Linux distributions. And for Debian based systems, it is as simple as :
# apt-get install mondo mondo-doc
You will need the mondo-doc package because it contains the detailed documentation in HTML format. Once the software is installed on the system, one can start to create a snapshot by typing the following command as root:
# mondoarchive
This command opens up a curses-based GUI which walks you through the process of backing up your Linux system to the desired media. To try it out, I decided to clone the partition containing my Linux OS, on which I had installed some additional software from the net. I directed it to back up everything except the /var/log, /proc and /home directories, and to my surprise it neatly created the backup as a series of ISO files ready for burning to CD. In fact, I could have selected the CD recorder option, in which case it would also have automatically burned the images to CD using cdrecord.

Fig: Backing up files in progress

Backing up is all well and good. But what about restoring the data back to the hard disk?
Mondo actually works in conjunction with another piece of software, or rather a script, called Mindi. This script is responsible for creating a bootable CD or floppy image (for cases where you want to restore from an NFS share) which contains the Linux kernel, modules, tools (fdisk, cat, mkfs, gzip, less, more ...) and the libraries that help one do basic system maintenance in Linux.

Fig: Restoring files in Mondo

So to restore my Linux system to its original state prior to taking the backup, all I had to do was boot the computer using the first backup CD, after which I was put into a bash prompt. At the bash prompt, I typed the following command:
# interactive
... and the software asks a series of questions requiring yes or no answers, then starts the process of restoring the backup. I had the choice of restoring the complete filesystem, a subset of directories, or even individual files, which makes this software very flexible to use.

Also, if you need to restore some of the files from the backup CD from within a running Linux environment, you may use the command:
# mondorestore
... and follow the directions, which will have the same effect.

Mondo can also be used on the command line
This utility can also be used entirely from the command line, which will gladden the hearts of people who thrive on working within the shell.

What I found really interesting about the command-line method is the combination of options, which makes this utility really intuitive to use. For instance, if I want to back up the filesystem to a CD, I use the switch -Oc; for backing up to a tape drive it is -Ot, to a remote NFS share -On, and so on. But these are not the only switches available. I can exclude certain directories from the backup using the -E switch and include others using the -I switch. And if I want a running commentary of what is being done, I use the -V (verbose) switch.

For example, instead of using the curses-based GUI while creating the backup, I could just as well have passed all the options on the command line as follows:
# mondoarchive -OiV -p mybackup -I / -E "/media/hda1 /var/log /proc"  -d /home/ravi -s 650m 
The above command will create (-Oi) ISO images containing all the files (-I) in the filesystem except (-E) those under /media/hda1, /var/log and /proc, and save the resulting images in the /home/ravi (-d) folder, naming (-p) them mybackup-[1-20].iso. Each resulting ISO will have a maximum size (-s) of 650 MB.
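Since a restore is only as good as the media it comes from, it is worth checksumming the images before burning them; any later read error then shows up immediately. A small sketch of this habit, shown against scratch files under /tmp so it runs anywhere; in practice you would point ISODIR at the -d directory you gave mondoarchive, e.g. /home/ravi:

```shell
# Record and verify checksums of the backup images so that media
# errors can be caught before a restore is ever attempted
ISODIR=/tmp/mondo-demo
mkdir -p "$ISODIR"
: > "$ISODIR/mybackup-1.iso"     # stand-in for a real image
cd "$ISODIR"
md5sum mybackup-*.iso > mybackup.md5
md5sum -c mybackup.md5           # repeat after burning and re-reading
```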

I found this utility really efficient and easy to use for my modest backup needs. Another nice aspect is that the ISO image created, when burned to a CD, can be used as a fully functional CD-based mini-distribution as well as a recovery CD because, as I mentioned earlier, it contains the Linux kernel, essential libraries, modules and tools.

Pros and Cons of Mondo Archive
After testing and using it for quite some time now, I find Mondo Rescue to be robust disaster recovery software suitable for the needs of home users as well as small to medium businesses. Its curses-based GUI is rather intuitive, and it makes backing up Linux and Windows partitions to removable media quite easy. Its support for backing up to CD, CD-RW, DVD, tape, hard disk and NFS mounts makes taking snapshots of one's hard disk really flexible. Its support for both LVM and RAID, as well as for the GRUB and LILO boot managers, brings it on par with its proprietary counterparts. One can even create multi-CD backups with this software, albeit with an upper limit of 20 CDs per set.

Of course, as with all software, this too has its shortcomings, such as its inability to handle the system and hidden attributes when archiving DOS/Windows files. Also, some features, like using regular expressions to select a group of files to back up, are yet to be implemented, though a button for this is already available in the GUI mode.

Having said that, I think these are mere wrinkles compared to the usability and sheer number of features of this GPLed piece of software, which make it a powerful tool for taking periodic snapshots of one's filesystems.

Friday 17 March 2006

Know more about the configuration switches of your Linux kernel

Unlike in FreeBSD and other Unices, compiling a kernel in Linux is a relatively challenging experience. For instance, the last time I decided to recompile the Linux kernel, I was faced with deciding whether to enable or disable various functionality at configuration time. Just doing a ...
# make menuconfig
... will throw up scores of configuration options which may or may not be useful, depending on the type of machine and how the user intends to use it. And what about when you want to compare the differences between two versions of the kernel? It would be nice to have a web interface where you could browse the configuration options of all the kernel versions and even get an idea of what each option accomplishes.

Well, Jason Wies maintains a website called kernel.xc.net which covers this niche. The site has a clean interface which helps the user compare the configurations of different versions of the Linux kernel. While on the site, you can check what new features are provided by any Linux kernel version, as well as learn more about a particular configuration parameter.

I found the search feature really useful. For example, to find the kernel configuration options related to APM (Advanced Power Management), all I had to do was type the word "APM" into the search box, select the kernel version and my machine architecture, and press OK. I was given a list of all the places in the configuration related to APM. What is more, I got to know what APM stands for, along with short help text on the particular configuration parameter. And if you know more about a configuration option, you can enter it in the message box on the page and communicate it to the author. All in all, a very useful site for anyone looking forward to compiling their own Linux kernel.
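Of course, for a quick local comparison, nothing stops you from running plain diff over two saved configuration files. The sketch below uses two toy .config fragments (the file names and contents are illustrative) and prints only the option lines that changed:

```shell
# Two toy kernel config fragments standing in for saved .config files
cat > /tmp/config-old <<'EOF'
CONFIG_APM=y
CONFIG_SMP=y
# CONFIG_PCMCIA is not set
EOF
cat > /tmp/config-new <<'EOF'
CONFIG_APM=m
CONFIG_SMP=y
EOF

# Strip comment lines, sort, and diff to see which options changed
grep -v '^#' /tmp/config-old | sort > /tmp/old.opts
grep -v '^#' /tmp/config-new | sort > /tmp/new.opts
diff /tmp/old.opts /tmp/new.opts || true
```

Here the diff shows that CONFIG_APM went from built-in (y) to modular (m) between the two versions.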

Wednesday 15 March 2006

Humor: Six relations between Sex and Linux

linux and sex
I have a friend who acts like a Luddite. Now don't get me wrong; he is aware of technology, it is just that he is not passionate about it and bears with it only to get his job done. In the past, I have tried many times, and failed miserably, to kindle an interest in technology in him. Despite these contrasts, we are good friends, and what makes our friendship gel is his obvious knack for looking at the lighter side of things.
Read more »

Tuesday 14 March 2006

IPCop - The perfect Linux firewall

An expert once said that using Linux without a firewall in place is akin to having unprotected sex: you never know when your system will be compromised. That doesn't mean Linux is devoid of firewall software. For one, you have the ubiquitous iptables, which is installed by default on all Linux machines (albeit with an open policy). So to make your machine secure, some amount of work has to be done. I described the basics of configuring iptables in previous articles, namely "iptables - The poor man's robust firewall for Linux" (click here) and "Designing a firewall using Iptables for the Home User" (click here).

But there are other firewalls available for Linux too, and one such piece of firewall software is IPCop. What is unique about IPCop is that it acts as a standalone solution tailor-made for enterprise networks. What is more, it provides many more features than a basic iptables/ipchains firewall implementation and can easily be extended with other features through add-ons.

Joseph Guarino has written a two-part series which simplifies the installation and configuration of the IPCop firewall software, and it should be quite informative for anyone in the business of Linux security. In the first part of the series, he walks one through the installation of the software, and in the second part he dwells on more advanced topics, like setting up a firewall using IPCop for mail and web servers, installing a very useful add-on called Copfilter to enhance the firewall setup, and more. The interesting thing is that once you have installed the firewall software, the configuration can be done using an easily accessible web interface, which endows this software with a great degree of user-friendliness.

Thursday 9 March 2006

An interview with Jon Maddog Hall

Jon Maddog Hall is one of the strong proponents of the Open Source movement. He is currently the Executive Director of Linux International, a non-profit association of groups, corporations and others that work towards the promotion of growth of the Linux operating system and the Linux community. He also serves on the board of several companies and non-profit organizations like USENIX.

At various points in his life, he has worked as a software engineer, systems administrator, product manager, marketing manager and professional educator. In the past, he has worked for reputed companies like Western Electric Corporation, Bell Laboratories, Digital Equipment Corporation and VA Linux. Notwithstanding his name, people who have had the honour of interacting with him on a personal level vouch for his pleasant nature.

Recently, Dahna McConnachie quizzed him on his life, the work he is doing, as well as his opinion on issues related to Linux and the open source movement, which makes an interesting read.

To those wondering how he landed the name Maddog ;), Jon attributes it to those times when he had less control over his temper.

Sunday 5 March 2006

A Six-headed, Six-user Linux system

Linux is a multi-tasking, multi-user operating system. This means more than one person can use the OS at the same time, each running multiple applications simultaneously, with the number of applications each user can run limited only by the hardware on which Linux is installed. One area where this feature of Linux is put to good use is thin-client technology, where a sufficiently powerful master server hosts the Linux OS and multiple stripped-down PCs contain just a motherboard with enough memory, a graphics card and a network card, and no hard disk. These stripped-down PCs, also called dumb terminals, get their display from the master server, and all the processing is also done on the master server.

Now if you are setting this up in a very small area, you can cut costs further by doing away with the individual motherboards and memory and connecting the input and output devices directly to the master server. This is exactly what Bob Smith, an electronics hobbyist and Linux programmer, and his team accomplished. He set up a Linux system which allowed up to six people to use it at the same time. He used standard USB keyboards and mice, and the six monitors were connected to individual PCI graphics cards. A very good feat indeed.

That doesn't mean he didn't run into problems. He says the system is a bit unstable, in that he was getting a kernel oops whenever a user logged out. Nevertheless, it is a very good project which can find uses in small cybercafes, or even in homes where there is a need for more than one computer but it is hard to buy a new one. For further details, you may read the original article.

What is a kernel oops?
An oops report is basically just the dumping of information by the kernel when it encounters a serious problem. The problem can be a code-related bug, such as dereferencing a NULL pointer or accessing out-of-bounds memory, or it could be caused by faulty hardware. The oops report is generated by the kernel to help the end user debug, locate and fix the problem. Sometimes when an oops occurs, the system may seem to continue running normally, but it is likely to be in an unstable state.
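When you suspect a machine has just hit an oops, the report normally lands in the kernel ring buffer and, via the syslog daemon, in the system logs. A quick way to look for it is sketched below; the log file names vary between distributions, and reading dmesg may require root:

```shell
# Search the kernel ring buffer for an oops report, with some context
dmesg | grep -i -B 2 -A 10 'oops' || echo "no oops in the ring buffer"

# Also check the persistent logs (paths differ across distributions)
grep -i 'oops' /var/log/messages /var/log/syslog 2>/dev/null || true
```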

Saturday 4 March 2006

A comparison of Solaris, Linux and FreeBSD Kernels

As a Linux user, I am sure you have come across the saying that Linux is just the kernel, and that all the tools (command line or otherwise) are bundled with it to make it the user-friendly OS that it is. But what about other POSIX OSes like FreeBSD and Solaris? Well, they are said to have the kernel and the command-line tools more tightly integrated with each other. But the differences don't end there. There are big differences in their respective kernels and in how they accomplish a task.

Max Bruning has written a very good article which throws light on the differences (or rather, a comparison) of the kernels of these three OSes. He examines three subsystems, namely scheduling, memory management and file system architecture.

He arrives at the conclusion that kernel visibility and debugging tools play an important role in understanding a system better. Tools which help one understand the kernel code are even more invaluable than access to the source code alone. On this note, he feels Solaris has a distinct advantage over Linux and FreeBSD because it comes with tools like kmdb, mdb and DTrace, which help a developer analyse the code better.

Thursday 2 March 2006

eGovernance and Free Software : An event hosted by the United Nations University

Today I was rather surprised to get an invitation via email to attend a panel named 'eGovernance and Free Software', being hosted on 16 March 2006 by the United Nations University. My first reaction was that someone was pulling a fast one on me. But after visiting their website, I realised that there is indeed a university sponsored by the United Nations, and it enjoys a yearly grant of $40.7 million. It also employs 211 staff from over 30 countries. The charter of this university is to contribute to efforts to resolve the pressing global problems of member countries through research and by training personnel.

The panel will concentrate on how developing countries can achieve technological self-determination, especially in the area of opensource software. The panel will explore, through discussion, why it is in the best interests of the developing world to become an active participant in the opensource movement by contributing to the further development of its products.

The event is being held in New York, and a number of distinguished personalities, such as Mr. Michael Tiemann, Vice President at Red Hat, are set to take part in the panel. So if you live in or around New York, or will be in the city on 16 March 2006, and are interested in the opensource movement, you may drop by the United Nations Headquarters building, conference room 6, and take part in the event.

But to attend the event, you have to first register at their website.