Wednesday, 31 May 2006

How to securely erase the hard disk before selling one's computer

There are times when the news sites are abuzz with sensational items - the kind that tempt one to pitch in and have his/her say, come what may. And the story of someone who bought a laptop on eBay only to find it defective, and who took revenge on the seller by publishing all the personal data found on its hard disk on a website, is by now a legend.

Now it is hard to decide who is in the right here - the person who published the private data on the website (for all you know, the laptop in question could have been damaged in transit) or the seller, who is now the talk of the town and whose life is being dissected. There is no way to know. But that is beside the point. The scary truth is that it is next to impossible to erase all the data one stores on one's storage media without physically destroying it, because with the right tools anybody can recover even deleted data.

So what can be done to alleviate the situation? If you are using GNU/Linux or any other Unix, you have a tool called shred which can be used to wipe all the data from a hard disk. Here is how it works. Suppose I want to erase all the data on my hard disk; I boot using a Live CD like Knoppix, open a shell and type the following command:
# shred -vfz -n 100 /dev/hda 
Here /dev/hda is my entire hard disk. The -n 100 option asks shred to overwrite the whole disk 100 times with random data, -z adds a final pass of zeros to hide the shredding, -v shows the progress and -f forces the write by changing permissions wherever necessary.
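shred is not limited to whole disks; it can be pointed at a single file as well. As a minimal sketch (the file name here is just a made-up example), the following overwrites a file 25 times and then deletes it:
$ shred -v -u -n 25 secret-notes.txt
The -u option truncates and removes the file once the overwriting is done, so there is nothing left behind to recover by name.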

Another GPLed tool (though not specifically related to Linux) which is quite popular is Darik's Boot and Nuke (DBAN), which also does a swell job of wiping one's hard disk.

It is claimed that experts in the field of data recovery can still retrieve some data from a hard disk that has been wiped in the above manner. But at least the lesser mortals who buy second-hand laptops and computers will find it beyond their means to lay their hands on the data.

Tuesday, 30 May 2006

Build your own network firewall using a discarded old machine

Do you have an old discarded computer lying around and are wondering how to put it to good use? I do. And I am sure most of you who bought your first computer around 7 years back do too. Well, now you can put it to use as a network firewall - even if your network contains only one other machine ;).

Will O'Brien has written a very good article giving step-by-step instructions for setting up a network firewall using a discarded computer sans a hard disk. In fact, the firewall runs from a Linux live CD (Devil-Linux) and stores its configuration on a separate medium like a floppy disk or USB key, which is what makes it really interesting. He has also used a 3-port Ethernet NIC (I didn't know there was one) for the experiment, as it uses up only a single PCI slot and the kernel module is shared between the three ports. A really useful project described in a simple and lucid manner.

Monday, 29 May 2006

WINE - An open source project which could be the tipping point for wide scale adoption of Linux by the masses

A few days back, Google did something which took everyone by surprise. It released Picasa - a very popular and advanced image archiving tool with a lot of photo manipulation features, which till now worked only on Windows - for Linux. But what was cleverly shielded from the average user is that the Picasa released for Linux is the very same Picasa for Windows, running on top of Wine - an open source implementation of the Windows API on top of X and Unix. In fact, anybody who uses Picasa will not be aware that he/she is running Windows software on Linux unless it is pointed out.

Wine has been around for a long time now, and there are even for-profit companies such as CodeWeavers who sell a fine-tuned version of Wine to let people run Windows software on Linux. But what amazed me when trying out Picasa was the degree of integration of the software with the native OS. For the first time, a company has proved that it is possible to package Windows software to work efficiently on Linux with minimal or no configuration needed at the user's end. Of course, Google has worked closely with the Wine developers to make this a reality. And a lot of code written by Google developers - 200 patches to be exact - has been contributed back to the Wine community.

Now here is the interesting part. There are countless Windows applications out there which are used daily by different sets of people, but which do not command enough popularity to get the free software community sufficiently excited to start work on a similar project. I myself was really fond of a free program called KeyNote which I used regularly on Windows, and I was a bit disappointed when I couldn't find similar software on Linux. The Wine project gives such people, who are tied down to their proprietary or free Windows-only software, an incentive to switch to Linux.

So the question is, how is what Google has done different from, say, what CodeWeavers has been doing for so long? Well, Google has integrated Wine with one particular piece of software so that it works seamlessly on Linux, whereas CodeWeavers essentially sells a fine-tuned version of Wine which can be installed on Linux, leaving it up to the user to install and configure the necessary Windows software on top of it.

The need of the hour is for more companies to take a leaf out of Google's book and integrate the Wine libraries with their Windows-only software so that it works seamlessly on Linux. If tomorrow, for example, Adobe releases a Wine-integrated version of Photoshop for the Linux platform the same way Google has done for Picasa, we would see more people embracing Linux. And as more and more companies come forward and ship their products with a self-contained version of Wine so they run seamlessly on Linux, I would say it will snap the last thread holding back a large section of users from ditching their proprietary OS for Linux. And Linux would achieve mass appeal.

At this stage some might wonder how this would affect the GNU movement. If something like this happens, there will be both positive and negative effects for the GNU community. The positive one is that a major section of the people would be exposed to GPLed software, start liking it and perhaps even cultivate a taste for the GNU philosophy. The negative one is that the free software community will still not be able to wean people away from the proprietary software to which they are tied.

And finally, will wide-scale adoption of Linux spell the death of Microsoft? Not really. Nobody can wish away a multi-billion dollar company. Microsoft will evolve like all great businesses do and move on to greener pastures - and along the way (hopefully) discard its unfair business practices and stop trying to monopolize the market.

Saturday, 27 May 2006

A Device Driver Development Kit for Linux

A driver development kit provides a build environment, tools, driver samples and documentation to support driver development for a particular operating system. Linux has always lacked a proper driver development kit, and anybody who intended to write a device driver had to make do with sifting through tons of documentation and example source code - the kind of material other operating systems package neatly for their developers.

Greg KH has posted on LKML (the Linux Kernel Mailing List) that a device driver kit has indeed been released for Linux, which contains everything a Linux driver author would need to create Linux drivers. Additionally, it contains a copy of the O'Reilly book "Linux Device Drivers". The device driver kit for Linux can be freely downloaded as an ISO image here (92 MB).

Friday, 26 May 2006

First impressions of Picasa - Google's first rate Graphics suite for Linux

Today, Google did something which would gladden the hearts of thousands of GNU/Linux users - well, at least those who are not too rigid in their outlook about the GPL anyway: it finally released a version of Picasa for Linux. Picasa is a first-rate graphics package rivaling even Adobe Photoshop Elements in ease of use, functionality and, above all, cost. Anyone with some experience in the graphics industry would be aware that Adobe sells a stripped-down version of Photoshop called Photoshop Elements for Windows, targeted at home users. Picasa is a direct competitor to Adobe's product, and a worthy one too. Google has provided the installer in three formats: a deb file for Debian and Ubuntu users, an RPM file for Red Hat users and a BIN file for the rest of the Linux distributions. I downloaded the deb file from Google's Picasa site since I run Ubuntu as my main GNU/Linux distribution. It was a 21 MB download, and the installation went quite smoothly.
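For those curious about what the install step actually involves on Ubuntu, it boils down to a single dpkg command (the file name is a placeholder - it will vary with the version you download):
$ sudo dpkg -i picasa_*.deb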

Once the installation was over, I was pleasantly surprised to see the Picasa link in the Gnome menu. Clicking on it the first time brings up the licence agreement, to which one has to agree. Reading the agreement, one realises that Google allows one to freely use this software only for non-commercial purposes. That is, one has to get Google's explicit permission to use Picasa in a commercial setup.
Fig: The Picasa icon in the Gnome menu

Once the licence agreement is out of the way, the software offers to scan the machine for images. The scanning is relatively fast, and shortly I found all the images on my machine indexed and accessible from the Picasa user interface, which Google chooses to call the image library.

Fig: Picasa first scans for images on one's machine

One look at the interface will convince anyone why this software has become so popular. The interface is designed to be easily used even by a person new to computers. Picasa is rather heavy on features too. For example, I can select a couple of images and create a collage of them at the mere press of a button. Moving a collection of images from one location to another is also a piece of cake, easily done from the "Actions" drop-down button.

Fig: Picasa image library

Google has extended Gmail's tagging features - the star and the label - to Picasa. This, in my opinion, is a time saver. In Picasa, one can tag important images with a star, which helps one keep track of them, and a label can be applied to a collection of similar images.

But I felt the real power of Picasa when I selected an image in the image library and pressed the Enter key. I found myself face to face with the image editor interface, where I could do basic fixes on the selected image: crop it to the size of standard photo prints, rotate and zoom it, change the contrast, and do further tuning like adjusting the brightness and applying different tints. But the thing I like best about this software is that one can remove red-eye from a photograph with the click of a button, which I believe would be the most used feature for any home user - considering that many photos shot by amateurs suffer from this most common defect.

Fig: Create a collage of images with ease

This software is quite intelligent: it checks whether the selected photograph contains any red at all, and if it finds none, the red-eye correction button is automatically disabled.

Other tasks one can accomplish include publishing a picture to one's Blogger.com blog by pressing the "Blog This" button, ordering prints of the photos online, emailing a picture to someone or taking a local printout of it.

One also has the option of viewing all the images in screensaver mode or timeline mode, both of which are quite useful. And if that is not enough, Picasa allows one to select a group of images and create a movie in AVI format by navigating to 'Create -> Movie'. One feature I really liked was the "export as a webpage" feature. When I selected a group of pictures and pressed Ctrl+W, Picasa seamlessly created thumbnails of the pictures and displayed them in a webpage in the default web browser. And by clicking on a thumbnail, I was able to view the picture in its original size.

Fig: Images exported to a webpage from Picasa

Of course, a curious user will find that the Linux version of Picasa is actually the same Windows version running on top of the GPLed Wine software. But Google has done a remarkable job of cleverly shielding this from the average user, and in all respects a user will feel that he/she is using a native Linux application.

Salient Features of Picasa
  • Speedy indexing of images
  • A good collection of image manipulation tools including the popular red-eye correction button.
  • Create an AVI movie of a collection of images.
  • Publish the images to one's personal blog at blogger.com.
  • View images as a slideshow or a timeline.
  • Import pictures from one's digital camera, scanner or even a mobile phone.
  • Order photo prints from your favourite online provider.
  • Keep track of the images by applying a star or a label - similar to what one finds in Gmail.
  • Batch edit a group of pictures.
  • Display files in JPEG, PNG, GIF, BMP, Photoshop and RAW image formats.
  • View movies in AVI, WMV, MPG, ASF and QuickTime formats.
  • Email the images right from inside Picasa.
  • Export the images as a webpage.
With Google having released this superior graphics suite as freeware, I will be amazed if it doesn't give Adobe some sleepless nights pondering the future of its Photoshop Elements software.

Thursday, 25 May 2006

.htaccess File Generator

Apache is one of the most flexible web servers around. And one of the features that aids its flexibility is a hidden file which goes by the name '.htaccess'. This file is used by website administrators to make configuration changes on a per-directory basis, especially when the administrator does not have access to the main configuration file of the Apache web server.
You can use this file (.htaccess) to password protect files in a particular directory of your website, add mod_rewrite rules, force HTTP requests to use SSL and so on. In fact, one can write just about any rule that one could configure in the main configuration file of the Apache web server.
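To give an idea of what such a file looks like, here is a minimal sketch of a .htaccess that password protects a directory. The paths and user name are only assumptions and would have to match your own setup, and the server's main configuration must permit these directives with a suitable AllowOverride setting, or the file will be silently ignored.
# First create the password file with: htpasswd -c /home/ravi/.htpasswd someuser
AuthType Basic
AuthName "Restricted area"
AuthUserFile /home/ravi/.htpasswd
Require valid-user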

But if you find writing the rules by hand to be a hassle, then this webpage will aid you in creating a .htaccess file from scratch with the parameters of your choice.

Wednesday, 24 May 2006

DTrace toolkit ported to FreeBSD

One of the most useful toolkits which gives Sun Solaris a bit of an edge in system administration is DTrace. The DTrace toolkit is a collection of scripts, built on Solaris' DTrace dynamic tracing facility, which helps a Solaris system administrator do an extensive check of the system, track down performance problems across many layers of software and tune the performance of running processes. DTrace scripts are written in the D programming language, which bears a strong resemblance to awk.
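To give a flavour of what D looks like, here is a classic one-liner (run as root on Solaris) which counts system calls per process. It is only a small sketch, not something taken from the toolkit itself:
# dtrace -n 'syscall:::entry { @[execname] = count(); }'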

Work has been under way to port DTrace to FreeBSD. And now John Birrell and his team, who have been working on this project, claim to have reached a major milestone in porting this rather powerful tool to FreeBSD. The basic DTrace infrastructure is in place, and of the 1039 DTrace tests that Sun runs on Solaris, 793 now pass on FreeBSD.

Linux system administrators do not have the luxury of such a powerful integrated toolkit and have to make do with a number of individual programs to achieve some of those tasks. Too bad DTrace has been released under the CDDL licence; otherwise someone would probably have started a project to port DTrace to Linux by now.

Monday, 22 May 2006

Running Scripts from within Nautilus file manager

Recently, when I was installing and using JavE - an ASCII art editor - each time I wanted to run the editor I had to open a terminal, navigate to the directory containing the JavE binary and then execute the command:
$ java -jar jave.jar
After some time, this whole job of opening the terminal and typing the command became quite tedious, and I started wondering if it was possible to start the editor by just double-clicking on the jar file. But double-clicking on it opened the jar file in the Gnome archive manager, which was not what I wanted. I even tried associating the command 'java -jar' with all jar files in Nautilus, but to no avail.

That was when I remembered that Nautilus has a special feature which allows one to pass file names to scripts from the file manager. Gnome has a special folder named nautilus-scripts/ which resides inside the hidden directory '.gnome2/' in one's home folder - the full path on my machine being '/home/ravi/.gnome2/nautilus-scripts/'. Any executable script that one drops into this directory becomes accessible from the Gnome right-click menu.

So I created a bash script named 'Run_Java' and saved it in the folder '/home/ravi/.gnome2/nautilus-scripts/'. And voila! I was able to access and run the script by right-clicking anywhere on the Gnome desktop or in the file manager and selecting the script (see picture).

Fig: Shows how the script is executed from Nautilus
The script I wrote contains just a couple of lines, as shown below (note that the #! line must be the very first line of the script):
#!/bin/sh
# File Name: Run_Java

java -jar "$1"
In the above listing, "$1" holds the first parameter passed to the script - which in this case is the name of the JavE jar file; quoting it protects against file names containing spaces. You can access the nautilus-scripts/ directory in the Nautilus file manager by navigating to File Menu -> Scripts -> Open scripts folder.

Fig: The message one sees when nautilus-scripts folder is opened in the file manager

This is a very useful feature and opens up a lot of avenues, as most GUI tools in GNU/Linux accept command line parameters. For instance, one can open a JPEG image in Gimp from the command line by passing the name of the image file as a parameter. So by writing a bash script and saving it in this magic folder, one can select a group of image files in Nautilus, right-click and choose the relevant script to open all the selected files in Gimp.
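As a minimal sketch of such a script (the name Open_In_Gimp is my own invention), something like the following, dropped into the nautilus-scripts/ folder and marked executable, would open every selected file in Gimp:
#!/bin/sh
# File Name: Open_In_Gimp
# Nautilus passes the selected files as positional parameters
gimp "$@" &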

But not everyone is proficient in writing scripts, you say? No problem - there is a site named g-scripts maintained by Shane.M, who has taken it upon himself to collect and make available Nautilus scripts for diverse purposes. The aforementioned site contains a large collection of scripts, some written by Shane himself and others collected from different sources on the net.

Friday, 19 May 2006

Mac OSX - Should it use the Linux Kernel ?

A couple of days back, in an article titled "Monolithic Kernel vs Microkernel" published on this blog, I had described the intense debate between the two schools of thought fueled by Linus Torvalds and professor Andrew S. Tanenbaum. One of the examples Andy had given as proof of the stability of the microkernel architecture was the popular Mac OS X itself, which (according to Andy) runs on top of the Mach kernel, which follows the microkernel architecture.

Now it is well known that the Mach microkernel has, from its inception, been riddled with problems, both in its design and in its performance. So many people wonder why Apple still uses the Mach kernel with all its design flaws. In fact, there is a group in the Apple community who favour moving OS X to run on top of the Linux kernel instead - which, by the way, is a monolithic kernel.

Daniel Eran has written an interesting article debunking the popular belief that Mac OS X is based on a microkernel architecture, and goes on to explain that Apple's kernel, named XNU, is not implemented as a microkernel. In fact, he is of the opinion that the original Mach kernel on which OS X is based is a fat kernel, and that it was the later Mach microkernel project which was a failure.

My Opinion

We are not short of robust kernels, be it the BSD kernel, Linux or any other. In fact, the beauty of the POSIX environment is that one can swap one kernel for another and, with a little effort, still build a stable and secure OS - which is what makes this design of separating the OS kernel from the userland tools such an exciting proposition.

Thursday, 18 May 2006

Interview : Linus Torvalds

Linus Torvalds - the father of Linux - is known for his reclusive nature and is seen in public less often than free software leaders like RMS. But that should not give rise to any false impressions about him, as he is the final authority on what code goes into the Linux kernel and what stays out. It is not for nothing that he has been given the title "Benevolent Dictator for Life". For instance, Jeff Dike, the creator of User Mode Linux, had to wait many years to see his code finally merged into the official Linux kernel tree.

Kristie Lu Stout interviews Linus Torvalds and tries to get him to open up and share his views on Linux and its future course. This is not a technical interview; rather, it tries to reveal the personality of Linus Torvalds to the readers. Kristie quizzes him on his motivation for creating Linux in the first place, on how the now famous penguin was chosen as the Linux mascot, on his future aspirations and on his relationship with fellow Linux developers. The interview is attuned to the general public rather than the seasoned techie, but it is nevertheless an interesting read. And for those who are interested in watching, CNN will be broadcasting the interview on its CNN International TV channel this coming Saturday and Sunday.

Tuesday, 16 May 2006

JavE - A versatile editor for creating ASCII art

Many years back, when I was in my teens, my father once gave me a picture of Mickey Mouse - the very popular cartoon character. What was unique about this picture was that it was created entirely out of alphanumeric characters - in other words, a picture made of ASCII characters. ASCII stands for American Standard Code for Information Interchange and is a character encoding based on the English alphabet; it contains 95 printable characters and another 33 non-printing control characters. This creation of pictures using ASCII characters is known as ASCII art.

Creating ASCII art was never an easy job. For one, you need imagination and a knack for portraying things in an intelligent, visually appealing manner. And it is much tougher than creating a drawing in any of the numerous graphics suites, since your sole tool for creating ASCII art would be a text editor, which has its own limitations. But with a healthy dose of perseverance and some talent in drawing, it was possible to create beautiful pictures which you could view in your text editor.
 
Fig: An ASCII art banner - a speech bubble reading "JavE is a very powerful editor for creating ASCII Art" beside a "linuxhelp.blogspot.com - for Tips, Tricks and Treats in GNU/Linux - 2 Years!" sign.
Nowadays, though, the perseverance part of creating ASCII art has been greatly eased by a piece of software called JavE. JavE is a one-of-a-kind ASCII editor which can be used to create impressive ASCII art the same way one creates drawings in any graphics suite. The only difference is that the picture is created entirely out of ASCII characters instead of pixels, and the end result can be saved to a text file.

The JavE ASCII editor is written in Java and needs Sun's Java runtime environment installed on one's system to work properly. But to realize the true power of this editor you also have to separately download a collection of fonts called FIGlet fonts and unpack them into the fonts sub-directory of the JavE editor. The FIGlet fonts enable one to write text in ASCII art form. The FIGlet font pack accompanying the JavE editor contains a collection of over 195 fonts - enough to let one's creative juices flow.
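If you are unsure whether a Java runtime is already present on your system, a quick check from the shell will tell you:
$ java -version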

Fig: JavE ASCII editor interface

Once Sun's Java runtime has been installed and the JavE editor has been downloaded and unpacked into a directory (I unpacked it into a directory called javEditor), one can start the editor using the following commands:
$ cd javEditor/
$ java -jar jave.jar
This editor is rather heavy on features. It has most of the tools one finds in any graphics suite, such as tools for drawing lines, Bezier curves, rectangles and circles, a clone tool, a fill tool and a tool for inserting text. It also boasts a clip art library with a collection of ASCII art forms that can be readily inserted into one's creations.

But the one thing I like most in this editor is the tool to convert a GIF or JPEG image into ASCII text. To check it out, I downloaded a picture of Linus Torvalds from the net and used the tool to create an ASCII version of it. You can see the result of the conversion below.

Fig: ASCII representation of Linus Torvalds - created using JavE

A unique aspect of this editor is that one can export one's text art creation as a GIF or JPEG image, which makes it convenient to showcase one's artistic talent.

Here is another interesting application of this very useful piece of software. Suppose I have a couple of individual ASCII pictures that I have created and I want to string them together into an animation. I can easily do that from within JavE. The suite has a component called the movie editor, into which I can import all the individual ASCII pictures I have created and then export the result in a variety of formats, including GIF animation and even compressed JavaScript animation fit for displaying on a webpage.

Fig: An ASCII Tetris game in progress within JavE

And after all this, if you ever get bored, the suite even contains a couple of built-in ASCII games such as Tetris (see picture above) and a labyrinth. So the next time inspiration strikes, open the JavE editor, create your own ASCII art and share it with others.

Sunday, 14 May 2006

Monolithic kernel vs Microkernel - Which is the better OS architecture?

Anybody who reads tech-related news online would by now be aware of the fierce debate raging over the most suitable design for an operating system. On one side are those who favour monolithic kernels - kernels which take care of almost all system tasks, such as interfacing between the sound card and the audio software, the graphics card drivers and so on. Linux is a perfect example of a monolithic kernel, where most of these functions take place in kernel space, and not surprisingly, Linus Torvalds is a strong proponent of monolithic kernels.

On the other side of the fence are those who favour a microkernel architecture, and Andrew S. Tanenbaum - the creator of the Minix operating system - is a staunch supporter of this camp. He believes that the microkernel architecture is a better design principle and is ideal in critical situations where reliability is of utmost importance, as in the military or aerospace industries.

The ball started rolling when, a week back, Andy along with two of his colleagues published a paper titled "Can We Make Operating Systems Reliable and Secure?", in which they argued that the microkernel architecture has many advantages over the monolithic kernel. Then Linus Torvalds responded on a news site explaining why he favoured the monolithic design over the microkernel design.

And now, in a new article, Andy Tanenbaum has given a detailed explanation of why he disagrees with Linus on this topic. He refutes many of the points raised by Linus Torvalds against the microkernel architecture and gives examples of numerous operating systems based on it, including his own Minix 3. The gist of his argument is that microkernels allow one to build self-healing, reliable systems, and in his view reliability scores over performance gains.

It is worth noting that RMS's baby, the GNU Hurd, runs on top of the Mach microkernel.

Saturday, 13 May 2006

Book Review: User Mode Linux

Nowadays, with the advent of more powerful processors and cheaper memory, interest has been rekindled in virtualization technologies - be it a commercial offering such as VMware, which does full virtualization, or an open source one such as Xen, which does paravirtualization. But besides these two, there is a very interesting project called User Mode Linux (UML) which has gained prominence in the Linux arena. UML is used to create Linux virtual machines which run within a Linux computer. What makes UML stand apart from the rest of the virtualization technologies is that support for it has been incorporated into the official Linux kernel tree. So anybody who has downloaded the kernel source from the official website can easily compile his/her own UML-enabled Linux kernel.

UML is the brainchild of Jeff Dike, who is well known within the Linux community. When the person who created a popular piece of software decides to write a book on the subject, the book gains a lot of prominence. So when I came across the book titled "User Mode Linux", authored by the very same Jeff Dike and released in Bruce Perens' Open Source Series, I couldn't resist laying my hands on it.

This is a relatively compact book, spanning 330 pages divided into 13 chapters and 2 appendices.

The author starts the narration by giving a short introduction to UML and how it differs from other virtualization technologies. In this chapter he shares with the readers the trials and tribulations he faced in getting Linus to incorporate the UML patch into the official Linux kernel tree. This chapter also gives a sound idea of how one can use UML to one's advantage in various practical situations.

In the second chapter, titled "A Quick Look at UML", one gets a taste of the inner workings of UML. For example, with the aid of detailed snippets of output and various commands, the author explains why UML is considered to be a process as well as a kernel at the same time.

What is really interesting about UML is that in most respects it is identical to any normal Linux distribution. A person working inside a UML instance will not be aware that he/she is in fact working in a virtual machine rather than on the host Linux OS. One can do anything and everything in UML that can be accomplished in a normal Linux distribution, including tasks such as creating partitions, adding swap space, networking and so on. The third chapter of this really useful book, titled "Exploring UML", takes an in-depth look at carrying out some of these system administration tasks inside UML. The part where the author demonstrates, with the aid of examples, how one can plug any file on the host machine into UML and access it inside the UML instance as a block device truly brings out the flexibility of this virtualization technology.

Running a single UML instance is fine. But what happens when one needs to run multiple UML instances? Is it possible to share a filesystem simultaneously between them? Normally, if you try to share a filesystem between multiple UML instances, you run the risk of corrupting it. This problem is alleviated with the use of what are known as Copy-On-Write (COW) files. The fourth chapter, titled "A Second UML Instance", pursues this topic in greater detail. The author explains the concept of COW files and how one can use them to share a filesystem between multiple UML instances, thus considerably reducing memory and disk utilization.
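To give a rough idea of what this looks like in practice - a sketch with made-up file names, not an excerpt from the book - two instances can boot from the same backing root filesystem image, with each instance writing its changes to its own COW file:
$ ./linux ubd0=cow1.cow,root_fs umid=uml1 mem=128M
$ ./linux ubd0=cow2.cow,root_fs umid=uml2 mem=128M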

This book, which explains a niche subject, takes a hands-on, practical approach which lays stress on getting things done rather than being a mere theoretical discourse. For example, in the fifth chapter, titled "Playing with a UML Instance", the author explains how to connect a tar archive residing on the host machine to a UML instance and access the file from within the UML. He goes on to explain the basic steps needed to get networking going between the host machine and the UML. This chapter breaks just the crust of networking, as the advanced networking concepts get a dedicated chapter of their own.

The next chapter, titled "UML File Management", describes two ways of mounting a directory on the host machine as a UML directory: via the virtual filesystems hostfs and humfs. The uniqueness of these virtual filesystems is that the data is not stored within a UML block device; instead, the files live in the host's filesystem and are accessed through the UML kernel. In this chapter, the author goes into a detailed analysis of the hostfs and humfs virtual filesystems and explains how one can mount a host directory into a UML instance using either method.

One of the advantages of UML is that in all respects it works as a complete Linux operating system, and one can do all the things in UML that one can do in a normal Linux distribution. UML is particularly strong in the networking area, as one can interconnect two or more UML instances to create a virtual network inside one's machine. But one cannot set up networking in UML the same way one does in a normal Linux distribution. To enable networking in UML, one first has to configure a TUN/TAP device, which forms the interface between the UML and the host machine; enabling IP forwarding and routing on the host is also part of the job. On this note, the seventh chapter, titled "UML Networking in Depth", is a very important chapter in this book, as it takes a step-by-step approach to explaining all these concepts to the reader. By going through this chapter, one acquires a deep knowledge of networking concepts like bridging and switching and of the various transports such as TUN/TAP, Ethertap, SLIP, SLIRP and multicast. At the end of the chapter, the author ties up the loose ends by giving a complete example of setting up a multicast network of UML virtual machines - three two-node networks in which one UML instance acts as a switch.

There is a suite of tools which aid a system or network administrator in controlling a UML instance from outside it. Using these tools, one can effectively control the system resources available to a particular UML instance - for example, allocating 256 MB of memory to one instance and just 64 MB to another, all at run time. The chapter titled "Managing UML Instances from the Host" covers the full suite of UML management tools. In particular, one tool called uml_mconsole is explained in depth in this chapter.
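As a small illustration (not taken from the book; the umid 'uml1' is just an assumed instance name), querying and shutting down a running instance from the host looks something like this:
$ uml_mconsole uml1 version
$ uml_mconsole uml1 halt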

Configuring and running UML instances on a non-production machine is fine. But when it comes to using UML on production servers, other aspects also have to be considered. In the next two chapters, the author discusses the various issues, including security issues, that have to be considered and addressed before users are allowed access to UML instances on a server. Traditionally, UML has had two modes of operation: the "tt" mode for unmodified hosts, and the "skas" mode for hosts that have been patched with what is known as the skas patch. Skas stands for Separate Kernel Address Space. Recently a third mode has been added which provides the same security as skas, plus some of its performance benefits, on unmodified hosts; this mode is named skas0. All three modes of operation are covered in detail in the 9th chapter, titled "Host Setup for a Small UML Server". In this chapter the author also shares his views on topics as diverse as managing long-lived UML instances, networking in a server environment, and UML and host memory requirements. In the 10th chapter, titled "Large UML Server Management", one gets to know the security issues faced when allowing UML access on high-traffic servers and the steps needed to overcome them.

Up to this point, the author has assumed that the reader is using a pre-compiled UML-enabled Linux kernel. In the next chapter, titled "Compiling UML from Source", one gets to know which kernel configuration parameters need to be enabled to compile a UML kernel. Since the UML patch is already merged into the official Linux kernel tree, compiling a UML kernel is as simple as enabling the designated configuration options. This chapter literally hand-holds the reader all the way from downloading the kernel source, through setting the configuration flags, to the actual compilation. By the end of the chapter, the reader will have built his/her own UML kernel.
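For the curious, the heart of the procedure is the usual kernel build with ARCH set to um. This is a bare-bones sketch rather than the book's exact recipe, and it assumes you are sitting inside an unpacked kernel source tree:
$ make defconfig ARCH=um
$ make ARCH=um
The build produces an executable named 'linux' at the top of the source tree, which is the UML kernel itself.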

The 12th chapter covers a rather specialized topic and is aptly titled "Specialized UML Configurations". Here one gets to know how UML can be used to explore the software limits of one's machine, such as the hard limits in the Linux networking subsystem and the performance of large-memory UML instances, as well as how to set up a small UML cluster using Oracle's OCFS2.

In the final chapter of this rather well written book, the author shares with the reader the future road map of UML and the technologies that could influence its evolution.

The book also has two appendices, which list all the command line options that can be used while booting a UML instance as well as the various UML utilities used on the host side to control the UML.

About the Author
Jeff Dike, the author and maintainer of User Mode Linux is well known throughout the Linux community. He is currently working as an engineer at Intel. He has been active in Linux kernel development for more than five years. He holds a degree in Computer Science and Engineering from MIT.

Book Specifications
Name : User Mode Linux
ISBN No: 0-13-186505-6
Author : Jeff Dike
No of Pages : 330
Publisher : Prentice Hall
Price : Check at Amazon.com
Rating : Excellent

End Note
This is clearly a do-it-yourself book on UML, with step-by-step instructions all the way through for accomplishing the tasks. That does not mean the theoretical aspects of UML have been ignored; rather, there is the right amount of synergy between theory and practice. The fact that the book has been authored by the very person who created UML lends it a lot of credibility. It should meet all the requirements of anyone interested in this niche area who wishes to gain more knowledge about UML and its workings.

Friday, 12 May 2006

A step-by-step guide to running your own Unix Web Server

What does it take to convert one's computer into a web server? You need a stable and secure network operating system, web server software, a database which scales well and a scripting language of your choice.

The interesting thing is that it is quite easy to convert a computer into a web server using only free technologies. We have a plethora of free OSes like Linux, FreeBSD and OpenSolaris which are robust and secure alternatives to the proprietary ones, a very popular web server in Apache and a robust free database in MySQL. I need not add that PHP is one of the most used dynamic scripting languages on the web and is the preferred choice for building most websites. So is that all that is needed to set up a web server? Not quite... you also need the expertise to configure the various parameters of the above-mentioned software.

In a previous article, I had explained how to serve webpages from one's own machine using the Apache web server.

Dave Tufts has written a three-part series on setting up one's machine as a web server. He has used FreeBSD as his choice of OS, with Apache, MySQL and PHP as the web server, database and scripting language respectively. In the first part of the article, he lists the steps needed to install FreeBSD, including an ideal partitioning scheme for this purpose. In the next part, he takes the reader through installing Apache, MySQL and PHP - interestingly, by compiling them from source. And in the last part, he plows into configuring the Apache web server.

Wednesday, 10 May 2006

Vim Tip : Using Viewports

Vi(m) is a versatile editor with great power built into it. There are a whole lot of commands which one can use to accomplish complex tasks that are next to impossible in other text editors, barring, say, Emacs. A couple of months back, in an article on the Vim editor, I covered some of the most commonly used commands. But what I covered was only a tiny speck of the number of commands available for this editor.

Joe 'Zonker' Brockmeier has written a very interesting article which explains the concept of viewports in Vim. Viewports enable one to open and view multiple documents in the Vim editor simultaneously. It is similar to, but not the same as, the tabs concept one sees in many GUI text editors, where multiple documents can be opened in tabs inside a single editor instance.
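For the impatient, here are a few of the basic viewport commands (the file names are placeholders):
:split notes.txt      " open notes.txt in a new horizontal viewport
:vsplit todo.txt      " open todo.txt in a new vertical viewport
Ctrl-w w              " move the cursor to the next viewport
Ctrl-w q              " close the current viewport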

I have been a Vim user since the first time I tried it and have never felt the need for another editor for any of my editing purposes, be it web development, coding or writing a letter. It is really fascinating that with a couple of keystrokes one can accomplish complex tasks in Vim which would require much greater effort in other text editors.

Monday, 8 May 2006

Vector Linux - A sleek, secure Linux distribution based on Slackware

Over the years, I have installed and used quite a number of Linux distributions. But one distribution which I hadn't got the chance to install and use was the venerable Slackware, because I have had some difficulty downloading it from its official website - they actively encourage downloading the OS via a torrent rather than FTP. Anyway, I recently decided to download a relatively small Linux distribution going by the name Vector Linux. What piqued my interest was that it is based on Slackware. The version I decided to try was Vector Linux SOHO 5.1, where SOHO stands for Small Office Home Office. I downloaded the CD image, burned it to a CD and booted from it.

Fig: aterm - a lightweight X terminal which supports transparency

After a short while, I came face to face with a curses-based text installer. Anybody who has installed FreeBSD or Linux distributions like Debian, Slackware or even Ubuntu will feel right at home with it; I found it quite intuitive. Here I was faced with making choices: I could partition my hard disk using fdisk, resize an existing partition to make space for a new one, or just start the installation by selecting the respective menu items.

I chose to start the installation, as I already had a 3 GB empty partition on my machine. Here my work was cut out for me, and I had to make simple choices like deciding whether to put the home directory on a separate partition, setting up the swap and root mount points, and choosing which filesystem to use - reiserfs, ext3, ext2 and so on. Next, the installer prompted me to choose the integrated packages to install. Here there were no choices to make, as the three packages - the base system which contains the kernel, the SOHO base system which contains the software, and the system configuration package - were all mandatory.

The installer then prompted me to choose optional packages, which included printing drivers, wireless networking drivers, the SANE scanner back-end, Samba and so on. Since I did not have a use for any of these on my machine, I chose not to install the optional packages. After this, the actual copying of files to the machine started, and I had the option of taking a coffee break or reading the names of all the people who contributed to this project scroll by in the upper portion of the screen.

The copying took about 10 minutes on my Pentium IV machine. Once it was done, the installer offered to install a boot loader. Vector Linux ships with the LILO boot loader, and one has the option to install it in a variety of places - the master boot record, a floppy, the boot sector of the partition - or not to install it at all. Since I had already installed other OSes, and because of my affinity for GRUB, I decided not to install LILO.

Next I was faced with the job of configuring the system, which includes tasks like setting the time zone, basic hardware auto-detection, setting up the network, sound setup, the X Window System and the root password. Optionally, one can also set up additional users. What impressed me the most here was that it offered to probe for any legacy ISA sound cards on my system - the first time I have seen a Linux installer offer to detect ISA sound cards. All said and done, the installer detected all the interfaces on my machine and I had no problem booting into Vector Linux.

Vector Linux comes bundled with two stock kernels, one optimized for SATA/IDE hard disks and the other for SCSI hard disks, and during the installation one has to choose between the two.

Once the installation is over, one boots into the KDE 3.4.2 desktop, which is the default in Vector Linux, though it also ships the latest version of the Xfce desktop - a lightweight but complete alternative to KDE.

Vector Linux is a well-designed Linux distribution with a good collection of software, well suited for a small office or home use. What I liked most about this distribution was that it supported all the multimedia formats out of the box. To test this, I downloaded some video clips and music files in QuickTime, RealPlayer, WMV, MPEG and MP3 formats and tried playing each of them by double-clicking. I was able to play all these formats in Kaffeine, the default multimedia player in Vector Linux. What is more, it also has the libdvdcss library installed by default, which allows one to play encrypted DVDs.

Vector Linux comes with a choice of media players - Xine, the all-powerful MPlayer and Kaffeine, to name just a few. I was pleasantly surprised that the developers have integrated the mplayer plugin into the Firefox web browser, which lets one watch video clips inside the browser itself.

Fig: Kaffeine plays all audio and video file formats by default

Fig: Xine playing a video clip

In Vector Linux, one has a robust, easy-to-use administration front end in VASM. VASM - the Vector Linux Administration and System Menu - is a graphical front end which aids the user in doing all the system and network administration tasks with ease. VASM can be run in two modes: the normal user mode, where it helps with common tasks pertaining to the user, like changing his/her password or choosing a window manager; and the super user mode, which allows one to do all the system administration tasks like managing users; changing the configuration of the X server; setting up, enabling and disabling services running on the machine; networking and much more. In short, VASM is to Vector Linux what the Control Panel is to Windows.

Fig: VASM in action

As this distribution is built on a Slackware base, it brings with it all the positive aspects of Slackware which have earned it a dedicated fan following. All the packages are in the tar.gz format, and you have pkgtool, Slackware's own package management tool. It also uses slapt-get - an apt-get equivalent for Slackware - to download, install and uninstall software, along with a front-end called Gslapt which serves the same purpose as Synaptic in Debian-based distros. But Vector Linux has made its own improvements too. For example, the Firefox web browser bundled with it is the latest version, 1.5, and interestingly, it is free from the memory-hogging problem it suffers from on Ubuntu.
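For the record, day-to-day usage of slapt-get is very close to apt-get. A quick sketch, with the package name being just an example:
# slapt-get --update
# slapt-get --install mplayer
# slapt-get --upgrade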

Another aspect which endears this secure Linux distribution to me is its speed. Vector Linux had a couple of extra services, like the ssh server and the inetd server, running by default when I installed it on my machine, yet I was really impressed by how quickly it booted to the desktop. It took just 35 seconds from a cold start even with the extra services running, whereas other distributions took a little more than a minute on the same machine with no ssh or inetd services.

The developers of this Linux distribution have taken pains to see that it has all the software needed for running a small to medium business. It comes bundled with OpenOffice.org 2.0, Scribus 1.3.1 - a desktop publishing program comparable to Adobe PageMaker - and an accounting package called KMyMoney, just to name a few. Samba is also bundled with Vector Linux for people who want to connect to and transfer files to Windows machines on their LAN. And for printing, you have CUPS, the interface through which one feeds documents to the printer.

Fig: Gslapt - a Synaptic equivalent in Vector Linux

Advantages of Vector Linux
Put in a nutshell, these are the advantages a default installation of Vector Linux boasts when compared to the default installation of mainstream Linux distributions like Fedora, Debian and Slackware.
  • Excellent Multimedia support - Plays any video or audio format one throws at it including encrypted DVDs.
  • The latest version of Firefox (1.5) - and no memory-hogging problems either (Ubuntu developers please note ;) ).
  • Sun's Java Runtime Environment ver 1.5 installed by default.
  • Flash ver 7.0 plugin for Firefox installed.
  • Optimized for speed - boots up in much less time.
  • Ideal for a small office setup with the right kind of software bundled with it.
Drawbacks of Vector Linux
The only drawback I found with Vector Linux concerns the collection of software in its repository, which is nowhere near those of Ubuntu and Debian in terms of numbers. But the fact that one can install Slackware packages somewhat relieves this issue. Moreover, I feel the Vector Linux maintainers have made judicious choices in selecting the software included in the repository, as well as in the base install, so that anyone using it in an office setup will find everything they need available in the repository.

End Note
All things considered, if one is on the lookout for a Linux distribution which is robust, fast, secure, able to play multimedia files without any configuration on the user's side, contains recent versions of the software and is good enough to be used in a small business, then Vector Linux could fit the bill. And if you are specifically looking for a Slackware-based distribution which covers all of the above criteria, then Vector Linux is the obvious choice.

Sunday, 7 May 2006

Server monitoring with munin and monit

When running a production server, whether it is under heavy load or otherwise, it is very desirable to keep track of the various physical aspects of the server - the CPU load, the memory used by the services, whether the required services are running, the network traffic and so on - as well as to make sure that all the necessary services keep running even under heavy load and do not die on you.

Linux server administrators have a perfect combination of tools in Munin and Monit to accomplish this. Munin surveys the computer and presents all the information as graphs through a web interface. And it is Monit's job to keep track of the processes and restart them if they do not respond, or even stop a process completely if it puts more load on the machine than it can bear. You can use Monit to monitor files, directories and devices for changes, such as timestamp, checksum or size changes. You can also monitor remote hosts: Monit can ping a remote host and can check TCP/IP port connections and server protocols. These two tools are developed and maintained by different people, but used together they give system administrators a powerful way to monitor and manage the services running on Linux/Unix servers in an optimal manner.
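To give a feel for Monit's configuration language, here is a small sketch of a rule that watches Apache. The pidfile and init script paths are Debian-style assumptions and will differ from system to system:
check process apache with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start"
    stop program = "/etc/init.d/apache2 stop"
    if failed host 127.0.0.1 port 80 protocol http then restart
    if cpu > 60% for 2 cycles then alert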

Falko Timme has jotted down his experiences installing and configuring these two nifty tools (Munin and Monit) on his server to monitor its statistics and control the running processes, which gives a sound understanding of the use of these two rather powerful tools.

Thursday, 4 May 2006

strace - A very powerful troubleshooting tool for all Linux users

Many times I have come across seemingly hopeless situations where a program, once compiled and installed on GNU/Linux, simply fails to run. In such situations, after I have tried every trick in the book - searching the net, posting questions to Linux forums - and still failed to resolve the problem, I turn to the last resort, which is to trace the misbehaving program. Tracing a program throws up a lot of data which is not normally visible when the program runs, and in many instances sifting through this volume of data has proved fruitful in pinpointing the cause of the error.

For tracing the system calls of a program, we have a very good tool in strace. What is unique about strace is that, when it is run in conjunction with a program, it outputs all the calls the program makes to the kernel. In many cases, a program may fail because it is unable to open a file or because of insufficient memory, and tracing its execution will clearly show the cause of either problem.

The use of strace is quite simple and takes the following form:
$ strace <name of the program>
For example, I can run a trace on 'ls' as follows :
$ strace ls
And this will output a great amount of data onto the screen. If it is hard to keep track of the scrolling mass of data, there is an option to write the output of strace to a file instead, using the -o option. For example,
$ strace -o strace_ls_output.txt ls
... will write all the tracing output of 'ls' to the 'strace_ls_output.txt' file. Now all that is required is to open the file in a text editor and analyze the output for the necessary clues.

It is common to find a lot of system calls in strace output. The most common of them are open(), write(), read() and close(), but the calls are not limited to these four; you will find many others too, along with their return values.
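Since the output can run to thousands of lines, a quick way to home in on trouble is to grep for failed calls - for instance, against the output file from the earlier example:
$ grep ENOENT strace_ls_output.txt
An ENOENT ("No such file or directory") next to an open() call is one of the most common reasons a program falls over.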

For example, if you look in the strace output of ls, you will find the following line:
open("/lib/libselinux.so.1", O_RDONLY)  = 3
This means that some component of ls requires the library libselinux.so.1 to be present in the /lib directory. If the library were missing or in a different path, the part of ls which depends on it would fail to work. The return value of 3 - a valid file descriptor - signifies that the library was opened successfully.

Here I will share my experience of using strace to solve a particular problem I faced. I had installed all the multimedia codecs, including libdvdcss, which allows me to play encrypted DVDs on Ubuntu Linux, which I use daily. But after installing all the necessary codecs, when I tried playing a DVD movie, Totem gave me an error saying that it was unable to play the movie (see the picture below). Since I knew that I had already installed libdvdcss on my machine, I was at a loss as to what to do.

Fig: Totem showing error saying that it cannot find libdvdcss

Then I ran strace on Totem as follows:
$ strace -o strace.totem totem
... and then opened the file strace.totem in a text editor and searched for the string libdvdcss. Not surprisingly, I came across the lines of output shown in the listing below.
# Output of strace on totem
open("/etc/ld.so.cache", O_RDONLY) = 26
fstat64(26, {st_mode=S_IFREG|0644, st_size=58317, ...}) = 0
old_mmap(NULL, 58317, PROT_READ, MAP_PRIVATE, 26, 0) = 0xb645e000
close(26)
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
...
open("/lib/tls/i686/cmov/libdvdcss.so.2", O_RDONLY) = -1 ENOENT (No such file or directory)
stat64("/lib/tls/i686/cmov", {st_mode=S_IFDIR|0755, st_size=1560, ...}) = 0
...
stat64("/lib/i486-linux-gnu", 0xbfab4770) = -1 ENOENT (No such file or directory)
munmap(0xb645e000, 58317) = 0
open("/usr/lib/xine/plugins/1.1.1/xineplug_inp_mms.so", O_RDONLY) = 26
read(26, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\320\27"..., 512) = 512
fstat64(26, {st_mode=S_IFREG|0644, st_size=40412, ...}) = 0
In the above listing, which I have truncated for clarity, the line containing libdvdcss.so.2 clearly shows that Totem is trying to find the library in, among other places, the '/lib/tls/i686/cmov/' directory, and the return value of -1 shows that it has failed to find it. So I realized that for Totem to correctly play the encrypted DVD, it has to find the libdvdcss.so.2 file in one of the paths it searches.

Then I used the find command to locate the library and copied it to the directory /lib/tls/i686/cmov/. Once I did this, I tried playing the DVD again in Totem, and it played without a hitch.

Fig: Totem playing an encrypted DVD Movie

Just to make sure, I took another trace of Totem, and it showed that the error was rectified: in the listing below, the open() call on libdvdcss.so.2 now returns a valid file descriptor (26) instead of -1.
# Output of the second strace on totem
open("/etc/ld.so.cache", O_RDONLY) = 26
fstat64(26, {st_mode=S_IFREG|0644, st_size=58317, ...}) = 0
old_mmap(NULL, 58317, PROT_READ, MAP_PRIVATE, 26, 0) = 0xb644d000
close(26) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
...
open("/lib/tls/i686/cmov/libdvdcss.so.2", O_RDONLY) = 26
...
stat64("/lib/tls/i686/sse2", 0xbffa4020) = -1 ENOENT (No such file or directory)
munmap(0xb645e000, 58317) = 0
open("/usr/lib/xine/plugins/1.1.1/xineplug_inp_mms.so", O_RDONLY) = 26
read(26, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0\360\20"..., 512) = 512
fstat64(26, {st_mode=S_IFREG|0644, st_size=28736, ...}) = 0
Opening the man page of strace, one will find scores of options. For example, with the -t option, strace will prefix each line of the trace with the time of day. One can even restrict the trace to specific system calls using the -e option. For example, to trace only the open() and close() system calls, one can run the command as follows:
$ strace -o strace.totem -e trace=open,close totem
The ubiquitous strace should not be confused with the DTrace that ships with Sun Solaris. strace is a single tool which takes care of one small part - tracing a single program - whereas Sun's DTrace is much more powerful: it comes with a toolkit of scripts which can track, tune and aid the user in troubleshooting a system in real time. Moreover, DTrace scripts are written in the D language, which closely resembles C and awk. Put another way, the strace tool in GNU/Linux provides only one of the many functions provided by DTrace on Sun Solaris. That being said, strace plays an important part in helping the user troubleshoot programs by providing a view of the system calls a program makes to the Linux kernel.

PS: If you are wondering which movie I was intent on watching, it is "For a Few Dollars More" - an all time classic western starring Clint Eastwood. I really like this movie.

Wednesday, 3 May 2006

An Interview with Theo de Raadt - The creator of OpenBSD

In these times when news about Linux dominates that of the other free OSes, we seldom remember those other OSes which are just as open, robust and secure - if not more so - than Linux. One such OS is OpenBSD, created and maintained by Theo de Raadt and his small team of dedicated developers. The latest version of OpenBSD is 3.9. What is unique about OpenBSD is the stress it lays on security and the integration of cryptography. It may also be noted that OpenBSD supports binary emulation of most programs from Solaris, FreeBSD, Linux, BSD/OS, SunOS and HP-UX - which means there is a better than good chance that one's favourite Linux program will run on OpenBSD.

The OpenBSD developers are also the maintainers of one of the most widely used pieces of software around: OpenSSH. OpenSSH encrypts all traffic (including passwords) to effectively eliminate eavesdropping, connection hijacking and other attacks. Additionally, OpenSSH provides secure tunneling capabilities and several authentication methods, and supports all SSH protocol versions. Anybody who SSHes into a remote Linux/Unix server can be fairly sure that they are using OpenSSH to do it, which says a lot about the popularity and usefulness of this software.

Jeremy Andrews at kerneltrap.org quizzes Theo de Raadt about the major changes the project has gone through during its evolution, the problems faced in getting hardware vendors to open up the documentation of their products, the problems the OpenBSD team faces in getting funds, his reaction to the lack of support for OpenSSH from the corporate entities who make heavy use of it, and his aversion towards binary blobs, among many other things.

The questions are well thought out and the answers are equally interesting, which makes going through this interview an informative experience.

Tuesday, 2 May 2006

Scholarships from Red Hat for creating world class open source software

Are you an IT firm feeling the heat of the competition breathing down your neck, but unwilling to dilute your principles to get the winning edge? So what do you do? If you are an open source firm, you harness the tens of thousands of IT-aware young minds in developing countries to create applications for your platform, and as an incentive, you promise attractive cash prizes in the form of scholarships. This is exactly what Red Hat has done.

Red Hat is conducting a competition for students of the Indian subcontinent and its neighbouring countries: Pakistan, Nepal, Bangladesh, Sri Lanka and Bhutan. The competition is open to students who are studying for B.Tech, B.E, B.Sc (IT or Computer Science), MBA (Systems/IT streams), MCM and M.Tech degrees.

To participate, a student or group of students (maximum of 5 per group) should develop a piece of high-quality open source software in any of the following areas:
  • Localization of Indian Languages
  • Development of open source Enterprise Application Integration (EAI) frameworks
  • Applications for small and medium businesses
  • Audio and video applications for smart appliances or cellphones
  • Software for GIS
  • Network and security related tools and
  • System software and tools
The last date for registration and submission of project proposals is 31st May 2006. More details of this unique venture by one of the foremost open source IT product firms can be had at its site.