Wednesday 31 January 2007

AmaroK - A versatile music player for GNU/Linux

I was always of the impression that the function of a music player was little more than playing music. That impression took a beating when I tried a versatile music player called AmaroK. This music player is developed specifically for the KDE environment but can also be used in other window managers. What is unique about this music player is that it supports a wide variety of features apart from playing music.

I am not a keen connoisseur of music. But I do enjoy listening to certain kinds of music. My taste in music is driven by the rhythm of a song rather than its lyrics. So even if a song has no meaning but is pleasing to my ears, I tend to enjoy it all the more. Whenever I come across songs that I truly like, I usually rip them from my audio CD and save them on my computer so that I do not have to pop in the CD each and every time I want to listen to them. Now if you are wondering what all this has to do with AmaroK, read on...

The first time you install and run AmaroK, it will scan your home folder and any other locations you have specified for music files, compile the data and create a collection. This collection will be stored in a database of your choice. Building your collection is an essential step in using all of AmaroK's features. As of now, AmaroK supports the MySQL, PostgreSQL and SQLite databases. If you are not database savvy or do not have a database installed on your machine, you can choose the SQLite option. SQLite is a small C library that implements a self-contained, embeddable, zero-configuration SQL database engine. Among other features, SQLite can store a complete database inside a single file. AmaroK bundles the SQLite database engine with it. Since I already have a MySQL database installed on my machine, I chose MySQL over SQLite. Once the scanning is out of the way, you can start listening to music from within AmaroK.
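A quick way to see what "a complete database inside a file" means in practice is the sqlite3 command-line shell. The file and table names below are purely illustrative - this is not what AmaroK itself creates:

```shell
# Create a database file, add a table and a row, then query it.
sqlite3 /tmp/collection.db "CREATE TABLE tracks (title TEXT, artist TEXT);"
sqlite3 /tmp/collection.db "INSERT INTO tracks VALUES ('Hotel California', 'Eagles');"
sqlite3 /tmp/collection.db "SELECT artist FROM tracks;"   # prints: Eagles
rm /tmp/collection.db   # the entire database was just this one ordinary file
```

No server process, no configuration - which is exactly why AmaroK can ship it as the default backend.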

Fig: The first time, you will be prompted to build a collection of all your music files.

The AmaroK user interface has a number of browsers, as seen from the tabs on its left-hand side: the Context browser, Collections browser, Playlist browser and Files browser. Let's say I am listening to a song titled "Hotel California" - a very popular song from an album which saw sales of millions of copies. If I want to know more about the song while playing it, I need only open the Context browser. Here is the interesting part: the Context browser has three tabs, namely Music, Lyrics and Artist. In the Music tab, I can view details such as the name of the song, the number of times I have listened to it, the last time I played it, as well as a thumbnail of the cover of the album which includes this song. If I click on the thumbnail of the album, I am taken straight to the relevant Amazon webpage where I can buy the album.

Fig: Details of the song being played, including a few other interesting data.

If I want to read the lyrics of the song, I need only click on the Lyrics tab, and if they are not already there, AmaroK will automatically download the lyrics from the internet (yes, you need a net connection) and display them in the tab. And if you detect some mistake in the lyrics, it is also possible to edit them in place and save the result.

Fig: Read the lyrics of the song while listening to it.


Fig: Right click on any song track and directly burn it to CD/DVD.

The Artist tab will show the relevant Wikipedia page of the artists who composed the song. In the case of "Hotel California", it showed me the Wikipedia page of the Eagles.

Fig: Read more about the artists who sang the song.

Fig: AmaroK will automatically pull the album cover of the song from the internet and let you store it.

As I noted earlier, AmaroK will scan all the music on your hard disk and compile a collection which is stored in a database. The Collections browser shows all the songs sorted alphabetically by album and artist name.

It is possible to create playlists in AmaroK. A playlist is merely a group containing a mix of songs from different albums to which you wish to listen at a given time. To create a playlist, just drag the songs you wish to include from the Collections browser to the right-hand pane; AmaroK will create a playlist of the songs, and you can give it a custom name. The newly created playlist can be viewed in the Playlist browser on the left-hand side. Initially, you will find a number of pre-created folders in the Playlist browser, including a collection of internet radio stations. By just double-clicking on one of the stations in the playlist, I was able to listen to a music stream from a radio station via the internet. You can also categorize your playlists by rearranging them into your own custom playlist folders.

AmaroK also has the option of sending the name of every song you play to the popular last.fm site. last.fm is a service that records what you listen to and then presents you with an array of interesting things based upon your tastes, such as artists you might like, users with similar taste, personalized radio streams, charts, and much more. This form of mining data related to music is known as scrobbling. Of course, for this to work, you first need to create an account with the last.fm site.

Considering the sheer number of unique features available in AmaroK, it is, without doubt, one of the best music players around. What I truly like about this music player is that apart from playing music, it provides tools which allow you to learn more about your favorite song - from the artist's name to the name of the album, the lyrics of the song, details of the band and even the popularity ranking of the song depending on how many times you have played it.

Sunday 28 January 2007

Install Debian from within Windows

That is right, Debian has got itself a new Win32 installer. This new software is targeted at people who are not tech savvy enough to know the steps needed to burn the Debian ISOs on to a CD/DVD. The first time I read the news, I wondered how it was any different from installing Linux on a UMSDOS filesystem? It is very different, it seems...

The setup consists of a Debian installer loader which merely downloads a Debian netboot installer - you can choose between a GUI install and a text-based install. On the next reboot of the computer, GRUB loads and prompts you to either boot into Windows or initiate the Debian installation. This is made possible by utilizing the services of Grub4DOS, a variant of the GRand Unified Bootloader whose loader, GRLDR, can be started from within the Windows boot manager.

Once the Debian installer starts, the rest of the steps are the same as those you would carry out in a normal installation of Debian. So you have the option of repartitioning your hard disk from within the installer and dual boot between Debian and Windows or entirely wiping out your Windows OS to make way for Debian.

The Debian installer loader can be downloaded from the goodbye-microsoft.com website. A couple of screenshots of the installer have also been made available here.

Saturday 27 January 2007

How to use Tabs in Vim Text Editor

Vim
Vim is a very powerful text editor created by Bram Moolenaar. Vim is so versatile that it can even be used as a plug-in in Microsoft Visual Studio.

If you are interested in learning how to use the Vim editor, there are plenty of good resources available online.
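As a small taste of the topic in this post's title, here is how tab pages (introduced in Vim 7.0) are used; the file names are of course just examples:

```shell
# Open several files, each in its own tab page:
vim -p chapter1.txt chapter2.txt chapter3.txt

# Once inside Vim:
#   :tabedit notes.txt    open a file in a new tab
#   :tabnew               open a new, empty tab
#   gt  /  gT             move to the next / previous tab
#   :tabs                 list all open tabs
#   :tabclose             close the current tab
```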

Friday 26 January 2007

Free Book - Linux Kernel in a Nutshell

One of the advantages of using GPLed software is that anybody who wishes to use or modify the code can do so without fear of any repercussions. Ditto for the documentation of the software. This has at times tempted many a book author to release their books under a liberal license and make their efforts available for free in an electronic format.

One such author is Greg Kroah-Hartman, who has released his book titled "Linux Kernel in a Nutshell" under the Creative Commons Attribution-ShareAlike 2.5 license, which allows you to download and redistribute the book.

This book is not new; rather, it has been significantly revamped to include details of the 2.6.18 Linux kernel.

This book covers the entire range of kernel tasks, starting with downloading the source and making sure that the kernel is in sync with the versions of the tools you need. In addition to configuration and installation steps, the book offers reference material and discussions of related topics such as control of kernel options at runtime.

The author claims this book is targeted at the lay person who wishes to delve deep into understanding the Linux kernel; apart from a basic familiarity with Linux shell commands, no particular prerequisites are expected of the reader. So it is a how-to sort of book which explains the steps that lead to properly building, customizing, and installing the Linux kernel.

So why should you recompile a Linux kernel?
There are many advantages to compiling your own Linux kernel. For one, you need to enable only those modules which are required by your machine. For example, if your machine does not have infrared support or does not need PCMCIA, then you can disable those features in the kernel configuration and build your custom kernel. This will make the kernel lean and speed up the boot process. Similarly, if you intend to run Linux on a 486 machine (yes, it is entirely possible), you can turn off all the other processor-specific support in the kernel configuration file and build a kernel targeted specifically at your processor.

So if you are curious and wish to learn how to configure, compile and install your own custom-made Linux kernel, then this book will be very useful.
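To give a flavour of what the book walks you through, the classic build sequence for a 2.6-series kernel looks roughly like this (run as root from the unpacked source tree; the version number here is only an example):

```shell
cd /usr/src/linux-2.6.18
make menuconfig          # pick only the drivers and features your machine needs
make                     # compile the kernel image and the modules
make modules_install     # install the modules under /lib/modules/2.6.18
make install             # copy the kernel to /boot and update the bootloader
```

The book covers each of these steps in far greater depth, including what to do when `make install` is not appropriate for your bootloader setup.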

Table of contents of "Linux Kernel in a Nutshell"
  • Title page
  • Copyright and credits
  • Preface
  • Part I: Building the Kernel
    • Chapter 1: Introduction
    • Chapter 2: Requirements for Building and Using the Kernel
    • Chapter 3: Retrieving the Kernel Source
    • Chapter 4: Configuring and Building
    • Chapter 5: Installing and Booting from a Kernel
    • Chapter 6: Upgrading a Kernel
  • Part II: Major Customizations
    • Chapter 7: Customizing a Kernel
    • Chapter 8: Kernel Configuration Recipes
  • Part III: Kernel Reference
    • Chapter 9: Kernel Boot Command-Line Parameter Reference
    • Chapter 10: Kernel Build Command-Line Reference
    • Chapter 11: Kernel Configuration Option Reference
  • Part IV: Additional Information
    • Appendix A: Helpful Utilities
    • Appendix B: Bibliography
    • Index
All the chapters have been made available as individual PDF files and can be downloaded from the author's website. This book is published by O'Reilly and if need be, you can also buy a printed version of the book. It is a very nice book which teaches the art of configuring, building and installing your very own custom Linux kernel.

Tuesday 23 January 2007

CNR for all - An easy way of installing software on any Linux distribution with just a few clicks

Linspire is synonymous with its popular CNR ("Click 'N Run") software, where you download a package of your choice from the Linspire CNR store and install it just like you would in Windows - that is, by double clicking on it. The CNR warehouse has a collection of over 20,000 Linux packages, libraries and products, some of them commercial products like Win4Lin Pro, CodeWeavers' CrossOver Office and TransGaming's Cedega, which are made available to Linspire / Freespire users. Users can search for applications by title, popularity, user rating, category, function, or author.

Till now, CNR contained software primarily for users of the Linspire or Freespire Linux distributions. But Kevin Carmony - the President and CEO of Linspire Inc, which operates the CNR repository - has now made public his intention of supporting all other Linux distributions via its CNR one-click install software.

Just to put it in perspective, consider this scenario... At present, if you are a Slackware Linux user and you want to install, say, GnuCash - a financial application - there is no easy way of installing it other than downloading and compiling the source of GnuCash and all its library dependencies. This is because the Slackware distribution does not support Gnome or GTK2-based software, and so the official Slackware repository does not offer a compiled version of GnuCash.

With CNR supporting all Linux distributions, it will be possible to download the GnuCash binary from the CNR website and install and run it on Slackware with just a couple of clicks. It will also be easy to upgrade any software to the latest version. Finally, we will have a process of installing software using the CNR installer, just like you do in Windows, and that too for any Linux distribution.

Fig: The CNR warehouse with the software packages divided into categories

Kevin Carmony claims CNR does dozens of things to make finding, installing and managing software on your desktop computer extremely easy. For example, it is very easy to find the right piece of software with user reviews, charts, screenshots, descriptions, friendly names, and so on. Once you've found what you're looking for, with literally one click the software is installed on your computer and icons are added to your desktop and Launch menu. CNR then notifies you when updates are available, which you can then install with one click. With this announcement, CNR now has a new website at CNR for all.

Sunday 21 January 2007

Book Review: SELinux by Example

SELinux is a project started and actively maintained by the U.S. National Security Agency to provide a Mandatory Access Control (MAC) mechanism in Linux. It had been a long-standing grouse of Linux power users and system administrators that Linux lacked fine-grained access control over running processes as well as files. While Solaris touts its famous RBAC and Microsoft Windows has its own way of providing finer rights to its resources, Linux had to put up with the simple but crude user rights known in tech speak as discretionary access control. But with the SELinux project making great strides and now being bundled with many major Linux distributions, it is possible to effectively lock down a Linux system through judicious use of SELinux policies. SELinux implements a more flexible form of MAC called type enforcement and an optional form of multilevel security.

The book "SELinux by Example" is authored by three people - Frank Mayer, Karl MacMillan and David Caplan - and is published by Prentice Hall. The target audience for this book is SELinux policy writers and system administrators, with more of the content dedicated to policy writers. There are a total of 14 chapters and 4 appendices spread over just 400 pages. The 14 chapters are broadly divided into three parts, with the first part containing chapters which provide an overview of SELinux, its background and the concepts behind it. The second part contains 7 chapters which are most useful for SELinux policy writers, with detailed explanations of the syntax used in writing policy files. It is the third part, namely "Creating and Writing SELinux Security Policies", which system administrators could put to most use; here the authors provide plenty of detail on working with SELinux.

In the second chapter, the authors introduce the concept of type enforcement access control, an understanding of which is imperative to one's knowledge of SELinux. They further talk about the concepts of roles and multilevel security. And true to the title of the book, all these concepts are explained by analyzing the security controls of the ubiquitous passwd program.

In the succeeding chapter, the authors explain the underlying architecture of SELinux - more specifically, how SELinux integrates with the Linux kernel via the Linux Security Module (LSM) framework, the organization of the policy source file, and how to build and install policies.

SELinux policies are to a large extent based on object classes. For example, you can create an object class and associate a set of permissions with that class; all objects associated with that class will share the same set of permissions. In the fourth chapter, one gets to know the different types of object classes and the permissions that can be assigned to these classes. A total of 40 classes and 48 permissions are discussed in this chapter.

The next chapter, titled "Type Enforcement", goes into a detailed analysis of all the types and attributes as well as the rules that can be used. The majority of an SELinux policy is a set of statements and rules that collectively define the type enforcement policy. Going through the chapter, I was able to get a fair idea of the syntax used in writing TE policies.
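To give a rough idea of that syntax, a type-enforcement allow rule takes the general form shown below. The passwd/shadow example echoes the book's running passwd analysis, but is reproduced here from memory rather than verbatim:

```shell
# SELinux policy language (not shell) - the general shape of an allow rule:
#
#   allow <source type> <target type> : <object class> { <permissions> };
#
# For example, a rule letting the passwd program's domain read and write
# files labelled with the shadow password file's type:
#
#   allow passwd_t shadow_t : file { read write };
```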

Keeping in mind the complexity of the subject, it helps a great deal that at the end of each chapter there is a summary section where the authors list the important points covered in the chapter. Moreover, one gets to answer a couple of questions and check one's knowledge of the topic being discussed.

In the 6th chapter, the authors explain in detail the concept of roles and their relationships in SELinux. In fact, what I really like about this book is that each concept of SELinux gets a chapter of its own. For instance, constraints, multilevel security, type enforcement, conditional policies... all are explained in chapters of their own.

One thing worth noting is that Fedora Core 4, RHEL 4 and above ship with the targeted policy by default, whereas to completely lock down a Linux machine, you need to embrace the strict SELinux policy. But the strict policy has the side effect of breaking some existing Linux applications which expect looser security controls. In the targeted policy, the more confining rules are focused on a subset of network applications that are likely to be attacked, so in most cases one can manage with the targeted policy. This book mostly deals with the strict policy of SELinux, and in chapter 11 the authors dissect the strict example policy maintained and updated via the NSA and Fedora Core mailing lists.

There is also another policy, called the Reference Policy, which is an attempt to rework the strict policy maintained by the NSA to make it easier to use, understand and maintain, as well as more modular; this is covered in the succeeding chapter, titled "Reference Policy".

The chapter titled "Managing an SELinux System" is one which system administrators will relate to; here the authors throw light on the hierarchy of SELinux configuration files, with the purpose of each file explained in simple terms. And considering that SELinux comes bundled with a rich set of tools meant for system administrators, one gets to know the usage of some of them and also learns about the common problems faced while administering an SELinux system.
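For a taste of this territory, here are a few everyday SELinux administration commands on a Fedora/RHEL system; which of them the chapter covers in detail I am recalling only loosely:

```shell
sestatus            # show whether SELinux is enabled, its mode and the loaded policy
getenforce          # print just the mode: Enforcing, Permissive or Disabled
setenforce 0        # switch to permissive mode until the next reboot (root only)
setenforce 1        # switch back to enforcing mode
ls -Z /etc/shadow   # -Z shows the security context (label) attached to a file
```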

And in the last chapter of the book, the 14th, one is introduced to the task of writing policy modules. Here the authors hand-hold the reader through the creation of a policy module for the IRC daemon on Fedora Core 4 from start to finish - right from the planning stage, through writing and applying the policy module, to the final testing of the module.

This book also includes 4 appendices which contain a wealth of knowledge on SELinux. I especially liked appendix C, which lists all the object classes and permissions, as well as appendix D, which has an annotated list of SELinux system tools and third-party utilities.

It could be just me, but I found that I was better able to assimilate what the authors explained when I read the 13th chapter of this book first and then went back to read from the 4th chapter onwards. Having said that, I find this book to be an excellent resource for people interested in developing SELinux policies and, to a lesser extent, a resource for system administrators. At the very least, this book imparts a deep understanding of the features, structure and working of SELinux.

Book Specifications
Name : SELinux by Example
ISBN No : 0-13-196369-4
Authors : Frank Mayer, Karl MacMillan and David Caplan
Number of Pages : 430
Publisher : Prentice Hall
Price : Check the latest price at Amazon.com
Rating : A very informative resource, ideal for SELinux policy writers and Linux/Unix integrators and, to a lesser extent, system administrators.

Thursday 18 January 2007

OpenSolaris installation screencasts

Today, I came across a very good collection of screencasts which visually walk one through backing up and repartitioning one's laptop and then installing OpenSolaris on it. The OpenSolaris release is 5.11, and all the installation steps are shown. You need Flash Player version 6 or greater to watch the screencasts - not a big issue, as Adobe has released Flash Player version 9 for Linux. In a nutshell, these are the steps showcased in the screencasts.
  1. Back up your laptop to prevent any data loss, should something go wrong. The laptop has Windows XP Professional pre-installed, so first the disk is defragmented and the scandisk utility is run to make sure there are no errors. To do the actual backup, they use the free G4U - short for Ghost for Unix - which is similar to the Norton Ghost disk cloning software in Windows. It can be downloaded onto three floppies or as an ISO and burned onto a CD. Using G4U, they perform a backup of the whole disk to a remote FTP server. G4U can also be used to do a disk-to-disk backup.
  2. The second step in the procedure is to repartition the disk to make room for OpenSolaris. For this, they demonstrate how to shrink the Windows partition using the free software QtParted. This is a GUI front end for the 'parted' tool and is similar to Partition Magic in that it non-destructively shrinks the partition. This software is available on the System Rescue CD, which is a remastered Gentoo Linux distribution. One thing worth noting is that while creating the new partition, they format it as Linux swap.
  3. Install OpenSolaris on the newly created partition. This screencast shows all the steps in the installation of OpenSolaris, albeit in a time-compressed sequence.
  4. And finally, another screencast shows how to download and install Sun Studio 11 software on OpenSolaris.
All in all, there are 5 screencasts, which I found to be truly informative. If you have the time and the bandwidth, watching them is highly recommended. Moreover, they may not be around for long, as the domain does not work properly except for the link containing the screencasts.

Tuesday 16 January 2007

traceroute - a very useful troubleshooting tool which reveals the bottlenecks on the Internet.

I am sure anyone who is at least Internet savvy will be aware that to move data from one point, say A, to another point, say B, across the Internet, it has to pass through a number of intermediary points, say C, D, E... But what many won't know is that your data is not transferred in one piece when it is sent over the net. Rather, it is split into chunks of, say, 1500 bytes each; each chunk is enclosed in what is known as a packet, which contains some additional data such as the destination IP address and port number, apart from some other details which give the packet its unique identity; and finally it is sent across the net.

While the packets travel the path from point A to point B, each packet may take a different route depending upon diverse factors, and eventually they are reassembled in the original order at the receiving end to reproduce the document you sent in the first place.

The intermediate gateways through which the packets pass before reaching the final destination are known as hops. So for data to travel from point A to point B on the net, it has to go through a number of hops.

Linux and Unix, being network operating systems, have a number of powerful tools which help the network administrator find out a wealth of data about their network and the Internet. One such tool is the ubiquitous traceroute.

The traceroute tool is available in all Unix and Linux distributions and is used to find potential bottlenecks between your computer and a remote computer across the net. Its usage is quite simple:
# traceroute <domain or IP address>
Usually you have to be root to run this tool, as it resides in the /usr/sbin directory, which is not in a normal user's PATH. But if you use the full path, you can run it as a normal user as follows:
$ /usr/sbin/traceroute <domain or IP address>

For example, this is the output I received when I ran a trace on the www.yahoo.com domain from my machine.
$/usr/sbin/traceroute www.yahoo.com

traceroute to www.yahoo.com (69.147.114.210), 30 hops max, 40 byte packets
1 10.2.71.1 (10.2.71.1) 21.965 ms 22.035 ms 22.111 ms
2 (ISP) (ISP gateway) 22.510 ms 25.716 ms 26.073 ms
3 61.246.224.209 (61.246.224.209) 69.212 ms 59.778 ms 63.334 ms
4 59.145.6.1 (59.145.6.1) 65.632 ms 64.750 ms 64.868 ms
5 59.145.11.69 (59.145.11.69) 63.562 ms 64.219 ms 63.742 ms
6 203.208.143.241 (203.208.143.241) 318.632 ms 307.733 ms 316.650 ms
7 203.208.149.25 (203.208.149.25) 317.534 ms 308.116 ms 307.507 ms
8 203.208.186.10 (203.208.186.10) 245.835 ms 247.878 ms 248.862 ms
9 so-1-1-0.pat1.dce.yahoo.com (216.115.101.129) 286.774 ms 289.702 ms so-1-1-0.pat2.dce.yahoo.com (216.115.101.131) 326.470 ms
10 ge-2-1-0-p141.msr1.re1.yahoo.com (216.115.108.19) 324.044 ms 324.497 ms 326.011 ms
11 ge-1-32.bas-a1.re3.yahoo.com (66.196.112.35) 333.479 ms 333.019 ms ge-1-41.bas-a2.re3.yahoo.com (66.196.112.201) 292.967 ms
12 * * *
13 * * *
14 * * *
15 * * *
.
. //Truncated for brevity
.
29 * * *
30 * * *
As you can see from the output, traceroute defaults to a maximum of 30 hops. The first line of the output gives the IP address of the www.yahoo.com domain, which is 69.147.114.210, the maximum number of hops traceroute will probe before giving up, and the size of the probe packets, which is 40 bytes.

The next 30 or so lines show the IP address or domain name of the gateway servers through which the packets pass, as well as the time in milliseconds of the ICMP TIME_EXCEEDED response from each gateway along the path to the host. The traceroute program utilizes the IP protocol's time to live (TTL) field. By default, it starts with a TTL value of 1, but this value can be changed with the -f option.
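A few related options worth knowing; these are standard in the traditional traceroute shipped with Linux distributions:

```shell
/usr/sbin/traceroute -f 5  www.yahoo.com   # start with TTL 5, skipping the first four hops
/usr/sbin/traceroute -m 15 www.yahoo.com   # probe at most 15 hops instead of the default 30
/usr/sbin/traceroute -q 1  www.yahoo.com   # send 1 probe per hop instead of the usual 3
```

The -f option is handy when you already know the first few hops (your LAN and ISP) are fine and only care about what lies beyond them.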

Now let's take a closer look at the output of the traceroute to the yahoo.com domain as shown in the listing above. As you can see, the second hop is always to one's ISP's gateway, as shown by the address (I have removed the address of my ISP's gateway). On the same line, following the IP address, there are three time values in milliseconds. There are three values because traceroute by default sends three probe packets of 40 bytes each per hop. The three time values are the times taken to send a packet and receive an ICMP TIME_EXCEEDED response from the gateway - put another way, the round-trip times of the packets. So for the three packets to reach my ISP's gateway and get an echo back, it takes 22.510 ms, 25.716 ms and 26.073 ms respectively, as displayed by the values for the 2nd hop.

Let's look at the 5th and 6th hops in the output above. If you compare the times, you will find a drastic increase: 63.562 ms for the 5th hop versus 318.632 ms for the 6th. This is because up to the fifth hop, the gateway servers were within the Indian subcontinent itself, whereas the gateway at the 6th hop is in Singapore, and so it takes that much more time to get a reply. Generally, smaller numbers mean better connections.

Check out the 11th hop. It shows two domains - one for the first two packets and a different one for the third packet.

And from the 12th hop onwards, I get a series of timeouts, as shown by the asterisks. So my trace of the www.yahoo.com domain resulted in a series of timeouts and did not complete. The problem could be one of the following:
  • The network connection between the server on the 11th hop and the one on the 12th hop is broken.
  • The server on the 12th hop is down.
  • Or there is some problem with the way the server on the 12th hop has been set up.
To make sure, I did a ping of the www.yahoo.com domain and, as expected, received 100% packet loss, as shown by the ping output below.
$ ping -c 2 www.yahoo.com
PING www.yahoo-ht2.akadns.net (69.147.114.210) 56(84) bytes of data.

--- www.yahoo-ht2.akadns.net ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1009ms
Usually this would mean I could not access the concerned domain. But in yahoo.com's case, I was able to access the domain without any problem - in all probability, their website is mirrored across a number of servers spread around the world, or their servers are simply configured to drop ICMP echo requests. So if one server is down, the query is re-routed to the next nearest server.

traceroute is a very useful tool to pinpoint where an error occurs on the internet. It can also be used to test the responsiveness of a domain or server. For example, if your route to a server is very long (takes over 25 hops), performance is going to suffer. A long route can be due to a less-than-optimal configuration within some network along the way.

Similarly, if you see in a trace output a large jump in latency (delay) from one hop to the next, that could indicate a problem: a saturated (overused) network link, a slow network link, an overloaded router, or some other problem at that hop. It can also indicate a long hop, such as a cross-country link or one that crosses an ocean (compare the timings of the 5th and 6th hops in the yahoo.com trace output above).

Thursday 11 January 2007

A perfect New Year Gift from Sun Microsystems - A Free Solaris Media Kit

A few weeks back, I shared with you the news of Sun Microsystems distributing a free media kit containing the latest build of the Solaris 10 operating system. I had placed an order for a media kit at their website and, guess what, a couple of days back I received my copy of Sun's free media kit.

The media kit is a fabulous DVD case containing three DVDs - a copy of the Solaris 10 6/06 build for the x64/x86 platform, a copy of the Solaris 10 6/06 build for the SPARC platform and, finally, a DVD containing Java software goodies, which include Sun Studio 11, Sun Java Studio Enterprise 8 and NetBeans ver 5.0.

Fig: Sun Media Kit DVD case

Fig: DVD case opened

Fig: The Media Kit contains 3 DVDs

I was successful in installing Solaris 10 on my machine from the DVD, and the installation went without a hitch. Solaris 10 comes with a graphical installer as well as a text-based installer. Sun specifies the minimum recommended memory for running Solaris 10 to be 512 MB, but I was able to successfully install the OS on a machine with 256 MB of memory. In fact, if your machine has just 256 MB of RAM, Solaris automatically selects the text-based installer; for machines with at least 512 MB of RAM, it uses the graphical installer; and if your machine has less than 256 MB of RAM, it refuses to install at all. Solaris 10 bundles two desktops - the Common Desktop Environment (CDE) and the more popular Java Desktop System, which is really a customized GNOME desktop. Solaris 10 also comes with two X servers, the Xsun server and the more common Xorg server, and you can choose one over the other at the time of installation.

You can look forward to a review of Solaris 10 on this site in the near future. But for now, you may whet your appetite by viewing a couple of photos I took of the Sun media kit ;-). And those who haven't yet placed an order for the free media kit can do so at Sun's website - but before that, just make sure that your machine has the memory required to run this robust operating system.

Tuesday 9 January 2007

Status of the OLPC project

It seems the One Laptop Per Child project, which aims to provide a very cheap, child-friendly laptop powered by Linux to every child in third world countries, is making steady progress. The design of the laptop has been finalized and a couple of prototypes have already found their way into the hands of a select few in the media, who have come out with reviews of the laptop.

To the uninitiated, the One Laptop Per Child project, popularly known as OLPC, is the brainchild of Nicholas Negroponte, who is also the chairman of the project. The aim is to provide a laptop to each child in developing nations so that they can harness the power of IT to further their education. As of now the laptop costs a little more than $100, but in a couple of years the price is expected to come down to around $50.

The OLPC website states that the "laptop will be Linux-based, with a dual-mode display, both a full-color, transmissive DVD mode, and a second display option that is black and white reflective and sunlight-readable at 3× the resolution. The laptop will have a 500MHz processor (As of now, it is actually 377 MHz AMD processor) and 128MB of DRAM, with 500MB of Flash memory; it will not have a hard disk, but it will have four USB ports. The laptops will have wireless broadband that, among other things, allows them to work as a mesh network; each laptop will be able to talk to its nearest neighbors, creating an ad hoc, local area network. The laptops will use innovative power (including wind-up) and will be able to do most everything except store huge amounts of data".

The manner in which the power is to be generated is yet to be finalized. But it seems a lot of ideas are being generated.

James Turner has compiled a detailed report on the status of the OLPC laptop prototype after taking it for a test drive. He has also provided photos of the final prototype, which is named "XO". He notes that (as of now) power for the laptop will be generated using a "Yo-Yo like device that can be pulled by hand or foot" instead of a hand crank, and that "peak power consumption will be around 5 Watts for high-demand media applications", falling to a mere 350 milliwatts for just keeping the mesh network alive.

A month back, Håkon Wium Lie of Opera Software had shared his thoughts on the OLPC machine, where he explained how he installed and ran the Opera web browser on this laptop.

A number of developing countries have signed up for this unique project, which aims to sell the laptops only to governments, which in turn will distribute them to children in schools. But unfortunately, the Indian government has not shown much interest in the project, stating that the money could be better utilized elsewhere. As of now, India is taking a wait-and-watch attitude. Perhaps we Indians will see this laptop once the price comes down to the expected $50 a couple of years from now.

Sunday 7 January 2007

Introducing the new Nokia N800 Internet Tablet - powered by Linux

Many news sites are abuzz about Nokia's new offering, an Internet tablet named the N800. Previously, I had written about the Nokia N770, the N800's predecessor. Nokia has reportedly added a number of enhancements to the N800 which are lacking in its predecessor, such as an integrated web cam, better styling, beefier hardware specs (more processing power and more memory), two full-sized SD card slots and more. But the one thing which attracts me to this mobile device is that it is powered by Linux.

The approximate hardware specifications of the N800:
  • CPU speed - 320 MHz
  • Screen size - 4.1 inches
  • Screen resolution - 800x480
  • Memory - 128 MB
  • Flash Memory - 256 MB
  • Weight - 215 g
  • Web cam
  • Stereo speakers
  • Touch screen
  • Powered by Linux
  • Price - Not yet announced.

Fig: Nokia N800 Internet Tablet and accessories

Fig: N800 and N770 side by side (Courtesy: John Tokash)

Already a number of people who are passionate about this new Nokia offering have shared their first-hand experiences. Foremost among them is John Tokash, who has posted a very good video of the N800 which you can view here - do watch it, you will be impressed.
Also, a person going by the pseudonym 'thoughtfix' has been blogging about all aspects of both Nokia internet tablets, the N770 and the new N800, and provides a lot of details about the device.

Updates will be added as and when they are available.

Friday 5 January 2007

How to get a Windows Tax Refund

Ever wonder how Microsoft got so rich? Yes, I can visualize you pointing to their flagship OS, Windows. But it is not as simple as it looks. A major portion of Microsoft's OS revenue comes from deals struck with various hardware vendors. The revenue it gets through sales of boxed versions of Windows is a minuscule percentage compared to the money it gets through OEM deals. So when you go out to buy a computer, especially from an established PC manufacturer, in the majority of cases you have no choice but to get it pre-loaded with Windows. And even if you do not want Windows and intend to run some other OS, you end up paying for a copy of it.

In India at least, the situation is somewhat different because the assembled computer market is thriving and many people choose an assembled computer over a branded one. But what do you do if you do not want to pay for a copy of Windows while buying a branded PC, when you are sure you intend to run Linux on it? You can do what Serge Wroclawski did. With a good dose of patience, perseverance and good luck, he was able to get back over $52.00 - the price of an OEM version of Windows - which he was charged while buying a Dell PC.

Thursday 4 January 2007

A sneak preview of the expected features in KDE 4.0

Ever wonder what KDE 4.0 is going to look like when it is finally released some time this year? As far as end users are concerned, it is going to be much more beautiful, responsive and usable than KDE 3.5.

Some of the features that it will have are as follows:

KDE 4.0 is expected to make extensive use of SVG (Scalable Vector Graphics) for images instead of the non-scalable pixmaps used now. For example, in KDE 3.5.x the games artwork is in pixmaps and is at best lackluster, but KDE 4.0, which will use SVG, promises some great in-game artwork.

Fig: Kreversi game in KDE 3.x and 4.0 respectively

Fig: KMahjongg game in KDE 3.x and 4.0

Fig: Ksysguard in KDE 3.x and 4.0

The start menu is going to be redesigned. The sneak preview released indicates that it will have inner tabs and that applications will be grouped dynamically based on the user's habits. Update (09-Jan-2007): A number of people have written in to point out that the menu for KDE 4.0 has not been finalized yet; while it just might incorporate some of the features of the Kickoff menu shown below, work is still ongoing, and it is being developed using the Qt 4.2 libraries. The new menu is known by the name Raptor (more details here).

Fig: Sneak preview of the start menu in KDE 4.0

KDE 4.0 will replace the present DCOP inter-process communication (IPC) system with D-Bus, a more advanced system built from the ground up. IPC is what lets different applications communicate with each other.

KDE 4.0 will feature an API layer called Solid which will interact with projects like HAL (the hardware abstraction layer) to let hardware connect smoothly with KDE.

KDE 4.0 will deliver a better multimedia experience through a project called Phonon, which will collaborate with Solid. There will be no more need to choose between multimedia backends, as the Phonon API will take care of it.

Plasma will provide the next-generation desktop experience in KDE 4.0. The plan is to integrate three separate applications - Kicker (the panel), KDesktop and SuperKaramba (widgets) - into a single application. And most surprising of all, it will be possible to run the beautiful Dashboard widgets of Mac OS X in KDE 4.0.

KDE 4.0 will sport a brand new icon set created by the Oxygen project.

The KDE developers are working to provide a better communication experience through a project named Decibel. Through this project, it will be possible to provide integrated chat and phone communication with networks such as MSN, Jabber and Skype.

And lastly, the Akonadi project intends to design an extensible cross-desktop storage service for PIM data and media data, allowing KDE, GNOME, POP and IMAP clients to communicate through the same storage protocol. This too will be available in KDE 4.0.

But the biggest change is going to be under the hood, so to speak. KDE 4.0 will use the Qt 4.2 library, which brings with it its own extensive set of improvements. For instance, Qt 4 is designed to use much less memory and to perform faster. Besides the speed improvements, Qt 4 has a lot of other features and simplifies some things, so programmers will need less time to develop applications for KDE 4.0.

And since the Qt 4 library has been released under the GPL on all platforms, one can look forward to KDE 4.0 being ported to Windows and Mac OS X. Earlier versions of Qt were available under the GPL only on Unix/X11 and required a commercial license for development on Windows and Mac OS X, so KDE 3.x and earlier, which use Qt 3.x, were difficult to port to these platforms.

Tuesday 2 January 2007

Book Review: Core Python Programming - 2nd Edition

Python, the dynamic object-oriented programming language created by Guido van Rossum, is known to deliver both the power and general applicability of traditional compiled languages without the complexities accompanying them. Coupled with its ease of use, programs written in Python run on multiple operating systems and system architectures, giving it the same portability as any interpreted language. My first brush with Python was when I tried out a beautiful game called PySol - really a collection of over 200 card games - which is coded entirely in Python. Nowadays a variety of Python web frameworks have also cropped up which promise the same kind of rapid application development that is possible using other programming languages.

I found the book titled "Core Python Programming", authored by Wesley J. Chun and published by Prentice Hall, to be an ideal book from which to learn the wonderful Python language. The book is quite voluminous, with 23 chapters spanning 1050 pages. It is divided into two parts: the first, titled "Core Python", contains 14 chapters which impart a sound understanding of the semantics of the language; the second, titled "Advanced Topics", is a collection of 9 chapters which give a good introduction to specialized uses such as database programming, network programming, threads in Python, GUI programming and so on.

In the first chapter of the book, readers get to know the various features of Python and the steps needed to install Python on one's machine. When you install Python, it also provides its own shell where you can execute pieces of Python code. The author has taken advantage of this throughout the book: each concept and piece of syntax is followed by bits of code which readers can try out in the Python shell in their entirety. I found this a much easier way to learn the language, as one need not go through the write-compile-execute cycle prevalent in traditional languages.
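To give a flavour of that interactive style (my own snippet, not an example from the book), each of these lines can be typed straight at the Python shell prompt and the result inspected immediately:

```python
# Expressions typed at the interactive prompt are evaluated on the spot.
greeting = "Hello, Python"
print(greeting.upper())   # a string method
print(len(greeting))      # a built-in function
print(3 ** 4)             # exponentiation operator
```

This try-it-as-you-read workflow is exactly what makes the shell such a good learning tool.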

In-depth coverage is provided for important concepts such as lists, tuples and dictionaries, as well as data types and string sequences, each of which gets a chapter of its own. The sixth chapter, titled "Sequences: Strings, Lists and Tuples", is the second largest in the book and is quite detailed in its coverage of the topic.
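For readers new to Python, the three container types those chapters cover can be sketched in a few lines (the values here are my own illustration, not the book's):

```python
# List: a mutable sequence.
fruits = ["apple", "mango", "banana"]
fruits.append("guava")

# Tuple: an immutable sequence, often unpacked into variables.
point = (3, 4)
x, y = point

# Dictionary: a mapping of keys to values.
ages = {"alice": 30, "bob": 25}
ages["carol"] = 35

print(len(fruits), x + y, sorted(ages))
```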

Chapter 9 deals with file manipulation, where the author introduces the built-in functions available in Python for opening, reading from and writing to a file. Interestingly, the functions are illustrated with the aid of short, easy-to-understand examples, and a couple of modules related to file handling are also introduced in this chapter.
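The basic open/write/read cycle that chapter teaches looks roughly like this (a minimal sketch of my own; the file name is invented for the demo):

```python
import os
import tempfile

# Create a scratch file in the system temp directory.
path = os.path.join(tempfile.gettempdir(), "demo.txt")

with open(path, "w") as f:   # open for writing
    f.write("line one\n")
    f.write("line two\n")

with open(path) as f:        # default mode is reading
    lines = f.readlines()

print(len(lines), lines[0].strip())
os.remove(path)              # clean up after ourselves
```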

Errors and exceptions form the basis of the 10th chapter, where the different errors and exceptions supported in Python are explained. This chapter also explains how programmers can create custom exception classes, which I found quite informative.
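A custom exception class of the kind the chapter describes can be as simple as subclassing a built-in exception (the class and function names below are my own invention for illustration):

```python
class NegativeNumberError(ValueError):
    """Raised when a negative argument is not allowed (hypothetical example)."""

def square_root(n):
    if n < 0:
        # Raise our custom exception just like any built-in one.
        raise NegativeNumberError("cannot take the square root of %s" % n)
    return n ** 0.5

print(square_root(16))
try:
    square_root(-9)
except NegativeNumberError as err:
    print("caught:", err)
```

Because it inherits from ValueError, callers can catch it either specifically or via the broader built-in class.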

One of the biggest advantages of Python is that its functionality is split up into modules. A module can be just a single Python file containing a collection of functions or classes which can be reused in Python programs: all one has to do is import the module to start using those pieces of code. Chapter 12, titled "Modules", gives a firm understanding of this concept and introduces the different ways in which you can import external pieces of code into a Python program.
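The different import styles the chapter walks through can be summarized in a few lines (the module here is the standard-library math module; the alias is my own choice):

```python
import math                  # import the whole module
from math import pi, sqrt    # import specific names into our namespace
import math as m             # import under an alias

# All three styles reach the same underlying code.
print(math.floor(3.7))
print(sqrt(pi) == m.sqrt(m.pi))
```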

Chapter 13, titled "Object-Oriented Programming", is by far the largest chapter in the book, spanning over 100 pages. In it, the author endeavors to give a sound base in object-oriented concepts as well as how they relate to programming in Python, and introduces a large number of Python classes, methods and descriptors.
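The core ideas - classes, inheritance and method overriding - fit in a short sketch (my own example, not code from the book):

```python
class Instrument:
    def __init__(self, name):
        self.name = name

    def play(self):
        return "%s makes a sound" % self.name

class Guitar(Instrument):
    def __init__(self):
        # Delegate common initialisation to the parent class.
        super().__init__("guitar")

    def play(self):
        # Override the parent method while still reusing it.
        return super().play() + " (strum)"

print(Guitar().play())
```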

Regular expressions play a very important part in programming, primarily because manipulating text and data is a necessity, and regular expressions make it easy to modify and mould data to one's liking. Python has strong support for regular expressions, and the second part of the book, "Advanced Topics", starts with a chapter on them. In this chapter, one gets to know the regular expression module and the many functions associated with it. The author also provides a couple of examples which give insights into the ways regular expressions can be used in Python to reformat data.
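Here is a small taste of the data-reformatting idea, using the standard re module (the date format chosen is my own example, not one from the book):

```python
import re

# Rewrite dates from DD-MM-YYYY to YYYY/MM/DD using capture groups:
# \1, \2, \3 refer back to the day, month and year groups respectively.
text = "Posted on 09-01-2007 and updated on 10-01-2007."
reformatted = re.sub(r"(\d{2})-(\d{2})-(\d{4})", r"\3/\2/\1", text)
print(reformatted)
```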

The next two chapters give an introduction to the world of sockets and how Python can be used to write client-server programs.
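A client-server exchange of the kind those chapters build up to can be sketched with the standard socket and threading modules (my own minimal echo example, all on the local machine):

```python
import socket
import threading

def echo_server(srv):
    # Accept a single client, uppercase its message and send it back.
    conn, _ = srv.accept()
    data = conn.recv(1024)
    conn.sendall(data.upper())
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,)).start()

cli = socket.socket()
cli.connect(srv.getsockname())
cli.sendall(b"hello")
reply = cli.recv(1024)
print(reply)
cli.close()
srv.close()
```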

Multithreaded programming forms the basis of the 18th chapter. Here the author introduces a couple of modules available in Python which make it quite easy to create threads in one's Python program.
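The threading module is one of the modules covered; a minimal sketch of spawning worker threads (my own example) looks like this:

```python
import threading

results = []
lock = threading.Lock()

def worker(n):
    # Guard the shared list so concurrent appends stay safe.
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()       # wait for every worker to finish

print(sorted(results))
```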

I found the chapter titled "Web Programming" very interesting to read, especially since Python is used in a large way to create dynamic websites. And the next chapter, titled "Database Programming", gives a sound introduction to the Python objects which allow one to easily connect to and retrieve data from databases.
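The database chapter revolves around Python's DB-API; a hedged sketch using the standard sqlite3 module (the table and column names are my own invention) shows the connect/cursor/execute pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()
cur.execute("CREATE TABLE songs (title TEXT, artist TEXT)")
cur.executemany("INSERT INTO songs VALUES (?, ?)",
                [("Imagine", "John Lennon"),
                 ("Yesterday", "The Beatles")])
conn.commit()

cur.execute("SELECT title FROM songs ORDER BY title")
titles = [row[0] for row in cur.fetchall()]
print(titles)
conn.close()
```

The same connect/cursor/execute calls carry over to other DB-API drivers, which is what makes the interface worth learning.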

I found this book really informative and especially suited for budding Python programmers. At the end of each chapter there is an exercise section, which imparts a nice touch to the book as it allows you to test your knowledge. Even though the advanced topics (the second part of the book) are not covered in depth, the author succeeds in providing enough knowledge about the relevant Python modules and functions, followed by a couple of examples, to whet one's appetite without overwhelming the reader. This is the second edition of the book and it has been significantly revamped to include new features introduced in Python 2.5.

Book Specifications
Name : Core Python Programming 2nd Edition
ISBN No: 0-13-226993-7
Author : Wesley J. Chun
Number of Pages : 1050
Publisher : Prentice Hall
Price : Check the latest price at Amazon.com
Rating : Excellent Book to start learning the Python language.

The author, Wesley J. Chun, is a former Yahoo employee who played a major role in creating Yahoo Mail and Yahoo People Search using Python. He has over 20 years of experience in the IT field, with over a decade of experience programming in Python.

Readers please note: I had originally contributed this review to Slashdot.org

Monday 1 January 2007

15 tips to choose a good text type

When you use good fonts in articles, whether in print or on screen, it always makes a positive impression on the reader. Good fonts motivate a person to read an article from start to finish. Many times I have come across books - especially scientific journals - with a very small typeface, so small that you end up squinting to read the text. In those circumstances, even if the article in question holds my interest, I usually pass it by or at most just skim the headings. Nowadays most printed books have a good typeface, and publishers have realized the advantages of using scientifically designed fonts to enhance the reading experience.

But selecting a good text type is not imperative just for print; it holds equal importance in publishing articles for the web. Until a few years back, font rendering in Linux was below par, and the result was that viewing web pages was at best lackluster. But nowadays font rendering in Linux has improved significantly, with support for anti-aliasing and sub-pixel hinting, and it has turned into a very good experience.

In a previous post titled "Optimal use of fonts in GNU/Linux", I wrote about all the facets related to fonts and how one can use fonts optimally in Linux. Now Juan Pablo De Gregorio blogs about the characteristics of good fonts and which fonts work well in particular situations, which makes an interesting read.