Wednesday, 29 November 2006

FizzBall - A well-designed, enjoyable game for Linux

Anybody who has played games on their PC will be familiar with the classic game Breakout, where you bounce a ball off a paddle to smash rows of bricks. While the original game does not sport any special features, it has spawned a number of Breakout clones which add special effects such as power-ups that boost the ball for a short while - and which make the game far more entertaining and enjoyable to play. A couple of years back, I enjoyed playing a Breakout clone called DxBall. But most of these so-called Breakout clones are developed to run exclusively on Windows. And one long-standing grouse of Linux users is the dearth of quality professional games which run on Linux.

But that is bound to change as more and more professional game developers seriously consider Linux as a viable platform alongside Windows for releasing their games. One such company is Grubby Games - founded by Ryan Clark and Matt Parry - which develops games that entertain as well as educate.

FizzBall, one of the games developed by Grubby Games and released for Linux, bears some similarity to the classic Breakout in that you have to bounce a bubble off a machine which serves the same function as the paddle in Breakout. But barring that, the game play is entirely different. The aim of the game is to collect all the animals from the wild by directing the bubble towards them. At the beginning of each level the bubble is small and bounces off any animal larger than itself. So you have to collect the food - apples, coconuts, acorns and so on - littering the area; as the bubble gobbles these up, it grows in size and becomes able to collect larger animals. The level is completed once all the animals are collected inside the bubble, at which point you are taken to the next level. There are over 180 levels in this game.

Fig: You have to break the crates to get to the animals inside.

Fig: Another game level.

What I really liked about the game is that the developers have kept a sharp eye for detail. The game is gorgeously animated and illustrated. For example, the animals do not remain stationary but move around. When the bubble bounces off an animal, the animal emits a sound - a cow moos, a lion roars and so on. And if the bubble, while still tiny, hits a skunk, the skunk releases a smell. The animals you have collected in each level are kept in an animal sanctuary. All through the game, you get lots of money and power-ups which you collect by directing the machine to them. The money you collect helps you hop from one island to another (there are seven of them) and also feed the animals residing in the sanctuary.

Fig: Animal sanctuary

And the power-ups provide additional powers to the bubble - the gravity bubble, energy shield, faster bubble and wacky weather, just to name a few. There are bonus levels after every few regular levels which allow you to gain additional points and money. Each island also has offbeat paths that introduce a new animal. And in some levels you come face to face with an alien which shoots at you and the animals, and it is your duty to capture the alien by directing the bubble towards it.

Fig: View your trophies in the trophy room

The game has two modes - the regular mode and the kids mode. In the kids mode, you do not lose the bubble even if you fail to catch it with the machine. And each new level in the kids mode is preceded by a fun quiz. Just to give you a taste, these are some of the questions I encountered in the fun quiz:
  • Which baby animal can be called a kid? Goat
  • A group of these animals can be called a mob. - I forgot the answer ;-)
  • A group of these animals can be called a pride. Lions
  • Which baby animal can be called a gosling? Goose
  • Which animal's baby can be called a snakelet? Snake
  • A group of these animals can be called a parliament. Owls
It is clear that the developers behind this game had a dual purpose in mind while creating it: to educate and entertain. For instance, there are bonus levels in the game where the player has to break the numbered objects in the right order - a good way to teach little kids to count.

Fig: Break the numbered crates in order

The story is good, the game play is simple but entertaining, and the graphical effects are outstanding, which makes this a very good game for adults and children alike.

FizzBall game features
  • Over 180 unique levels of game play.
  • Your progress is automatically saved when you exit the game, so you can continue where you left off the next time you play.
  • Multiple users can be created and each user's game is saved separately.
  • There are two modes - Regular mode and Kids mode. In the Kids mode you cannot lose the bubble, and fun quizzes appear between levels.
  • If you lose all your bubbles, you can still continue with the game, though your score is reset.
  • Get trophies for achieving unique feats. For example, I received a trophy for capturing an alien without getting hit by a laser :-) .
Running FizzBall in GNU/Linux
The GNU/Linux version of the game is packaged as a gzipped archive. All you have to do is unpack the archive and run the script named run.sh, and the game will commence.
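Assuming the archive is named fizzball.tar.gz and unpacks into a fizzball directory (the actual names will vary with the version you download), the steps look like this:

$ tar -xzf fizzball.tar.gz
$ cd fizzball
$ ./run.sh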

Pros of the game
  • Eye-catching design and excellent graphics.
  • Educational for little kids as well as entertaining for all ages.
  • Over 180 levels in both the regular and kids mode of the game.
Cons of the game
The game is not released under the GPL; the full version costs US $19.95. A time-limited demo version is available, though, for trying out before buying. But having played the full game, I would say the money is well spent.

The good news is that professional game developers are seriously eyeing the Linux OS alongside Windows as a viable platform for releasing their games, FizzBall being a case in point.

Sunday, 26 November 2006

Richard M Stallman talks on GPL version 3 at the 5th International GPLv3 Conference in Japan

The fifth international GPLv3 conference was held on the 22nd and 23rd of November in Akihabara, Tokyo, Japan. A couple of months back, RMS had spoken at the 4th international GPLv3 conference held in Bangalore, India. These conferences are part of a series of events organized by the Free Software Foundation to enlighten the public about the upcoming new version of the GPL - more specifically, to make them aware of how GPLv3 will better safeguard their freedom vis-a-vis the software they use.

In Tokyo too, RMS gave a talk concentrating on the upcoming GPLv3 and the major changes being considered for the license in its current form. fsfeurope.org is running a transcript of Mr Stallman's talk in Tokyo, which is a must-read for any GNU/Linux enthusiast.

He dwelt in depth on a variety of topics: the differences between GPLv1 and GPLv2, and the changes aimed at GPLv3 such as better support for internationalization, better license compatibility with the Apache and Eclipse licenses, preventing tivoisation, fighting software patents by carrying an explicit patent license, and a few other things.

It is really simple when you look at the logic provided by RMS. He is not concerned about any particular OS or software... rather, his number one priority is to preserve the freedoms enjoyed by the people who use Free Software, in such a way that nobody can hold the Free Software Movement to ransom. Today Linux is the darling of many corporates, with many heavyweights jumping on the Linux bandwagon. For any business, the fundamental aim is to make money. And with Linux becoming a viable platform, businesses are slowly realizing the advantages of embracing it. The only irritant standing in their way is the GPL license, which they could do without. RMS and the Free Software Foundation are working to safeguard the GPL by plugging its loopholes, so that it is not possible to circumvent it and thus compromise any of the freedoms the GPL guarantees.

Thursday, 23 November 2006

Making the right decisions while buying a PC

With the speed at which advances are made on the technological front, I sometimes wonder whether buying an electronic product now is a good decision - especially since, if I choose to wait a couple more months, I could get an even better product with more features at more or less the same price.

This is especially true when buying a PC. On the one hand, applications demand ever more processing power and memory to run at their optimal level; on the other, hardware prices are falling steeply. So if I go out to buy a PC, I have to make sure it will meet my needs for at least the next one and a half to two years... after which it will be time to either upgrade - if I was lucky enough to have bought a PC designed with expansion in mind - or discard the PC and buy a new one.

So what should you watch out for if you are seriously considering buying a PC now? Thomas Soderstrom has written a very informative article which throws light on the components one should select for one's PC. He touches on the cases to be used - full towers, ATX, mini ATX, shuttle form factor and so on - the best processor (CPU), the type of interface slots on the motherboard, the memory, the capacity of the hard drive and more.

The gist of his choice filters down to the following:
  • ATX tower case - capable of holding a full-size motherboard, with space for several optical drives; ideal for home users and gaming enthusiasts.
  • CPU - As of now, the Intel Core Duo provides the best power-performance-price ratio. Enough applications have been optimized for dual-core chips that these should be considered for any moderate to heavy use, especially when multitasking.
  • Always go for motherboards that have PCI Express slots over the ordinary PCI slots, which are fast becoming outdated.
  • As for memory (RAM), your best bet is at least DDR-400, though ideally DDR2-800 is recommended. And don't even think of a machine with less than 512 MB of RAM. The article strongly recommends 2 GB of memory if you can afford it, as near-future applications and OSes will demand that much.
  • On the storage front, if you are in the habit of archiving video or hoarding music on your hard disk, consider a hard disk of at least 150 GB. The article recommends Western Digital's 150 GB Raptor drives if you are on the lookout for better performance, and the Seagate Barracuda 750 GB for those after larger capacity. Both are costly, though.
  • And do go for a DVD writer over a CD-RW/DVD combo.
I remember reading an article on the best-value desktop PC in the most recent print edition of PCWorld (Indian edition) magazine. They selected the "HCL Ezeebee Z991 Core2 Duo" branded PC as the best buy from among a number of other branded PCs. This PC sports an Intel Core 2 Duo E6300 processor, 512 MB of DDR2 RAM, an optical DVD-RW drive and a 160 GB SATA hard disk.

Something I have noticed is that in India, the PCs that are advertised sport just enough memory for current needs. In fact, vendors habitually skimp on memory when selling a PC. Every day I see at least 3 to 4 advertisements selling PCs with just 256 MB of memory, and in one or two cases a measly 128 MB. The rule of thumb to follow is: the more memory, the better.

Wednesday, 22 November 2006

A peep into how Compact Discs are manufactured

Ever wonder how a CD, aka compact disc, is manufactured? There is a whole string of tasks involved in creating a compact disc. It starts with an original master disc made of glass. The glass disc is treated with two chemicals - a primer and a photoresist coating. The photoresist coating on the glass surface is then dried in an oven for 30 minutes. Next, the data that goes on the CD is etched onto the glass, and the glass is electrocoated with a thin layer of nickel and vanadium. After a few more steps, what you have is a die - a master copy. The CDs you hold in your hand are manufactured from this master copy. They are not made of glass but of polycarbonate, which is injected into the mold in liquid form to create the CDs.

One thing worth noting is that there are two different kinds of CD. One is the recordable (blank) CD; the other is the pressed CD, in which the data is stamped directly onto the disc at the time of its creation. The CDs you get along with IT magazines are examples of pressed CDs.

I found this short video of a CD being manufactured quite informative. The clip details the creation of a pressed CD.

Update (Feb 14th 2007): The Youtube video clip embedded here has been removed as I have been notified by its real owners that the video clip is copyrighted.

Sunday, 19 November 2006

Ifconfig - dissected and demystified

ifconfig - the ubiquitous command bundled with every Unix/Linux OS - is used to set up any or all of the network interfaces connected to your computer, be they ethernet, wireless, modem or others. The ifconfig command provides a wealth of information to anyone who takes the time to look at its output. Commonly, it is used for the following tasks:

1) Configuring an interface - be it an ethernet card, wireless card, loopback interface or any other. For example, in its simplest form, to set the IP address of your ethernet card, you pass the necessary options to the ifconfig command as follows:
# ifconfig eth0 192.168.0.1 netmask 255.255.255.0 broadcast 192.168.0.255 up
Here 192.168.0.1 is the IP address of your machine (I have used a private IP address), 255.255.255.0 is the network mask which decides the potential size of your network, and 192.168.0.255 is the broadcast address. Lastly, the 'up' keyword is the flag which activates the interface, making it ready to receive and send data.

2) Gathering data about the network of which your computer is a part.
When used without any parameters, ifconfig shows details of the network interfaces that are up and running on your computer. On my machine, which has a single ethernet card and a loopback interface, I get the following output.

eth0 Link encap:Ethernet HWaddr 00:70:40:42:8A:60
inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST NOTRAILERS RUNNING MULTICAST MTU:1500 Metric:1
RX packets:160889 errors:0 dropped:0 overruns:0 frame:0
TX packets:22345 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:33172704 (31.6 MiB) TX bytes:2709641 (2.5 MiB)
Interrupt:9 Base address:0xfc00

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:43 errors:0 dropped:0 overruns:0 frame:0
TX packets:43 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3176 (3.1 KiB) TX bytes:3176 (3.1 KiB)
As you can see, it throws up a lot of data, most of it providing one detail or another. Let's look at the data spewed out by the ifconfig command, field by field, for the ethernet device.
  • Link encap:Ethernet - This denotes that the interface is an ethernet device.
  • HWaddr 00:70:40:42:8A:60 - This is the hardware address or MAC address, which is unique to each ethernet card manufactured. Usually the first half of this address is the manufacturer code, common to all ethernet cards from the same manufacturer, and the rest is a device ID which no two devices manufactured at the same place should share.
  • inet addr - indicates the machine's IP address
  • Bcast - denotes the broadcast address
  • Mask - is the network mask which we passed using the netmask option (see above).
  • UP - This flag indicates that the interface has been activated by the kernel.
  • BROADCAST - Denotes that the ethernet device supports broadcasting - a necessary characteristic for obtaining an IP address via DHCP.
  • NOTRAILERS - Indicates that trailer encapsulation is disabled. Linux ignores trailer encapsulation, so this flag has no effect at all.
  • RUNNING - The interface is ready to accept data.
  • MULTICAST - This indicates that the ethernet interface supports multicasting. Multicasting is best understood by analogy with a radio station: multiple devices can capture the same signal from the station, but only if they tune to the right frequency. Multicast allows a source to send packets to multiple machines, as long as those machines are watching out for them.
  • MTU - short for Maximum Transmission Unit - is the maximum size of each packet handled by the ethernet card. The MTU of ethernet devices is set to 1500 by default, though you can change the value by passing the necessary option to the ifconfig command. Setting it too high risks packet fragmentation or buffer overflows. Do compare the MTU of your ethernet device with that of the loopback device; usually the loopback device supports a larger packet length.
  • Metric - This option can take values 0, 1, 2, 3 and so on; the lower the value, the higher the priority of the device. This parameter has significance only while routing packets. For example, if you have two ethernet cards and you want your machine to favor one of them when sending data, you can set the Metric value of the favored card lower than that of the other. I am told that in Linux, setting this value using ifconfig has no effect on which card is chosen, as Linux uses the Metric value in its routing table to decide the priority.
  • RX packets, TX packets - The next two lines show the total number of packets received and transmitted, respectively. As you can see in the output, the total errors are 0, no packets are dropped and there are no overruns. If you find the errors or dropped values greater than zero, it could mean that the ethernet device is failing or that there is congestion in your network.
  • collisions - The value of this field should ideally be 0. If it has a value greater than 0, it could mean that the packets are colliding while traversing your network - a sure sign of network congestion.
  • txqueuelen - This denotes the length of the device's transmit queue. You usually set it to smaller values for slower devices with high latency, such as modem links and ISDN.
  • RX bytes, TX bytes - These indicate the total amount of data that has passed through the ethernet interface in either direction. Taking the above example, I can fairly assume that I have used 31.6 MiB downloading and 2.5 MiB uploading, a total of about 34.1 MiB of bandwidth. As long as network traffic passes through the ethernet device, both the RX and TX bytes keep increasing.
  • Interrupt - From this data, I learn that my network interface card is using interrupt number 9. This is usually set by the system.
The values of almost all the options listed above can be modified using the requisite ifconfig options. For example, you can pass the 'trailers' option to the ifconfig command to enable trailer encapsulation, or change the packet size using the 'mtu' option along with the new value, and so on - a few examples follow below. But in the majority of cases, the default values are just fine.
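Sticking with the eth0 device from the output above, here are a few illustrative invocations (all standard ifconfig options; run them as root):
# ifconfig eth0 mtu 1400
... sets a smaller maximum packet size on eth0.
# ifconfig eth0 metric 1
... sets the interface metric (as noted above, Linux may ignore this in favor of the routing table).
# ifconfig eth0 down
... deactivates the interface; replacing 'down' with 'up' re-activates it.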

Learning to use the right command is only a minuscule part of a network administrator's job. The major part lies in analyzing the data returned by the command and arriving at the right conclusions.

Wednesday, 15 November 2006

LinuxBIOS - A truly GPLed Free Software BIOS

A few months back, I posted an article related to the BIOS which described its functions. BIOS is an acronym for Basic Input Output System, and it is the starting point of the boot process in your computer. One disadvantage of the proprietary BIOSes embedded in most PCs is that they carry a good amount of code to support legacy operating systems such as DOS; the end result is a longer time taken to boot up and pass control to the resident operating system.

This time can be significantly reduced if the code pertaining to legacy OSes is removed, especially if you intend to install and use a modern OS, which tends to do all the hardware probing and load its own hardware drivers anyway. On a PC running a modern OS such as one of the BSDs, Linux or Windows, the BIOS does little but provide information, and much of that information will not even be used. On such machines, all the BIOS really has to do is load the bootstrap loader, or bootloader, and pass control to the resident OS.

One project which intends to give BIOS makers such as Phoenix and Award a run for their money is the LinuxBIOS project. LinuxBIOS aims to replace the normal BIOS found on PCs, Alphas and other machines with a Linux kernel that can boot Linux from a cold start. The trick LinuxBIOS uses is an embedded Linux kernel to load the main OS. Some of the benefits of LinuxBIOS over the more common BIOSes, as listed on their website, are as follows (and I quote):

  • 100% Free Software BIOS (GPL)
  • No royalties or license fees!
  • Fast boot times (3 seconds from power-on to Linux console)
  • Avoids the need for a slow, buggy, proprietary BIOS
  • Runs in 32-Bit protected mode almost from the start
  • Written in C, contains virtually no assembly code
  • Supports a wide variety of hardware and payloads
  • Further features: netboot, serial console, remote flashing, ...
The LinuxBIOS project has been making rapid inroads into general acceptance by computer manufacturers. One of its major breakthroughs was being selected by the One Laptop per Child project for inclusion in its laptop meant for use by children. But the hot news fresh out is that Google - the search engine giant - has joined the fray by deciding to sponsor the LinuxBIOS project. As of now, LinuxBIOS supports a total of 121 motherboards from 58 vendors.

You can watch a video of LinuxBIOS booting Linux on a rev board below:


Sunday, 12 November 2006

Is Free Software the future of India? Steve Ballmer, CEO of Microsoft, answers...

The occasion was a talk show hosted by NDTV 24x7 - a premier cable television news channel in India. The discussion centered on the topic "Bridging the digital divide between the urban rich and rural poor in India". The panel comprised distinguished personalities including Steve Ballmer, the CEO of Microsoft; N.R. Narayana Murthy, Chairman of Infosys Technologies; Ashok Jhunjhunwala, professor of Electrical Engineering at IIT Chennai; and Malvinder Mohan Singh, the chief executive and MD of Ranbaxy Laboratories. The talk was hosted by NDTV's Prannoy Roy. The very first question asked of Steve Ballmer was the following: Is Free Software the future of India?

Taking care not to use the words "Free Software", Mr Ballmer conceded that a number of revenue streams - including selling hardware, internet connectivity and software - are important. He went on to say, "As rich and good as bridging the digital divide may be, software companies should look forward to three or four sources of income. Many revenues for software companies will come not from any one thing but will include subscription fees, lower cost hardware, advertising and of course traditional transactions (read: proprietary software)". He does agree that "prices must come down", though it was plain to see him take care not to use the word "FREE" in his answer.

Another question posed to him was "Is bridging the rural divide all about money?". Mr Ballmer answered, "It is not just about money, but also not about short term profits". In short, Microsoft is looking for long term profits.

And when asked, "The American government spearheads democracy. Are American businesses in tune with that?", he answered as follows: "Any multi-national should behave appropriately and lawfully in any country in which it does business. But our primary aim is to have a generally more helpful participation in the world economy". He went on to say, "You can do three things... you can stay in and do nothing, stay in and have a point of view, or stay out".

Watching the talk show, I could not help thinking that Microsoft is more or less resigned to the fact that Open Source and Free Software are here to stay - whatever one might do, you cannot easily wish them away. If you can't beat them, join them is the new mantra at Microsoft, the recent news of Microsoft's acquisition (sic) of (um... partnership with) Novell being a case in point. But I was left with the feeling that Microsoft needs to be more honest and forthright in acknowledging the very important part Free Software and Linux play in the overall big picture in IT. Steve Ballmer was on a three-day visit to India; his itinerary included calling on the Indian Prime Minister Dr Manmohan Singh to discuss Microsoft's future plans for India.

Friday, 10 November 2006

Book Review: Ubuntu Hacks

I recently got hold of a very nice book on Ubuntu called Ubuntu Hacks: Tips & Tools for Exploring, Using, and Tuning Linux, co-authored by Kyle Rankin, Jonathan Oxer and Bill Childers. This is the latest in the Hacks series of books published by O'Reilly. They made a rough-cut version of the book available online ahead of schedule, which is how I got hold of it, but as of now you can also buy the book in print. In a nutshell, this book is a collection of around 100 tips and tricks - which the authors choose to call hacks - explaining how to accomplish various tasks in Ubuntu Linux. The so-called hacks range from the downright ordinary to the other end of the spectrum: doing specialised things.

Tuesday, 7 November 2006

A list of Ubuntu/Kubuntu repositories

Back when I was using Red Hat (Fedora), one of my favourite repositories was Dag Wieers', not only because the official Red Hat repository was dead slow due to excess traffic, but also because it contained a number of additional RPM packages missing from the official repositories, such as those with support for proprietary file formats. It was the culmination of my search for additional repositories to include in my Yum configuration file.

Nowadays this is not a problem at all, especially when you are using Ubuntu, as the repositories have been demarcated into different sections such as Universe, Multiverse and so on, depending on the type of package each one carries - that is, whether the package is released under a free license or a proprietary one. It is only a matter of enabling the desired repository and then using apt-get to install the requisite package. Still, it doesn't hurt to have a number of additional repositories apart from the ones provided officially by Ubuntu. Trevino has compiled an exhaustive collection of repositories for Ubuntu and Kubuntu which you can include in your /etc/apt/sources.list file. A word of caution is in order, though: since these are unofficial repositories, it is difficult to verify the integrity of the packages, so use them at your own risk.
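For reference, each repository in /etc/apt/sources.list is a single 'deb' line. A minimal sketch using the official Ubuntu archive and the Edgy release current as I write (substitute your own release name, or a line from Trevino's list):

deb http://archive.ubuntu.com/ubuntu/ edgy universe multiverse

After saving the file, run 'sudo apt-get update' so that apt refreshes its package lists before you install anything.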

Monday, 6 November 2006

Learning to use netcat - The TCP/IP swiss army knife

nc - short for netcat - is a very useful tool available on most POSIX OSes which allows one to transfer data across the network via TCP/UDP with ease. The principle is simple: there is a server (listen) mode and a client mode. You run netcat in listen mode on a particular port on one machine, and run netcat in client mode on another machine, connecting to that port; data then flows between the two. The basic syntax of netcat is as follows:

For the server:
nc -l <port number>
... where the -l option stands for "listen", and the client connects to the server machine as follows:
nc <server ip address> <port number>
And in what ways can you put it to use? For one,
  • You can transfer files by this method between remote machines.
  • You can serve a file on a particular port on a machine and multiple remote machines can connect to that port and access the file.
  • Create a partition image and send it to the remote machine on the fly.
  • Compress critical files on the server machine and then have them pulled by a remote machine.
  • And you can do all this securely using a combination of netcat and SSH.
  • It can be used as a port scanner too, via the -z option.
To see how all the above tasks are accomplished, check out the very nice compilation by G. Notaras, who provides a number of netcat examples. Just remember, the actual command is called 'nc' and not netcat.
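As a quick taste, here is a minimal sketch of the first task - transferring a file between two machines. The file name, port number and IP address are illustrative, and note that some netcat variants want the listening port written as 'nc -l -p 1234' instead of 'nc -l 1234'.

On the receiving machine, listen on a port and redirect whatever arrives into a file:
nc -l 1234 > backup.tar.gz
On the sending machine, connect to that port and feed it the file:
nc 192.168.0.1 1234 < backup.tar.gz

And a simple scan of ports 20 through 80 on another machine, using the -z option mentioned above:
nc -z -v 192.168.0.1 20-80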

Sunday, 5 November 2006

AptonCD - Create a backup of all the packages you have installed using apt-get

Consider this scenario... you are interested in installing GNU/Linux on your machine. Assuming you already have the latest version burned onto a CD, it is a simple affair of popping the CD into your drive and starting the installation. But once the installation is finished, you will most certainly want to install additional software beyond what is bundled on the CD. And if you are using a Debian-based Linux distribution such as Ubuntu, you will be using the apt-get method. Over a period of time you will have installed a number of additional packages - including any packages needed to satisfy their dependencies - as well as upgraded some software to the most recent version.

The problem occurs when you decide to re-install Linux on your machine: you are forced to start all over again, downloading the additional software using apt-get. Personally, I have re-installed Debian or a Debian-based Linux distribution umpteen times on my machine. And each time, I have wished there were a simple way of backing up the packages I had previously downloaded and installed via apt-get.

A good samaritan has pointed me to a unique project named AptonCD which allows one to create a CD image (ISO) of all the packages downloaded via apt-get, or even of the packages in a given repository.

On Ubuntu for instance, you can install it using the command:
$ sudo apt-get install aptoncd
Once it is installed, you will find a Gnome menu entry at System -> Administration -> AptonCD. Clicking on it opens a GUI which aids in the creation of an ISO image of all the packages stored in the /var/cache/apt/archives directory, along with any other files which are needed. You can also start the program by running aptoncd from the command line.

So how do you use the program?

It is simple really: the GUI has two tabs, namely Create and Restore. The Create tab has a single button which, when clicked, gathers all the necessary packages from the /var/cache/apt/archives directory and displays them in a pop-up dialog. Here you get to decide whether to add any additional packages stored in an alternate location, or to remove some of the already selected packages. There is also an option to set the target media as a CD or DVD, and to choose the location where you want to save the resultant image.

Fig: The create tab has just a single button

Fig: Selectively include the packages using this dialog

Once the choices are made, the program creates the necessary CD/DVD image and saves it in the location you had chosen. Now you can either store it in a different location or burn it to a CD/DVD.

Fig: Restore tab allows you to restore from backup

The Restore tab of the AptonCD GUI contains three buttons, each catering to a specific purpose. This tab allows you to -
  • Restore all the packages available on an AptonCD medium (read: CD or DVD) onto the computer.
  • Restore packages from an AptonCD ISO image previously generated and stored locally.
  • Add a CD/DVD you created as a repository for apt-get, aptitude or synaptic. This means the program adds the necessary lines to the /etc/apt/sources.list file, enabling you to use apt-get or any similar program to install the software on the CD (see the note below).
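Incidentally, if you ever need to do that last step by hand, the stock apt-cdrom tool serves the same purpose: it scans the disc in the drive and appends a matching entry to /etc/apt/sources.list.

$ sudo apt-cdrom add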
I found this program really convenient to use not only when I re-install Ubuntu but also when I want to install the same set of programs on a different machine.

One thing it lacks is a way to automatically download packages from a remote repository and create a CD/DVD image from them. But then, this software is still in its beta stage, and hopefully we will see more features built into it in the coming years.

The AptonCD project is the brainchild of Rafael, whose first language (I believe) is Portuguese. Not surprisingly, I found the help files bundled with the project to require a bit more work; in their present form, they are just placeholders for the required documentation. But he has done a remarkable job on the software itself, and in its current form it works flawlessly.

Friday, 3 November 2006

A talk with Jon Maddog Hall - the spokesman for the open source community and president of Linux International

Jon Hall, president of Linux International, is a passionate spokesman for the open source community and its ideals. Over a 30-plus-year career, Hall has been a programmer, systems designer, systems administrator, product manager, technical marketing manager, author, consultant to local, state and national governments worldwide, and college educator. He is currently an industry consultant. While at Digital, he became interested in Linux and was instrumental in obtaining equipment and resources for Linus Torvalds to accomplish his first port, to Digital's Alpha platform. Hall serves on the boards of several companies and several nonprofit organizations. At the U.K. Linux and Open Source Awards 2006, he was honored with a Lifetime Recognition Award for his services to the open source community.

Scott Laningham interviews Jon Maddog Hall, quizzing him on the progress of and challenges for open source, and on the need to recapture a purer vision of education. It is fascinating to hear Maddog speak (mp3 - 11MB) about his experiences (mp3 - 7MB) and thoughts on Linux, and on the efforts being put into making businesses understand the concepts of open source and free software. You can also read the full transcript of the talk here.

Wednesday, 1 November 2006

Drupal 5.0 beta scaling new heights in the Content Management Systems arena

There was a time when publishing content online required fairly good technical knowledge, even when the data resided in static HTML pages. Then the blogging revolution happened, and the rest, as they say, is history. Nowadays it is possible to publish content online without an iota of knowledge of HTML. The person writing an article can free his mind to concentrate fully on what he is writing, as the technical aspects of publishing are taken care of for him. And catering to this new-found craze for publishing content online, a plethora of content management systems have sprung up, most of them released under a free license.

One of my hobbies is trying out different blogging and content management systems, and I have tried a whole lot of them. In a previous article, I explained how to set up and configure a Wordpress blog on one's personal machine. But whereas Wordpress is purely a blogging tool, a content management system is much more than that - in fact, a blog forms only one part of a content management system (CMS). To qualify as a CMS, an application should support other features such as forums, fine-grained access control and a rich collection of modules which extend the functionality of the site.

One such content management system is Drupal. I have been an ardent fan of the Drupal project from the first time I tried it out, which was version 4.6. As much as I liked Drupal, it did have a big drawback: it was quite difficult to install on a remote host without shell access. Mind you, with shell access the installation as such was a simple, painless affair; without it, things became a bit complicated. So I found that while most web hosting providers were touting one-click installs of other CMSes and blogging tools, they were silent about their support for Drupal. The end result was that you had to do a lot of research before choosing a web hosting provider if you intended to use Drupal for your site.

Things have changed for the better, though, with the beta release of the latest version of Drupal, version 5.0. Even though Drupal 5.0 is still in beta, with a formal release happening once all the bugs are ironed out, I did not run into any problems while installing and using it on my machine.

One of the foremost features of Drupal 5.0 beta is its support for an entirely web-based installation. The Drupal developers have finally heeded the requests for this and many other features and have delivered on their promises. I am happy to say that I found the Drupal two-click installation as painless and simple as a Wordpress installation.

Once the installation is finished, the first thing that will strike you is the default theme. Drupal now sports a polished new theme called Garland, which is a fluid theme; a fixed-width version of the same theme, named Minnelli, is also provided. One advantage of the new themes is the ease with which you can change the colors of various elements - the header, content and links - with the help of a color scheme dialog. Drupal offers 7 preset color schemes, but there is an option to create a custom color scheme too.

Fig: Set your own color scheme to your site

The administration section has been significantly overhauled and has seen major visual changes, all for the better. It is now possible to have one theme for the administration section and a separate one for the rest of the site, a la Joomla - a big step forward, considering that some themes messed up the rendering of the administer section even while looking beautiful on the rest of the site. For example, if I want to change the theme of the administration section, I can navigate to Administer -> Site configuration -> Administration theme and select the one of my choice from a drop-down list.

Fig: The administer page has seen a visual overhaul

It is now easier to navigate across different settings in the administer section, as it is broadly divided into five intuitive sections: Content Management, Site Building, Site Management, User Management and Logs. The contents of the administer section can also be sorted by task or by module.

Drupal 5.0 beta makes extensive use of jQuery (a popular JavaScript library) to impart more functionality to many aspects of Drupal.

The default Drupal 5.0 setup does not ship with a WYSIWYG editor for content creation, but you can install the Tinymce module to use the popular TinyMCE editor for the purpose. I think it would have been better if Drupal had made it available in the default setup; after all, the primary purpose of a CMS is to make it as easy as possible to publish content online without getting bogged down in diverse technical vagaries.
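For those who have not installed a Drupal contributed module before, here is a rough sketch of the usual drill in Drupal 5. The tarball name and the path to the Drupal root are assumptions; check the module's page on drupal.org for the actual release file.

$ cd /var/www/drupal/sites/all/modules
$ wget http://ftp.drupal.org/files/projects/tinymce-5.x-1.0.tar.gz
$ tar -xzf tinymce-5.x-1.0.tar.gz

Then enable the module under Administer -> Site building -> Modules. Note that the module is only the glue between Drupal and the editor; the TinyMCE editor itself has to be downloaded separately and placed inside the module's directory, as the module's documentation describes.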

One feature I would like the Drupal developers to work on is keeping the CSS class and id names in the themes consistent across versions, even when adding new features in upcoming versions. The current problem is that if you have created a very beautiful theme for Drupal 4.7, you will have to do some major tweaking to get it to work well in Drupal 5.0 beta.

All in all, version 5.0 beta is a big step in the right direction for this very popular content management system. Notwithstanding the beta tag, I found Drupal 5.0 to be remarkably stable and to provide a lot of improvements over earlier versions.