Friday 30 September 2005

Change the system login banner in Linux

The login banner is the message you see just above the login prompt on the console. You can change it in Linux by editing the /etc/issue file.

/etc/issue is a text file containing a message or system identification to be printed before the login prompt.

I want the login message to regenerate each time the system reboots. These are the steps I followed to achieve it.

Open the /etc/rc.local file and insert, just above the line ...

#File: /etc/rc.local
...
touch /var/lock/subsys/local

... the following code (which generates the message I want shown above the login prompt):

echo "Welcome to \n" > /etc/issue
echo "All access to this system is monitored" >> /etc/issue
echo "Unauthorized access is prohibited" >> /etc/issue
echo >> /etc/issue
echo "Last reboot completed at $(/bin/date)" >> /etc/issue

Save and quit the file. That is it. Now each time you reboot, your message will be shown on the login console.

Explanation
The login task is managed by a daemon called mingetty. Each time you log out or reboot your machine, mingetty reads the message in the file /etc/issue and displays it just above the login prompt on the console. In the above example, mingetty expands the escape sequence '\n' to your machine's hostname.

mingetty recognizes the following escape sequences which might be embedded in the /etc/issue file:

\d - Current day (local time)
\l - Line (terminal) on which mingetty is running
\m - Machine architecture (uname -m)
\n - Machine's network node hostname (uname -n)
\o - Domain name
\r - Operating system release (uname -r)
\t - Current time (local time)
\s - Operating system name
\U - Number of users currently logged in
\v - Operating system version
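
For instance, instead of regenerating /etc/issue from rc.local, you could keep a static /etc/issue that uses these escape sequences directly. A small sketch (the wording is only an example):

#File: /etc/issue
Welcome to \n (\s \r on an \m machine)
All access to this system is monitored
Unauthorized access is prohibited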


Note: If you have booted into runlevel 5, then press Ctrl-Alt-F1 to view your virtual console.

Thursday 29 September 2005

Enabling centralized logging in Linux

Here is a tip to make your machine save logs to a remote machine (remote_mc) instead of logging locally. For this to succeed, you have to make changes both to the remote machine, which accepts the logs on behalf of your local machine, and to the local machine itself.

On the remote machine enable remote logging
Set up syslogd to accept remote messages. Edit the /etc/sysconfig/syslog file and insert the following line:

#File: /etc/sysconfig/syslog
SYSLOGD_OPTIONS="-r -m 0"


The file is liberally commented. '-r' enables the reception of remote messages and '-m 0' disables the periodic "MARK" messages.

Restart syslogd
# service syslog restart

Now the remote machine will accept log messages from other machines.

On your local machine which sends the logging message
Edit the /etc/syslog.conf file to direct the logging messages to the remote machine (remote_mc).

#File: /etc/syslog.conf
...
*.emerg;user.*;kern.err @remote_mc
...

Here I have chosen to send all emergency messages, all logs generated by user programs, and any kernel errors to the remote machine.

Lastly for the changes to take effect, restart the syslog daemon on your local machine.
# service syslog restart

Note: This tip applies to Red Hat based systems, but it can also be used on Debian based systems with some modifications.

Testing your setup
Generate a log message on your local machine using the logger command:

$ logger -i -t ravi "I am just testing this. This message can be ignored."

logger is a shell command which makes entries in the system log. It provides a shell interface to the syslog system log module. In the above command, the -i option logs the process ID of the logger process on each line, and the -t option tags every line in the log with the string 'ravi'.

Now go and check on the remote machine (remote_mc) to see if the logs have been generated.

remote_mc $ cat /var/log/messages | grep ravi

Also read : System logging explained in Linux

Wednesday 28 September 2005

System Logging explained in Linux

Log files form the lifeline of any system administrator. They help pinpoint any discrepancies in the day-to-day functioning of the OS. Naturally, Linux has an excellent logging facility whose work is done by the syslogd and klogd daemons. In Red Hat/Fedora, you start these daemons with the command:

# service syslog start

The above command will start both syslogd and klogd daemons. These daemons will read the configuration file /etc/syslog.conf and start logging messages accordingly.

syslogd - receives messages from many daemons.
klogd - logs kernel messages.

What is the use of monitoring log files?

Monitoring log files will help detect the following:
  1. Equipment problems such as hard disk crashes or power outages.
  2. User problems such as repeated login failures.
  3. Security breaches from outside the system.
Most common log files and their purposes
/var/log/messages - Logs most system messages
/var/log/secure - Authentication messages, xinetd services
/var/log/vsftpd.log - FTP transactions (this file will usually be named differently if you are using an FTP server other than vsftpd).
/var/log/maillog - Mail transactions.

The information contained in /var/log/messages includes the following:
  • Date and time the message was written.
  • Name of the utility, program or daemon that caused the message.
  • Action that occurred.
  • Hostname of the machine on which the program was executing.
Note: Many applications also create their own log files which may also need to be monitored.

Syslogd and Klogd configuration
These two daemons are configured using the /etc/syslog.conf file. The format of the file is quite simple, as shown below:

#Syntax of syslog.conf file
facility.priority log_location


... where facility can be any of the following:
  • authpriv - security / authorization messages
  • cron - clock daemons (atd and crond)
  • daemon - other daemons
  • kern - kernel messages
  • local[0-7] - reserved for local use
  • lpr - printing system
  • mail - mail system
  • news - news system
  • syslog - internal syslog messages
  • user - generic user level messages
... and the priorities are as follows:
  • debug - debugging information
  • info - general informative messages
  • notice - normal, but significant, condition
  • warning - warning messages
  • err - error condition
  • crit - critical condition
  • alert - immediate action required
  • emerg - system no longer available
A few examples to whet your appetite

kern.info /dev/tty0

The above rule directs kernel messages of priority info and higher to the first console. For example, after entering this rule and restarting syslogd and klogd, try restarting a service. You will find the message on your /dev/tty0 console.

mail.crit ravi,root
This will send all critical (and higher priority) messages pertaining to mail to the terminals of the users ravi and root, if they are logged in.

*.emerg *
Everybody gets emergency messages from all facilities.

kern.*;kern.!=info;mail.*;mail.!=debug /var/log/my_special_messages
Log all kernel messages except those of exactly priority info, and all mail messages other than debug, to the file my_special_messages.

*.info;authpriv.none;cron.none /var/log/messages
Log all messages of priority info and above to /var/log/messages, but keep private authentication and cron messages out of it.

Note: As shown in the examples above, logging can be further specified with certain operators.
  • = - log on only this exact priority
  • ! - Exclude this facility or priority
  • * - Log all facilities / priorities
For the log location, you can also specify a comma-separated list of users who will be notified, or a named pipe for use with external logging programs (|/name/of/pipe). The pipe has to exist before syslogd starts.
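
For instance, a hedged sketch of a rule that notifies users and another that feeds an external program through a named pipe (the pipe path and user names here are just examples, not defaults):

#File: /etc/syslog.conf
# send critical kernel messages to the terminals of these users, if logged in
kern.crit ravi,root
# feed every message to an external log analyser through a pre-created named pipe
*.* |/var/run/syslog-pipe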

As you can see, Linux has a robust logging mechanism, and its strong point is that it lets you change its behaviour by editing plain text files - /etc/syslog.conf in this case. Keep in mind that each time you make changes to the syslog.conf file, you have to restart the syslog daemon for those changes to take effect.

Tuesday 27 September 2005

Securing your computer running Linux

A few days back, an acquaintance of mine mentioned that their office server running Linux got hacked and a lot of data was lost. He suspected it was an inside job. This set me thinking. Normally a machine is only as secure as its environment, and however robust an OS may be (Linux included), it would still be vulnerable if certain basic guidelines are not followed. Here I will explain some of the steps that will help make your computer running Linux secure.
Physical Security
The first layer of security you need to take into account is the physical security of your computer systems. Some relevant questions you could ask yourself while designing security are -
  • Who has direct physical access to your machine and should they?
  • Can you protect your machine from their tampering ?
How much physical security you need on your system depends very much on your situation (company policies, how critical the stored data is, etc.) and/or your budget.

If you are a home user, you probably don't need a lot (although you might need to protect your machine from tampering by children or annoying relatives). If you are in a lab, you need considerably more, but users will still need to be able to get work done on the machines. Many of the following sections will help out. If you are in an office, you may or may not need to secure your machine off-hours or while you are away. At some companies, leaving your console unsecured is a termination offense.

Obvious physical security methods such as locks on doors, cables, locked cabinets, and video surveillance are all good ideas.
Computer locks
Many modern PC cases include a "locking" feature. Usually this will be a socket on the front of the case that allows you to turn an included key to a locked or unlocked position. Case locks can help prevent someone from stealing your PC, or opening up the case and directly manipulating/stealing your hardware. They can also sometimes prevent someone from rebooting your computer from their own floppy or other hardware.

These case locks do different things according to the support in the motherboard and how the case is constructed. On many PCs they make it so you have to break the case to get it open. On some others, they will not let you plug in new keyboards or mice. Check your motherboard or case instructions for more information. This can sometimes be a very useful feature, even though the locks are usually very low-quality and can easily be defeated by attackers with basic locksmithing skills.

Some machines (most notably SPARCs and Macs) have a dongle on the back through which you can loop a cable, so attackers would have to cut the cable or break the case to get into the machine. Just putting a padlock or combination lock through these can be a good deterrent to someone stealing your machine.
This is different from a software protection dongle which is a hardware device that is used to prevent illegal installations of software. The software is designed so that it will only operate when the hardware device is attached. Therefore only users who have the proper hardware device attached can operate the software. If the software is installed on another machine it will not work because the hardware device is not attached. Hardware keys are distributed to end users corresponding to the number of seat licenses purchased (Sorry to digress).


Figure: Hardware dongles used to secure the PC
BIOS Security
The BIOS is the lowest level of software that configures or manipulates your x86-based hardware. LILO and other Linux boot loaders access the BIOS to determine how to boot up your Linux machine. Other hardware that Linux runs on has similar software (Open Firmware on Macs and new Suns, Sun boot PROM, etc...). You can use your BIOS to prevent attackers from rebooting your machine and manipulating your Linux system.

Many PC BIOSs let you set a boot password. This doesn't provide all that much security (the BIOS can be reset, or removed if someone can get into the case), but might be a good deterrent (i.e. it will take time and leave traces of tampering). Similarly, on SPARC(tm) machines running Linux, the EEPROM can be set to require a boot-up password. This might slow attackers down.

Another risk of trusting BIOS passwords to secure your system is the default password problem. Most BIOS makers don't expect people to open up their computer and disconnect the battery if they forget their password, so they have equipped their BIOSes with default passwords that work regardless of your chosen password. These passwords are quite easily available from manufacturers' websites, and as such a BIOS password cannot be considered adequate protection from a knowledgeable attacker.

Many x86 BIOSs also allow you to specify various other good security settings. Check your BIOS manual or look at it the next time you boot up. For example, some BIOSs disallow booting from floppy drives and some require passwords to access some BIOS features.

Note: If you have a server machine, and you set up a boot password, your machine will not boot up unattended. Keep in mind that you will need to come in and supply the password in the event of a power failure.

Boot Loader Security
Linux boot loaders like LILO and GRUB can also have a boot password set. LILO, for example, has the password and restricted settings; password requires a password at boot time, whereas restricted requires a boot-time password only if you specify options (such as single) at the LILO prompt. Nowadays most Linux distributions use GRUB as the default boot loader. GRUB can also be password protected, and the password can be encrypted so that it cannot easily be deciphered by a snooper.

Keep in mind when setting all these passwords that you need to remember them. Remember that these passwords will merely slow the determined attacker. They won't prevent someone from booting from a floppy, and mounting your root partition. If you are using security in conjunction with a boot loader, you might as well disable booting from a floppy in your computer's BIOS, and password-protect the BIOS.

Note: If you are using LILO, the /etc/lilo.conf file should have permissions set to "600" (read and write for root only), or others will be able to read your passwords!
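
As a minimal sketch of the LILO settings mentioned above (the password shown is only a placeholder; remember to re-run /sbin/lilo after editing and to keep the file at mode 600):

#File: /etc/lilo.conf
...
password=my-secret-password
restricted
...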

To password protect GRUB bootloader, insert the following line in your /boot/grub/grub.conf file.

#File: /boot/grub/grub.conf
...
password --md5 PASSWORD
...

If this is specified, GRUB disallows any interactive control, until you press the key "p" and enter a correct password. The option `--md5' tells GRUB that `PASSWORD' is in MD5 format. If it is omitted, GRUB assumes the `PASSWORD' is in clear text.

You can encrypt your password with the command `md5crypt'. For example, run the grub shell and enter your password as shown below:

# grub
grub> md5crypt
Password: **********
Encrypted: $1$U$JK7xFegdxWH6VuppCUSIb.

Now copy and paste the encrypted password to your configuration file.

GRUB also has a 'lock' command that prevents a menu entry from being booted unless the correct password has been supplied. Simply add 'lock' to the entry and it will not be accessible until the user supplies the password.
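
A sketch of how a locked entry might look in grub.conf (the title and partition below are just examples):

#File: /boot/grub/grub.conf
...
password --md5 PASSWORD
title Some other OS
    lock
    rootnoverify (hd0,1)
    chainloader +1
...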

Locking your Terminal
If you wander away from your machine from time to time, it is nice to be able to "lock" your console so that no one can tamper with, or look at, your work. Two programs that do this are xlock and vlock. But some recent Linux distributions do not ship these utilities; Fedora, for instance, does not. If you don't have them, you could instead set the TMOUT variable in your bash shell. TMOUT sets the idle timeout for your bash shell, which is particularly useful when you are working on the console. For example, I have set my TMOUT variable as follows in my .bashrc file:

#FILE: .bashrc
TMOUT=600


The value of TMOUT is in seconds. So if my shell is left idle for at least 10 minutes, it will automatically log out from my account.

If you have xlock installed, you can run it from any xterm on your console and it will lock the display and require your password to unlock. vlock is a simple little program that allows you to lock some or all of the virtual consoles on your Linux box. You can lock just the one you are working in or all of them. If you just lock one, others can come in and use the console; they will just not be able to use your virtual console until you unlock it.
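
Assuming the utilities are installed, their basic usage is simple (the -a flag below locks every virtual console, not just the current one):

$ xlock
$ vlock
$ vlock -a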

Of course locking your console will prevent someone from tampering with your work, but won't prevent them from rebooting your machine or otherwise disrupting your work. It does not prevent them from accessing your machine from another machine on the network and causing problems.

More importantly, it does not prevent someone from switching out of the X Window System entirely, and going to a normal virtual console login prompt, or to the virtual console that X11 was started from, and suspending it, thus obtaining your privileges. For this reason, you might consider only using it while under control of xdm.

Security of local devices
If you have a webcam or a microphone attached to your system, you should consider whether there is some danger of an attacker gaining access to those devices. When they are not in use, unplugging or removing such devices might be an option. Otherwise you should carefully read and audit any software which provides access to such devices. I have read a news item where a hacker remotely took control of the webcam connected to a lady's computer and was able to view the private goings-on in her room.

Detecting Physical Security Compromises
The first thing to always note is when your machine was rebooted. Since Linux is a robust and stable OS, the only times your machine should reboot is when you take it down for OS upgrades, hardware swapping, or the like. If your machine has rebooted without you doing it, that may be a sign that an intruder has compromised it. Many of the ways that your machine can be compromised require the intruder to reboot or power off your machine.

Check for signs of tampering on the case and computer area. Although many intruders clean traces of their presence out of logs, it's a good idea to check through them all and note any discrepancy.

It is also a good idea to store log data at a secure location, such as a dedicated log server within your well-protected network. Once a machine has been compromised, log data becomes of little use as it most likely has also been modified by the intruder.

The syslog daemon can be configured to automatically send log data to a central syslog server, but this is typically sent unencrypted, allowing an intruder to view data as it is being transferred. This may reveal information about your network that is not intended to be public. There are syslog daemons available that encrypt the data as it is being sent. Some things to check for in your logs:
  • Short or incomplete logs.
  • Logs containing strange timestamps.
  • Logs with incorrect permissions or ownership.
  • Records of reboots or restarting of services.
  • Missing logs.
  • su entries or logins from strange places.
Local Security
The next thing to take a look at is the security in your system against attacks from local users.
Getting access to a local user account is one of the first things that system intruders attempt while on their way to exploiting the root account. With lax local security, they can then "upgrade" their normal user access to root access using a variety of bugs and poorly set up local services. If you make sure your local security is tight, then the intruder will have another hurdle to jump.
Local users can also cause a lot of havoc with your system even if they really are who they say they are. Providing accounts to people you don't know or for whom you have no contact information is a very bad idea.
Creating New Accounts
You should make sure you provide user accounts with only the minimal requirements for the task they need to do. If you provide your son (age 11) with an account, you might want him to only have access to a word processor or drawing program, but be unable to delete data that is not his.
Several good rules of thumb when allowing other people legitimate access to your Linux machine:
  • Give them the minimal amount of privileges they need.
  • Be aware when/where they login from, or should be logging in from.
  • Make sure you remove inactive accounts, which you can determine by using the 'last' command and/or checking log files for any activity by the user.
  • The use of the same userid on all computers and networks is advisable to ease account maintenance, and permits easier analysis of log data. You might consider using NIS or LDAP for setting up centralised login for your users.
  • The creation of group user-id's should be absolutely prohibited. User accounts also provide accountability, and this is not possible with group accounts.
Many local user accounts that are used in security compromises have not been used in months or years. Since no one is using them, they provide the ideal attack vehicle.
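
A quick way to spot such dormant accounts (a sketch; lastlog may not be present on every system) is to look at the login history: last shows recent logins for an account, while lastlog prints the last login time of every account on the system.

$ last ravi
$ lastlog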

Root Security
The most sought-after account on your machine is the root (superuser) account. This account has authority over the entire machine, which may also include authority over other machines on the network. Remember that you should only use the root account for very short, specific tasks, and should mostly run as a normal user. Even small mistakes made while logged in as the root user can cause problems. The less time you are on with root privileges, the safer you will be.
Several tricks to avoid messing up your own box as root:
  1. When doing some complex command, try running it first in a non-destructive way... especially commands that use globbing: e.g., if you want to do 'rm a*.txt', first do 'ls a*.txt' and make sure you are going to delete the files you think you are. Using echo in place of destructive commands also sometimes works.
  2. Provide your users with a default alias to the rm command to ask for confirmation for deletion of files.
  3. Only become root to do single specific tasks. If you find yourself trying to figure out how to do something, go back to a normal user shell until you are sure what needs to be done by root. Using sudo can be a great help here for running certain superuser commands, like shutting down the machine or mounting a partition.
  4. The command path for the root user is very important. The command path (that is, the PATH environment variable) specifies the directories in which the shell searches for programs. Try to limit the command path for the root user as much as possible, and never include '.' (which means "the current directory") in your PATH. Additionally, never have writable directories in your search path, as this can allow attackers to modify or place new binaries in your search path, allowing them to run as root the next time you run that command.
  5. Never use the rlogin/rsh/rexec suite of tools as root. They are subject to many sorts of attacks, and are downright dangerous when run as root. Never create a .rhosts file for root.
  6. The /etc/securetty file contains a list of terminals that root can log in from. By default this is set to only the local virtual consoles (vtys). Be very wary of adding anything else to this file. You should be able to log in remotely (using SSH) as your regular user account and then su if you need to, so there is no need to be able to log in directly as root.
  7. Always be slow and deliberate running as root. Your actions could affect a lot of things. Think before you type!
If you absolutely need to allow someone (hopefully very trusted) to have root access to your machine, there are a few tools that can help. SUDO allows users to use their password to access a limited set of commands as root. This would allow you to, for instance, let a user be able to eject and mount removable media on your Linux box, but have no other root privileges. sudo also keeps a log of all successful and unsuccessful sudo attempts, allowing you to track down who used what command to do what. For this reason sudo works well even in places where a number of people have root access, because it helps you keep track of changes made.

Although sudo can be used to give specific users specific privileges for specific tasks, it does have several shortcomings. It should be used only for a limited set of tasks, like restarting a server, or adding new users. Any program that offers a shell escape will give root access to a user invoking it via sudo. This includes most editors, for example. Also, a program as harmless as /bin/cat can be used to overwrite files, which could allow root to be exploited. Consider sudo as a means for accountability, and don't expect it to replace the root user and still be secure.
Note: Some Linux distributions like Ubuntu deviate from this principle and use sudo for all system administration tasks.
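
As a hedged sketch, a sudoers entry granting one user a single privileged command (always edit the file with visudo; the user name and command here are just examples) could look like this:

#File: /etc/sudoers
ravi    ALL = /sbin/shutdown -h now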
Network Security
This includes stopping unnecessary programs from running on your machine. For example, if you are not using telnet (and rightly so), you can disable this service on your server. Also, using firewalls to restrict the flow of data across the network is very desirable (iptables and TCP Wrappers come to mind here). An alert system or network administrator will run programs like nmap and netstat to check for and plug any holes in the network.
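
For example, on a Red Hat style system you might disable the telnet service and then check which ports are still listening (a sketch; the service names on your system may differ):

# chkconfig telnet off
# service xinetd restart
# netstat -tlnp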

Saturday 24 September 2005

Backup your data with 'rsync'

Uses of rsync
  • Copying local files.
  • Copying from the local machine to a remote machine and vice versa using a remote shell program as the transport (such as ssh or rsh).
  • Copying from a remote rsync server to the local machine and vice versa.
  • Listing files on a remote machine. This is done the same way as rsync transfers except that you leave off the local destination.
For example, if I want to take a backup of my files in the remote machine called remote_mc, then I will do the following:
$ rsync -avze ssh ravi@remote_mc:/home/ravi/ . 
... where a sets archive mode, which is a quick way of saying you want recursion and want to preserve almost everything (excluding hard links, which can be included if you specify -H); v is verbose mode; the z option makes rsync compress the file data it sends to the destination machine, which is useful on slow connections; and e allows you to choose an alternative remote shell program for the communication between the local and remote copies of rsync - ssh in this case. The above command recursively copies the contents of the ravi/ directory on the remote machine into the current directory on my local machine.

Note: If I give ravi@remote_mc:/home/ravi (without the trailing slash) instead of ravi@remote_mc:/home/ravi/, rsync will recreate the ravi directory locally and copy the files into it; with the trailing slash, only the contents of the ravi/ directory are copied into the destination.

The first time I run the above command, it does a full backup, which might take a while to complete. But the real beauty of rsync shows when I run the same command the next time: it checks with the source and copies only those parts of the files which have been modified since the first backup, so it takes far less time. This is called a differential backup.

You can even sync your backup to mirror the files and directory structure by using the --delete flag.

$ rsync -avz --delete -e ssh ravi@remote_mc:/home/ravi/ .

The --delete flag deletes any files on the receiving side which are not present on the sending side. This flag should be used with care; it is advisable to first run the command with the dry-run option (-n) to see which files would be deleted and make sure no important files are listed.
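
For example, a dry run of the same mirroring command reports what would happen without copying or deleting anything:

$ rsync -avzn --delete -e ssh ravi@remote_mc:/home/ravi/ .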

Incremental backups with rsync
You can also create incremental backups with rsync. For example, look at the following command:
$ rsync -avz --backup --backup-dir=/backup --suffix=`date +"%F"`  -e ssh ravi@remote_mc:/home/ravi/ . 
Here, the --backup flag indicates that I want backups of replaced files, the --backup-dir flag specifies that they should be stored in the /backup directory, and the --suffix flag appends the current date to their names automatically.

This post dwells briefly on the various uses of rsync. For more details you may read the rsync man page.
And yes, I found this very informative article called Easy Automated Snapshot-Style Backups with Linux and Rsync which explains in detail the numerous ways of taking a backup of your data using rsync and the most efficient of them all.

Also check out ...
Other backup scenarios in GNU/Linux

Wednesday 21 September 2005

Locating files using the find command

Find is a versatile tool which can be used to locate files and directories satisfying different user criteria. But the sheer number of options for this command line tool makes it at the same time both powerful and cumbersome for the user. Here I will list a few combinations which one can use to get useful results from the find command.

Find all HTML files starting with letter 'a' in your current directory (Case sensitive)
$ find . -name a\*.html

Same as above but case insensitive search.
$ find . -iname a\*.html

Find files which are larger than 5 MB in size.
$ find . -size +5000k -type f

Here the '+' in '+5000k' indicates greater than and k is kilobytes. And the dot '.' indicates the current directory. The -type option can take any of the following values:

f - file
d - directory
l - symbolic link
c - character device
p - named pipe (FIFO)
s - socket
b - block device
Find all empty files in your directory
$ find . -size 0c -type f

... which means all files of size 0 bytes. The -size option can take the following suffixes:

c - bytes
w - 2 byte words
k - kilo bytes
b - 512 byte blocks

Note: The above command can also take the -empty parameter.

Find is very powerful in that you can combine it with other commands. For example, to find all empty files in the current directory and delete them, do the following:
$ find . -maxdepth 1 -empty -exec rm {} \;

To search for a html file having the text 'Web sites' in it, you can combine find with grep as follows:
$ find . -type f -iname \*.html -exec grep -s "Web sites" {} \;

... the -s option in grep suppresses errors about nonexistent or unreadable files. And {} is a placeholder for each file found. The semicolon ';' is escaped with a backslash so that it is not interpreted by the bash shell.

Note: You can use the -exec option to combine any command in Linux with the find command. Some of the useful things you can do with it are as follows:

Compress log files on an individual basis
$ find /var -iname \*.log -exec bzip2 {} \;

Find all files which belong to user lal and change their ownership to ravi
# find / -user lal -exec chown ravi {} \;

Note: You can also use xargs command instead of the -exec option as follows:
$ find /var -iname \*.log | xargs bzip2
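
If some of the file names could contain spaces, a safer variant (a sketch) uses null-separated names:

$ find /var -iname \*.log -print0 | xargs -0 bzip2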

Find all files which do not belong to any user:
$ find . -nouser

Find files which have permissions rwx for user and rw for group and others:
$ find . -perm 766

... and then list them.

$ find . -perm 766 -exec ls -l {} \;

Find all directories with name music_files
$ find . -type d -iname \*music_files\*

Suppose you want to find files of size between 700k and 1000k, do the following:
$ find . \( -size +700k -and -size -1000k \)

And how about getting the output of the above command with the size of each file listed?
$ find . \( -size +700k -and -size -1000k \) -exec du -Hs {} \; 2>/dev/null

... here, the '2>/dev/null' means all the error messages are discarded or suppressed.

You can also limit your search by file system type. For example, to restrict search to files residing only in the NTFS and VFAT filesystem, do the following:
$ find / -maxdepth 2 \( -fstype vfat -or -fstype ntfs \) 2> /dev/null

These are the most common uses of the find command. You can see additional uses by reading the find manual.

Sunday 18 September 2005

Routing , NAT and Gateways in Linux

A router is a device that directs network traffic destined for an entirely different network in the right direction. For example, suppose your network has the IP address range 192.168.1.0/24 and you also have a different network with addresses in the range 192.168.2.0/24 (both are private 'Class C' network addresses). For your computer on the 192.168.1.0/24 network to communicate directly with a computer on the 192.168.2.0/24 network, you need an intermediary to direct the traffic to the destination network. This is achieved by a router.

Configuring Linux as a router

Linux can be effectively configured to act as a router between two networks. To activate routing functionality, you enable IP forwarding in Linux. This is how you do it:

# echo "1" > /proc/sys/net/ipv4/ip_forward

This enables IP forwarding immediately. To make the change persistent across reboots, edit the file /etc/sysctl.conf and enter the following line:

#FILE : /etc/sysctl.conf
...
net.ipv4.ip_forward = 1
...

Optionally, after editing the above file, you may execute the following command to apply the settings right away:

# sysctl -p

Note: For your Linux machine to act as a router, you need two ethernet cards, or you can configure a single ethernet card with multiple IP addresses.

What is a gateway?
Any device which acts as the path from your network to another network or to the internet is considered a gateway. Let me explain this with an example: suppose your computer, machine_B, has the address 192.168.0.5 with the default netmask, and another computer in your network (machine_A) with the IP address 192.168.0.1 is connected to the internet using a USB cable modem. Now if you want machine_B to send or receive data destined for an outside network, a.k.a. the internet, it has to direct it to machine_A first, which forwards it to the internet. So machine_A acts as the gateway to the internet. Each machine needs a default gateway to reach machines outside the local network. You can set the gateway on machine_B to point to machine_A as follows:

# route add default gw machine_A

Or if DNS is not configured...

# route add default gw 192.168.0.1

Now you can check if the default gateway is set on machine_B as follows:

# route -n

In the output of route -n, machine_B has the default gateway set to 192.168.0.1, and the default route carries the flags UG - 'G' means gateway and 'U' means the route is up.

Note: Additional routes can be set using the route command. To make them persistent across reboots, you may add them to the /etc/sysconfig/static-routes file.
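
For example, a static route to the 192.168.2.0/24 network through the gateway 192.168.1.1 (the addresses here are only illustrative) can be added with:

# route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.1.1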

What is NAT ?
Network Address Translation (NAT) is a capability of the Linux kernel whereby the source or destination address/port of a packet is altered while in transit.
This is used in situations where multiple machines need to access the internet with only one official IP address available. A common name for this is IP masquerading. With masquerading, your router acts as an OSI layer 3 or layer 4 proxy: Linux keeps track of each packet's journey so that during transmission and receipt of data the content of the session remains intact. You can easily implement NAT on your gateway machine or router by using iptables, which I will explain in another post.
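
As a teaser, a minimal masquerading rule on the gateway might look like the following sketch (assuming eth0 is the interface facing the internet):

# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE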

Related Posts :
How to install network card in Linux
How to assign an IP address
Setting up multiple IP addresses on a single NIC

Thursday 15 September 2005

How to setup SSH keys and why?

Secure SHell (SSH) as the name indicates is a secure form of connecting to a remote machine. It is secure because all data transfer via SSH happens in encrypted form. SSH comes with a collection of tools. For instance, you have -
  • scp - which is used to copy files between remote machines securely.
  • sftp - Which is secure FTP , file transfer.
  • And of course SSH's more common duty being to let users login securely to a remote machine.
SSH comes in two protocol versions, i.e. SSH 1 and SSH 2. Both are provided by the OpenSSH package; I am going to use the more recent SSH 2 here, so make sure you have the OpenSSH package installed on your machine.
SSH makes use of public and private keys to verify who you are, and the keys are generated using either the RSA or the DSA algorithm. Of course, you can also ssh to a remote machine without going through the trouble of creating public and private keys, but it will be less secure.

Figure: Asks for password when ssh(ing) without public and private keys.

Creation of SSH public and private key
To create an SSH key, you make use of the ssh-keygen program as follows:

$ ssh-keygen -t rsa -b 1024

It will ask a few details and finally ask you to enter a secret pass phrase. After you have entered the pass phrase, it will generate two keys: a public key named 'id_rsa.pub' and a private key named 'id_rsa'. These keys are stored in a hidden directory called .ssh in your home folder.

Figure: Creating the public private key pair.

Next you have to copy the just created public portion of your key to the remote machine. Let us assume that your local machine is local_mc and the remote machine to which you want to SSH to is remote_mc . You can use scp to copy the key to the remote machine as follows:

$ scp ~/.ssh/id_rsa.pub remote_mc:.ssh/authorized_keys

Above, I have copied the id_rsa.pub key to the .ssh folder of the remote machine and named it authorized_keys. Now remote_mc is ready to accept your ssh connection.
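
Note that the scp command above overwrites any authorized_keys file that already exists on the remote machine. If the remote account already has keys, appending is safer; one sketch of doing that is:

$ cat ~/.ssh/id_rsa.pub | ssh remote_mc 'cat >> ~/.ssh/authorized_keys'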

Note: Usually you are not the administrator of the remote machine, in which case you have to email your public key to the administrator of remote_mc. He will first check if the key is valid by entering the command:

# ssh-keygen -l -f the_key_you_send.pub

And once he is satisfied, he will include the key in the .ssh directory of your account on remote_mc. Now when you want to log in to remote_mc via ssh, it will ask you for the pass phrase.
Note: This is significant because the pass phrase is only used to unlock your private key locally and is NOT transmitted across the network the way a password is. And the key pair you generated on local_mc is tied not only to the user account on your local machine but also to the machine itself, since the private key lives there.

Figure: Asks for the pass phrase instead of password.

Which means, you can log in to the remote_mc only from the local_mc using that public key and not from any other machine on the network.

Password-less logins using SSH
All this is fine; but what happens when you have to ssh to remote_mc frequently? It becomes tedious and error-prone to type the pass phrase each and every time. There is a way to circumvent this issue.
You can use a tool called ssh-agent. When you try to ssh to remote_mc from local_mc, the ssh agent handles the key and verifies your identity on your behalf. So you need only give the pass phrase once, when you add the key to the agent; from then on, the agent answers for you and you are automatically logged on to remote_mc.
These are the steps required for password less logins.
  • Start ssh-agent in the command line.
    $ exec ssh-agent /bin/bash
    $_

  • Add your identity to the ssh-agent using the ssh-add tool.

    $ ssh-add ~/.ssh/id_rsa
When you enter the above command, it will ask for your pass phrase, which you have to provide. From here on, the ssh-agent will verify your identity and you can ssh to the remote machine without entering the pass phrase.

Figure: Using ssh-add to add your identity to the ssh agent.

$ ssh -l skinner remote_mc

From now on you don't have to use the password or pass phrase to ssh to the remote_mc machine.
Also Read:
My previous post on Secure SHell.

Prohibiting users from shutting down or rebooting the machine

If you are allowing the general public access to your computers (for example, in a cyber cafe), then you will be interested in restricting users from shutting down or rebooting your Linux machine. The following are the steps needed to accomplish this:

Disable access through the Action Menu (Applicable to GNOME desktop)
Run the gconf-editor program on the GNOME desktop,

# gconf-editor

and uncheck the entry for /apps/gnome-session/options/logout_prompt.

Disable shutdown and reboot commands at the login screen
The gdm daemon is responsible for managing the login screen in X. Edit the /etc/X11/gdm/gdm.conf file, and set the 'SystemMenu' directive to 'false'.
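
A sketch of the relevant gdm.conf fragment (the directive normally lives in the [greeter] section):

#File: /etc/X11/gdm/gdm.conf
[greeter]
SystemMenu=false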

Note: The gdm.conf file is liberally commented and contains a lot of configuration parameters that modify how the system handles X session logins. For example, if you want the system to log in automatically to a user's account after a set interval, you set the 'TimedLoginEnable' parameter to true.

Prevent users from executing these commands in the console
Rename the reboot, poweroff, and halt files under the /etc/security/console.apps/ directory.
And finally ...

Disable the Ctrl+Alt+Del key combination
Comment out the ctrlaltdel line in the /etc/inittab file so that it reads as follows:

# FILE: /etc/inittab
# ca::ctrlaltdel:/sbin/shutdown -t3 -r now

This will disable the Ctrl+Alt+Del key sequence. Now only root can power off or reboot the machine.

Also read:
Give selective superuser powers to users.

Tuesday 13 September 2005

How to change MAC address

Changing the MAC address of a machine is called spoofing or faking a MAC address. In Linux, you can change the MAC address of your machine. This is how it is done.

How to change MAC address in Linux


First find the physical MAC address of your machine by running the following command:
$ ifconfig -a | grep HWaddr
eth0 Link encap:Ethernet HWaddr 00:80:48:BA:d1:20

The hexadecimal string after HWaddr (00:80:48:BA:d1:20) is my machine's MAC address. Yours will be different. Learn how to use the ifconfig Linux command.

Next, login as root in Linux and enter the following commands -

# ifconfig eth0 down
# ifconfig eth0 hw ether 00:80:48:BA:d1:30
# ifconfig eth0 up
# ifconfig eth0 |grep HWaddr

Note above that I have changed the MAC address to a different number: 00:80:48:BA:d1:30 is the new MAC address I have provided for my Linux machine. You can choose any 48-bit hexadecimal address as your MAC address.

Why you should change MAC address of your Linux machine


These are the reasons you should change the MAC address of your machine.
  • For privacy - for instance, when you are connecting to a Wi-Fi hotspot.
  • To ensure interoperability. Some internet service providers bind their service to a specific MAC address; if the user then changes their network card or intends to install a router, the service won't work anymore. Changing the MAC address of the new interface will solve the problem.

Caveats to Changing MAC address


Whether in Linux, Windows, Mac OS X, or another operating system, changing the MAC address this way is only temporary. Once you reboot your machine, the operating system reverts to the physical MAC address burnt into your network card rather than the MAC address you set.

Bash Completion - Makes life easier for Linux users

One thing that really makes working in the command line in Linux a pleasure is the various in-built shortcuts and name completion features in Bash - the default shell in Linux.

But one grouse I always had was that it was really difficult to remember all the options each command had. For example, 'find' comes with numerous options which I found difficult to memorize, so I had to resort to reading the man page each time I used the command. Now you can enhance the bash shell to list the options that can be used with a command. For that, you should download and install an add-on package called bash-completion. I use Fedora Core 2, but if you are using a more recent Linux distribution, it might be installed by default on your machine.
In Debian based Linux distributions, you may install it using the following command:
# apt-get install bash-completion
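Depending on the distribution, you may also need to source the completion file from your shell startup; a sketch, assuming the usual Debian location of /etc/bash_completion:

#FILE: .bashrc
[ -f /etc/bash_completion ] && . /etc/bash_completion
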
After installing the bash-completion package, fire up a terminal and type:

$ grep --

... followed by two TABs, and you get all the options that can be passed to the grep command (see figure). This works for any command in Linux. Now you no longer have to remember all those options.

Figure: Bash completion listing the options of the grep command.
Also read:
Bash Shell Shortcuts
Special Shell Variables

Friday 9 September 2005

Apache : Name-based Vs IP Based Virtual Hosting

Often when you attend interviews for network administration related jobs, one question you may encounter while discussing web servers is the difference between name-based and IP-based virtual hosting. Here I will explain the difference between the two.

In IP-based virtual hosting, you are running more than one web site on the same server machine, but each web site has its own IP address. In order to do this, you have to first tell your operating system about the multiple IP addresses. See my post on configuring multiple IP addresses on a single NIC. You also need to put each IP in your DNS, so that it will resolve to the names that you want to give those addresses.

In name-based virtual hosting, you host multiple websites on the same IP address. For this to succeed, you have to put more than one DNS record for your IP address in the DNS database. This is done using CNAME records in BIND; you can have as many CNAMEs as you like pointing to a particular machine. Of course, you also have to uncomment the NameVirtualHost directive in the httpd.conf file and point it to the IP address of your machine.

#FILE: httpd.conf
...
NameVirtualHost 192.168.0.1
...
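
Below the NameVirtualHost directive you then add one <VirtualHost> block per site. A minimal sketch (the server names and paths are just examples) might be:

#FILE: httpd.conf
...
<VirtualHost 192.168.0.1>
    ServerName www.site1.com
    DocumentRoot /var/www/site1
</VirtualHost>

<VirtualHost 192.168.0.1>
    ServerName www.site2.com
    DocumentRoot /var/www/site2
</VirtualHost>
...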

This excellent article on Serverwatch.com explains in detail the configuration details of both types of virtual hosting in apache webserver.

Thursday 8 September 2005

Linux Eyecandy



This is my new Linux desktop featuring actress Angelina Jolie. I got this wallpaper at WallpaperStock, which contains a fantastic collection, updated daily.

Wednesday 7 September 2005

Implementing DNS on Linux - Part III

This is the third and final part of my post on DNS. You can read part I and part II if you haven't done so yet, and then come back to this post.
Check BIND Syntax with these utilities
If there is a syntax error in /etc/named.conf or in the /var/named/* files, the BIND server will fail to start. There are two utilities that come along with BIND which help you check for syntax errors in these files. They are:
  • named-checkconf - This script checks the /etc/named.conf file for any syntax errors.
  • named-checkzone - This utility checks for syntax errors in a specific zone file.
# named-checkzone mysite.com /var/named/mysite.com.zone
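
named-checkconf is run the same way against the main configuration file:

# named-checkconf /etc/named.conf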

BIND Utilities
Many useful utilities are included in the bind-utils RPM package. Some of these are as follows:
host - This is a utility used to gather host/domain information. It is capable of showing all the information about a host and / or listing an entire domain.

# host -a www.mysite.com


... lists all information about www.mysite.com

# host -al mysite.com


... shows all information for every host in the mysite.com domain. Listing an entire domain's contents is known as performing a "full zone transfer".

dig - short for domain information groper - is a utility used to send queries directly to the name server, bypassing any system resolver libraries. This direct access is useful for problem isolation. The output of dig is in zone file format.
Some examples using dig are as follows:

$ dig @ns mysite.com
$ dig mail.mysite.com
$ dig -x 192.168.0.254
$ dig www.yahoo.com


Note: dig expects to be given FQDNs for lookups, while the host utility will also use the search information in the /etc/resolv.conf file.

Additional help on configuring BIND
If you have installed BIND software on your machine, you can find additional docs on BIND at these locations :
BIND features - /usr/share/doc/bind-version/README
Migration to BIND from other DNS servers - /usr/share/doc/bind-version/misc/migration
BIND ver 9 administration manual - /usr/share/doc/bind-version/arm/Bv9ARM.html
Also visit the BIND home page.

Tuesday 6 September 2005

Implementing DNS on Linux - Part II

In my previous post, Implementing DNS on Linux - Part I, I explained the syntax of the /etc/named.conf file. In this post, I will explain the rest of the steps needed to implement DNS in Linux using BIND.

Zone Files
Zone files reside in the /var/named/ directory. These are the files named by the 'file' directive in the /etc/named.conf file.

For example, in Part I of this post, I created a zone called mysite.com whose file directive names "mysite.com.zone". So I move to the /var/named/ directory and create a file by that name:

# touch mysite.com.zone

Similarly, for each zone you have created in the /etc/named.conf file, you should have a corresponding file in /var/named/ with the same name as that given in its file directive.

Syntax of Zone File
  • The file begins with $TTL (Time To Live), which determines the default length of time, in seconds, for which you want resolving servers to cache your zone's data.
  • First resource record is zone's Start Of Authority (SOA).
  • Zone data in additional resource records.
  • Fully qualified domain names (FQDN) in zone file must end with a dot (.) .
    BIND assumes that names which don't end with a dot should have the current domain appended to them. Always use a dot at the end of a name that is fully qualified.
  • Semicolons (;) in database files signify a comment that extends to the end of the line.
Example :

; FILE : mysite.com.zone
$TTL 86400 ; Time to Live in Seconds

@ IN SOA ns.mysite.com. root.mysite.com. (
20011042501 ; Serial Number.
300 ; refresh.
60 ; retry
1209600 ; expire
43200 ; minimum TTL for negative answers.
)
...


The @ is interpreted as the name of the originating domain - mysite.com in the above example. The @ itself is not mandatory, but the domain must be indicated. The values of fields between the brackets, except for the first, are time periods.

Serial numbers - are conventionally based on ISO dates (YYYYMMDDnn). Every time the data in the zone is changed, the serial number must be increased so that the slave servers know the zone has changed.

Refresh - Is the delay time that slave name servers should wait between checking the master name server's serial number for changes. A good value is one hour.

Retry - is the delay time that a slave name server should wait to refresh its database after a refresh has failed. One minute is a good value.

Expire - is the upper time limit that a slave name server should use in serving DNS information for lack of a refresh from the master name server. A good value is 7 days.

The minimum time to live for negative answers specifies how long a nameserver should cache a "no such host" response from an authoritative server of the domain. This reduces load on the server.

Note that all times are in seconds by default. However, the following units may be used:
W = Weeks
D = Days
H = Hours
M = Minutes

Use capital letters, with no space between the number and the unit.

The last string in the first line of the SOA record (root.mysite.com. in the above example) specifies the contact person for the domain. Conventionally, the responsible party's email address is used, replacing the @ with a dot.
Types of Records in a Zone File
Name Server NS
There should be an NS record for each master or slave name server serving your zone. NS records point to any slave servers that should be consulted by the client's name server if the master should fail.

; FILE : mysite.com.zone
...
@ IN NS ns.mysite.com.
mysite.com. IN NS ns1.mysite.com.
...

NS records designate the name servers to use for this domain. The zone should contain at least one NS record for a DNS server that is authoritative for the zone, and a list of slave servers that can be referenced is commonly included as well. Fully qualified names must be used in NS resource records. The @ notation allows the domain name to be taken as the originating domain for the zone.

A record - An A resource record maps a hostname - which may or may not be fully qualified - to an IP address.

; FILE : mysite.com.zone
...
mail IN A 192.100.100.5
login.mysite.com. IN A 192.100.100.6
...

PTR - These are the inverse of A records: they map an IP address to a hostname. For reverse lookups - that is, PTR records - specify the octets of the address in reverse order. For example, if the zone is defined as 100.192.in-addr.arpa, then the name server would expand the PTR reference in the example below into 6.100.100.192.in-addr.arpa. A lookup of 192.100.100.6 would find this record and return login.mysite.com.

; FILE : zone file for 100.192.in-addr.arpa (the reverse zone)
...
6.100 IN PTR login.mysite.com.
...

MX - These records are used to define mail handlers (or exchangers) for a zone. MX records must have a positive integer listed immediately after the MX and before the host name. This integer is used by remote Mail Transport Agents (MTAs) to determine which host has delivery priority for the zone.

; FILE : mysite.com.zone
...
mysite.com. IN MX 5 mail.mysite.com.
mysite.com. IN MX 10 mymail.mysite.com.
...

Precedence is given to the mail exchanger with the lowest priority number. If that host is not up, then the mail exchanger with the next lowest number will be used. If none of the mail exchangers are up, the mail is returned to the forwarding SMTP server to be queued for later delivery.

CNAME - These records map aliases to canonical host names.

; FILE : mysite.com.zone
...
pop IN CNAME mail
ssh IN CNAME login.mysite.com.
...

Note: CNAME, A and PTR resource records comprise the bulk of resources seen in the database files. Incorrect setup of these records can cause many problems, so they should always be evaluated carefully before changes are committed.

Round-Robin load sharing through DNS
Load balancing can be achieved through the simple use of multiple A records.

; FILE : mysite.com.zone
...
www 0 IN A 192.168.1.10
www 0 IN A 192.168.1.11
www 0 IN A 192.168.1.12
...

Note: DNS traffic will increase because a TTL of 0 means the responses will never be cached.

To be Contd...