Wednesday, 27 April 2005

Compiling a linux kernel from source

Something I find really wonderful about linux is that you can compile a custom kernel for your machine, including just the features that are needed. You might wonder why anyone would want to compile a custom kernel from source when you can get a precompiled binary kernel. For one, suppose you have an old PC on which you want to run linux, and you know that you have no need for USB and PCMCIA support because you have neither on your PC. Wouldn't it be nice to recompile your kernel from source without support for these? You would be reducing the size of the kernel and optimizing it for better performance.
Of course, with the most recent hardware there is no need to recompile, because the performance gains are not noticeable. But you can get better performance if you run a custom kernel on old hardware with a limited amount of memory.
I had earlier written about the steps needed to compile a 2.4.x linux kernel. But the steps needed to compile the latest 2.6.x kernels are quite different from the 2.4.x versions. I came across an interesting article covering everything from patching a kernel to compiling both 2.4.x and 2.6.x kernels from source. You can read all about it at Digital Hermit.
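If you just want the broad strokes, the basic build sequence for a 2.6.x kernel looks something like this (run from the top of the unpacked kernel source tree; the Digital Hermit article above covers each step, and the 2.4.x differences, in detail):
# make menuconfig
# make
# make modules_install
# make install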

Monday, 25 April 2005

The /proc filesystem

Linux has become really popular as a server operating system, mainly due to its security and stability. In fact, system administrators can perform most maintenance tasks, short of hardware upgrades, without rebooting the machine. That means there is virtually no downtime suffered by a linux server. This is made possible because Linux provides various ways to change the underlying operating system values and settings while keeping the system up and running. Linux contains a virtual filesystem called /proc which system administrators can use to achieve these tasks.
The /proc filesystem is not a real filesystem: it resides only in the computer's memory and does not use any space on the hard disk. It is a map of the running kernel and its processes. The /proc filesystem is mounted on the /proc directory during system initialization and has an entry in the /etc/fstab file.
If you move into the /proc directory, you will find a lot of subdirectories and files. Some of these files give information on your computer's hardware, networking settings and activity, memory usage and so on. And there are files in the /proc/sys directory whose values you can manipulate to change various parameters of the running kernel.
Here I will describe the various files and directories residing in /proc that a system administrator could find helpful.
If you list the files under /proc, you will find that all the files have a size of zero - this is because they are not really files and directories in the typical sense. If you want to view the contents of a file in the /proc directory, you use the 'cat' command.
WARNING: Do not use 'cat' on /proc/kcore as this is a special file which is an image of the running kernel's memory at that particular moment - cat'ing this file will leave your terminal unusable.
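For example, to see the details of your processor(s), it is perfectly safe to run:
# cat /proc/cpuinfo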
Some of the key files in the top-level directory are as follows:
  • /proc/interrupts - View IRQ settings
  • /proc/cpuinfo - Information about the system's CPU(s)
  • /proc/dma - Direct Memory Access (DMA) settings
  • /proc/ioports - I/O settings.
  • /proc/meminfo - Information on available memory, free memory, swap, cache memory and buffers. You can also get the same information using the utilities free and vmstat.
  • /proc/loadavg - System load average
  • /proc/uptime - System uptime and idle time. Can also be obtained using the uptime utility.
  • /proc/version - Linux kernel version, build host, build date etc. Can also be obtained by executing `uname -a`.
Beneath the top-level /proc directory are a number of important subdirectories containing files with useful information. These include:
  • /proc/scsi - Gives information about SCSI devices
  • /proc/ide - information about IDE devices
  • /proc/net - information about network activity and configuration
  • /proc/sys - Kernel configuration parameters. The values in files in this directory are editable by root, which I will further explain below.
  • /proc/<PID> - information about the process with that process ID.
/proc/sys directory
As explained earlier, this directory holds most kernel configuration parameters and is designed to be changed while the system is running. Some of the files which are of real use to system administrators are as follows:
  • /proc/sys/fs/file-max
    This specifies the maximum number of file handles that can be allocated. If your users get errors stating that the maximum number of open files has been reached, you need to increase the value in this file (the default is 4096) to set the problem straight, as follows:
    # echo "10000" > /proc/sys/fs/file-max
  • /proc/sys/fs/super-max
    This specifies the maximum number of super block handlers. Any filesystem you mount needs to use a super block, so you could possibly run out if you mount a lot of filesystems.
    Default setting: 256
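    As with the other tunables here, you could raise the limit by echoing a new value into the file (the number below is purely illustrative):
    # echo "512" > /proc/sys/fs/super-max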
  • /proc/sys/kernel/acct
    This holds three configurable values that control when process accounting takes place based on the amount of free space (as a percentage) on the filesystem that contains the log:
    1. If free space goes below this percentage value then process accounting stops.
    2. If free space goes above this percentage value then process accounting starts.
    3. The frequency (in seconds) at which the other two values will be checked.
    To change the values in this file you echo a space-separated list of numbers into it.
    Default setting: 2 4 30
    These values stop accounting if there is less than 2 percent free space on the filesystem that contains the log, and start it again if there is 4 percent or more free space. Checks are made every 30 seconds.
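    For example, to stop accounting below 3 percent free space, restart it above 5 percent and check every 45 seconds (values picked purely for illustration), you could do:
    # echo "3 5 45" > /proc/sys/kernel/acct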
  • /proc/sys/kernel/ctrl-alt-del
    This file holds a binary value that controls how the system reacts when it receives the ctrl+alt+delete key combination. The two values represent:
    1. A zero (0) value means the ctrl+alt+delete is trapped and sent to the init program. This will allow the system to have a graceful shutdown and restart, as if you typed the shutdown command.
    2. A one (1) value means the ctrl+alt+delete is not trapped and no clean shutdown will be performed, as if you just turned the power off.
    Default setting: 0
  • /proc/sys/kernel/domainname
    This allows you to configure your network domain name. This has no default value and may or may not already be set.
  • /proc/sys/kernel/hostname
    This allows you to configure your network host name. This has no default value and may or may not already be set.
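    For example, to set the hostname (the name below is just a placeholder):
    # echo "www.myserver.com" > /proc/sys/kernel/hostname
    The domainname file is set in the same way.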
  • /proc/sys/net/ipv4/ip_forward
    This allows you to turn IP forwarding on or off. If the value in this file is "1", IP forwarding is turned on; if the value is "0", it is turned off.
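    For example, to turn IP forwarding on:
    # echo "1" > /proc/sys/net/ipv4/ip_forward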
This is only a small subset of the hundreds of configuration parameters that can be changed via the files in the /proc/sys directory.
These /proc/sys modifications are temporary, though, and are lost at system shutdown (which may be a rare event as far as servers are concerned). To manage such settings in a static and centralized fashion, you can use the "sysctl" command, which reads the values in the /etc/sysctl.conf file. The sysctl command is called at boot time by the /etc/rc.d/rc.sysinit script. So to make your kernel parameter changes permanent, just enter them in the /etc/sysctl.conf file and then execute the command:
# sysctl -p
... to make the kernel reread the changes from the /etc/sysctl.conf file.
There are two simple rules for converting between files in /proc/sys and variables in sysctl:
  • Drop the /proc/sys from the beginning.
  • Swap slashes for dots in the filenames.
These two rules will let you swap any file name in /proc/sys for any variable name in sysctl.
So /proc/sys/net/ipv4/ip_forward will become net.ipv4.ip_forward
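For example, the ip_forward setting above can be queried and set on the running kernel with:
# sysctl net.ipv4.ip_forward
# sysctl -w net.ipv4.ip_forward=1
... and made permanent with this line in /etc/sysctl.conf:
net.ipv4.ip_forward = 1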
There is an interesting article about the /proc filesystem at Linux Gazette.

Friday, 22 April 2005

Resizing Logical Volumes

This is a continuation of my earlier post Creating Logical Volumes in Linux. Here I will explain how to resize an existing logical volume. Logical volumes may be resized dynamically while preserving the data on the volume. Here is how:
Reducing a logical volume
  1. Reduce the filesystem residing on the logical volume.
  2. Reduce the logical volume.
This is achieved differently for different filesystems.

For ext2 file system

If you are using LVM 1, then both of the above steps can be accomplished by a single utility called e2fsadm.
# umount /data
# e2fsadm -L -1G /dev/my_vol_grp/my_logical_vol
# mount /data
The above command first reduces the filesystem in 'my_logical_vol' by 1 GB and then reduces my_logical_vol itself by the same amount.

If you are using LVM 2 - more recent linux distributions like Fedora use LVM 2 - then you do not have the 'e2fsadm' utility. So you have to first reduce the filesystem using 'resize2fs' and then reduce the logical volume using 'lvreduce'.
# umount /data
# resize2fs /dev/my_vol_grp/my_logical_vol 1G
# lvreduce -L 1G /dev/my_vol_grp/my_logical_vol
# mount /data
In the above case, I have reduced my file system "to" 1 GB size ...

Note: I didn't use the minus (-) sign while using resize2fs

... And then used the lvreduce command to reduce the logical volume "to" 1 GB. If I want to reduce the logical volume "by" 1 GB, then I give the same command but with "-L -1G" instead of "-L 1G".

Reiserfs file system
If you have a reiserfs filesystem, the commands are a bit different from those for ext2/ext3.
# umount /data
# resize_reiserfs -s -1G /dev/my_vol_grp/my_logical_vol
# lvreduce -L -1G /dev/my_vol_grp/my_logical_vol
# mount -t reiserfs /dev/my_vol_grp/my_logical_vol /data
XFS and JFS filesystems
As of now, there is no way to shrink these filesystems residing on logical volumes.

Growing a logical volume
The steps for growing a logical volume are the exact opposite of those for shrinking the logical volume.
  1. Enlarge the logical volume first.
  2. Then resize the filesystem to the new size of your logical volume.
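For an ext2/ext3 filesystem, the sequence might look something like this (the size is illustrative; I unmount first since older versions of resize2fs cannot grow a mounted filesystem, and e2fsck is required before an offline resize):
# umount /data
# lvextend -L +1G /dev/my_vol_grp/my_logical_vol
# e2fsck -f /dev/my_vol_grp/my_logical_vol
# resize2fs /dev/my_vol_grp/my_logical_vol
# mount /data
When run without an explicit size, resize2fs grows the filesystem to fill the enlarged logical volume.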
Update (July 22nd 2005) : I came across this very interesting article on LVM at RedHat Magazine which I found really informative.

Thursday, 21 April 2005

Creating a LVM in Linux

I am sure anyone who has used windows (2000 and above) has come across the term dynamic disks. Linux/Unix also has its own dynamic disk management, called LVM.
What is an LVM ?
LVM stands for Logical Volume Manager, which is the fundamental way to manage UNIX/Linux storage systems in a scalable manner. An LVM abstracts disk devices into pools of storage space called Volume Groups. These volume groups are in turn subdivided into virtual disks called Logical Volumes. The logical volumes may be used just like regular disks, with filesystems created on them and mounted in the Unix/Linux filesystem tree. A logical volume can span multiple disks. Even though a lot of companies have implemented their own LVMs for *nixes, the one created by the Open Software Foundation (OSF) was integrated into many Unix systems and serves as the base for the Linux implementation of LVM.
Note: Sun Solaris ships with LVM from Veritas which is substantially different from the OSF implementation.
Benefits of Logical Volume Management
  • LVM used in conjunction with RAID can provide fault tolerance coupled with scalability and easy disk management.
  • Create a logical volume and filesystem which spans multiple disks.
    By creating virtual pools of space, an administrator can create dozens of small filesystems for different projects and add space to them as needed without (much) disruption. When a project ends, he can remove the space and put it back into the pool of free space.
Note : Before you move to implement LVMs in linux, make sure your kernel is version 2.4 or above. Otherwise you will have to recompile your kernel from source to include support for LVM.
LVM Creation
To create an LVM, we follow a three-step process.
Step One : We need to select the physical storage resources that are going to be used for LVM. Typically, these are standard partitions, but they can also be Linux software RAID volumes that we've created. In LVM terminology, these storage resources are called "physical volumes" (e.g. /dev/hda1, /dev/hda2, etc.).
Our first step in setting up LVM involves properly initializing these partitions so that they can be recognized by the LVM system. If we're adding a physical partition, this involves setting the correct partition type (usually with the fdisk command, entering the partition type 'Linux LVM' - 0x8e), and then running the pvcreate command.
# pvcreate /dev/hda1 /dev/hda2 /dev/hda3
# pvscan
The above step creates physical volumes from the 3 partitions that I want to initialize for inclusion in a volume group; pvscan then lists the physical volumes it finds.
Step Two : Creating a volume group. You can think of a volume group as a pool of storage that consists of one or more physical volumes. While LVM is running, we can add physical volumes to the volume group or even remove them.
First initialize the /etc/lvmtab and /etc/lvmtab.d files by running the following command:
# vgscan
Now you can create a volume group and assign one or more physical volumes to the volume group.
# vgcreate my_vol_grp /dev/hda1 /dev/hda2
Behind the scenes, the LVM system allocates storage in equal-sized "chunks" called extents. We can specify the extent size to use at volume group creation time. The size of an extent defaults to 4 MB, which is fine for most uses; you can use the -s flag to change it. The extent size determines the minimum size of changes which can be made to a logical volume in the volume group, and the maximum size of logical and physical volumes in the volume group. A logical volume can contain at most 65534 extents, so the default extent size (4 MB) limits the volume to about 256 GB; a size of 1 TB would require extents of at least 16 MB. So to accommodate a 1 TB size, the above command can be rewritten as:
# vgcreate -s 16M my_vol_grp /dev/hda1 /dev/hda2
You can check the result of your work at this stage by entering the command:
# vgdisplay
This command displays the total physical extents in the volume group, the size of each extent, the allocated size and so on.
Step Three : This step involves the creation of one or more "logical volumes" using our volume group storage pool. The logical volumes are created from volume groups and may have arbitrary names. The size of the new volume may be requested in either extents (-l switch) or in KB, MB, GB or TB (-L switch), rounding up to whole extents.
# lvcreate -l 50 -n my_logical_vol my_vol_grp
The above command allocates 50 extents of space in my_vol_grp to the newly created my_logical_vol. The -n switch specifies the name of the logical volume we are creating.
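If you would rather specify the size directly instead of counting extents, the -L switch does the same job; with the default 4 MB extents, the equivalent of the command above would be:
# lvcreate -L 200M -n my_logical_vol my_vol_grp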
Now you can check if you got the desired results by using the command:
# lvdisplay
which shows information about your newly created logical volume.
Once a logical volume is created, we can go ahead and put a filesystem on it, mount it, and start using the volume to store our files. For creating a filesystem, we do the following:
# mke2fs -j /dev/my_vol_grp/my_logical_vol
The -j signifies journaling support for the ext3 filesystem we are creating.
Mount the newly created filesystem:
# mount /dev/my_vol_grp/my_logical_vol /data
Also do not forget to append the corresponding line in the /etc/fstab file:
#File: /etc/fstab
/dev/my_vol_grp/my_logical_vol /data ext3 defaults 0 0
Now you can start using the newly created logical volume, accessible at the /data mount point.
Next : Resizing Logical Volumes

Wednesday, 20 April 2005

The Process of Chip making explained

What is the use of an OS (be it windows or linux or any other) if you do not have the right hardware to install it on? Excluding MacOS, which runs on the PowerPC architecture, almost all linux and windows OSes run on PCs that use either Intel, IBM or AMD processors. Have you ever wondered what these processors are made of and how such powerful processors are made?
These powerful processors are made on silicon wafers. Experts say that AMD, IBM, Intel and other heavy hitters all employ the same principles when it comes to manufacturing devices on a silicon wafer. Major differences in the production process arise from the type and number of process steps and the choice of tools and materials. Did you know that AMD's latest processor, the Athlon 64, contains 105.9 million transistors?
I came across an interesting article at Tom's Hardware which gives the exclusive inside story on AMD's chip production, interspersed with lots of pictures of their manufacturing facility.
Really Interesting !!

Tuesday, 19 April 2005

Prevent a non-root user from shutting down or rebooting the system

To prevent all non-root users from using the shutdown, reboot or halt commands, do the following:

  1. In the file /etc/X11/gdm/gdm.conf , change the line that reads :
    SystemMenu=true
    to
    SystemMenu=false
  2. In the file /etc/inittab, change the line that reads :
    ca:ctrlaltdel:/sbin/shutdown -t3 -r now
    to
    ca:ctrlaltdel:echo "You are not authorized to turn off the machine"
  3. In the directory /etc/security/console.apps/, delete the files reboot, poweroff and halt.
  4. Remove the file /usr/bin/poweroff
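For steps 3 and 4 above, the actual commands would be along these lines:
# rm /etc/security/console.apps/reboot /etc/security/console.apps/poweroff /etc/security/console.apps/halt
# rm /usr/bin/poweroff
And after editing /etc/inittab in step 2, make init reread the file:
# init q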

Now only the root user will be able to turn off or reboot the machine.

Monday, 18 April 2005

The GPL FAQ

If you have been using linux, or even just reading your daily newspaper, you might have come across terms like Free Software Foundation and GPL, and statements like "linux is GPLed". Have you ever wondered what GPL stands for? And do you know if there is a difference between GPLed software, freeware and open source software?

GPL stands for GNU General Public License. It gives the end user the right to use, view, modify and share the source code of any software which has been released under this licence. And yes, Linux is free because it is released under the GPL. You can find answers to all these questions and more at the official FSF FAQ on the GNU GPL.

Sunday, 17 April 2005

Window Managers in Linux

In Linux, the concept of a GUI is different from that of the Microsoft Windows OSes. While in operating systems like Windows 98/XP the GUI is permanently embedded in the operating system, Linux follows a different philosophy. In Linux, the act of rendering a GUI on screen follows a client-server architecture. You have a server called XFree86 (later versions of linux ship with X.org, a fork of XFree86 that broke away over licensing issues) which listens for connections from X client applications via a network or the local loopback interface.
Linux comes with a base X Window System, which includes many cutting edge XFree86 technology enhancements such as 3D hardware acceleration support, the XRender extension for anti-aliased fonts, a modular driver based design, and support for modern video hardware and input devices.
Window managers are X client programs which are either part of a desktop environment or, in some cases, standalone. Their primary purpose is to control the way graphical windows are positioned, resized, or moved. Window managers also control title bars, window focus behavior, and user-specified key and mouse button bindings.
That said, there are a lot of window managers for linux, so the user has the flexibility to modify the look and feel of his GUI environment according to his tastes.
Some of the popular window managers in linux are as follows:
  • kwin — The KWin window manager is the default window manager for the KDE desktop environment. It is an efficient window manager which supports custom themes.
  • metacity — The Metacity window manager is the default window manager for the GNOME desktop environment. It is a simple and efficient window manager which supports custom themes.
  • mwm — The Motif window manager is a basic, standalone window manager. Since it is designed to be standalone, it should not be used in conjunction with the GNOME or KDE desktop environments.
  • sawfish — The Sawfish window manager is a full featured window manager. It can be used either standalone or with a desktop environment.
  • twm — The minimalist Tab Window Manager, which provides the most basic tool set of any of the window managers and can be used either standalone or with a desktop environment. It is installed as part of X.org.
  • fluxbox - A very popular light weight window manager.
  • fvwm - Fvwm was designed to minimize memory consumption, provide a 3-D look (similar to Motif's mwm) and provide a simple virtual desktop. Functionality can be enhanced by the use of various modules.
  • AfterStep - This is a window manager based on fvwm, designed to emulate some of the look and feel of the NeXTSTEP interface.
  • IceWM - Another lightweight window manager, with a Windows 98-like interface. These are just a few that come to mind.

There are a lot more.

So what is the difference between a window manager and a desktop ?
The difference is that desktop environments have advanced features which allow X clients and other running processes to communicate with one another, letting applications written for that environment perform advanced tasks such as drag and drop operations and cutting, copying and pasting data between different applications. They also supply their own range of integrated utilities and applications. This convenience and ease of use makes them particularly attractive to new users, which has made them very popular. A desktop environment presents a more polished front to the user. Some of the more popular desktop environments are GNOME, KDE, XFce and CDE.
You can also run a window manager standalone. For example, to start just the twm window manager from the command line, try:

# startx -e twm

This will give you a fair idea about the differences between window managers and desktop environments.

$100 Laptops running Linux

Yes!! You heard me right. This is a project brought out by MIT Media Labs to introduce advanced computing to the masses - more specifically, to schools in the developing world where students cannot afford to buy computers. The laptop will have a 12" colour screen which will show images using "electronic ink" - a technology developed at MIT Media Labs - and it will be loaded with, what else, the Linux OS. The other specifications are a 1 GB hard disk, a 500 MHz processor, lots of USB ports and WiFi, and it will use innovative power options including wind-up - all for just $100. The project is still in the design stage and is expected to be out by the end of 2006.
What makes it really interesting is that the project is sponsored by a host of reputed companies, including Google, AMD and News Corp among others. But there is a catch: it will not be available to individuals for purchase, only to government agencies which are willing to adopt a policy of one laptop per child. Good for the kids and good for linux. You can read all about the project at the MIT Media Labs website.
Check out these photos:

Figure : You can use this laptop in three different ways.

Figure : Looks Cool. Uses revolutionary technology.



Thursday, 14 April 2005

A collection of PDF viewers for GNU/Linux

A Portable Document Format (PDF) file is a self-contained, cross-platform document. In plain language, it is a file that will look the same on the screen and in print, regardless of what kind of computer or printer someone is using and regardless of what software package was originally used to create it. Although they contain the complete formatting of the original document, including fonts and images, PDF files are highly compressed, allowing complex information to be downloaded efficiently.
PDF is a file type created by Adobe Systems Incorporated, and you need a viewer to read PDF files. Since Adobe released the specifications of PDF and PostScript (another file format, used for printing) publicly, anybody interested in creating software for viewing PDF files can easily create one.
Linux being a community effort, there are a number of utilities for viewing PDF files.
The one that comes installed by default on most modern linux distributions is GPDF. GPdf is a PDF viewer for the GNOME 2 platform. It has basic PDF viewing capabilities but not many features. Another PDF viewer is XPDF. Xpdf is not much to look at, but it has a very functional interface for viewing PDF files. An advantage of XPDF over GPDF is its low memory footprint, so it loads very fast even on my machine, which has only 64 MB of RAM.
Recently GNOME has brought out another document viewer called Evince. Evince is a viewer for multiple document formats like PDF and PostScript, with support planned for formats like multi-page TIFF, DjVu, DVI and images. The plan is to replace the multiple document viewers that exist on the GNOME Desktop with a single simple application.
Other PDF viewers that come to mind are ggv (a Ghostscript viewer) and kpdf (the PDF viewer for KDE). In fact, true to the spirit of freedom of choice, there is a plethora of PDF viewers for the GNU/Linux platform, and the user can choose the one he is most comfortable with.
Until recently, Adobe had not released a version of Acrobat Reader (a popular piece of software used to view PDF files) for GNU/Linux. But that has changed: they have now released Acrobat Reader 7.0 for linux (over 30 MB to download). This is an important milestone for the GNU/Linux community, because even though Adobe has released only a binary version of Acrobat Reader, the mere act of releasing software for this platform strengthens the case that GNU/Linux on the desktop has come of age and is becoming increasingly popular among the masses. Companies that have a stake in the IT industry cannot afford to sit back and ignore this platform altogether.

Saturday, 9 April 2005

Installing linux on a Mac Mini

I have always been fascinated by the iMacs from Apple, ever since I got to use one at a friend's place. But buying one was beyond my price range until Apple released the Mac Mini - widely touted as the budget Mac that could give PCs a run for their money.

For people who don't know - Apple brings out the Mac series of machines running their own proprietary OS called OSX. This operating system is built on a Unix base (more specifically, FreeBSD) and so can run most Unix/Linux command line tools. Also, unlike the PC, which is built on the Intel x86 architecture, Mac computers use PowerPC processors (the G4 and G5 chips). Frankly speaking, each Mac could be considered a piece of art, considering the elegant looks and the latest technologies used in them.

My writing about this on a blog dedicated to linux is significant because I have always wondered whether Linux could be installed on a Mac. Recently, I came across an article which gives a detailed explanation of installing Debian Linux on the Mac Mini. Those of you who are thinking of buying a Mac Mini might be interested to know that you can also dual boot between Linux and OSX on the Mac Mini, just like on a PC.

Tuesday, 5 April 2005

Configuring Apache webserver to restrict access to your website

Apache is the most popular web server, with a market share of around 60%. Here I will explain a small but very useful feature of the apache web server - restricting access to (a part of) your website to only the privileged few, by implementing usernames and passwords.
There are two ways of restricting access to documents.
  1. Either by the hostname of the browser being used
  2. By asking a username and password
The first method can be used, for example, to restrict documents to machines within a company. But if the people who need access to the documents are widely dispersed, the second method is more suitable.
Here I will explain the second method - i.e. assigning usernames and passwords to the users who are authorized to access the documents. This is known as user authentication.
Setting up user authentication takes two steps:
  1. Create a file containing usernames and passwords - Apache has a utility called htpasswd to create the file containing usernames and passwords. Here I am creating a file called 'users' in the /usr/local/etc/ directory.
Note: For security reasons, the above file should NOT be under the document root. It can be anywhere BUT the document root.
The first time you run the htpasswd utility, you run it using the -c flag as follows:
# htpasswd -c /usr/local/etc/users gopinath
The -c argument tells htpasswd to create a new users file. When you run the above command, you will be prompted to enter a password for the user gopinath, and confirm it by entering it again. Other users can be added to the existing file in the same way, except that the -c argument is not needed. The same command can also be used to modify the password of an existing user.
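For instance, adding the user kumar (who appears in the sample file below) would be:
# htpasswd /usr/local/etc/users kumar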
After adding a few users, the /usr/local/etc/users file might look like this :
gopinath:WrU90BHQai36
kumar:iABSd12QWs67
ankit:Wer56HsD12s6
The first field is the username and the second field is the encrypted password.
  2. Configure the Apache server - Open the apache server's configuration file /etc/httpd/conf/httpd.conf (your configuration file may be in a different location depending on the distribution you are using) and look for the line:
AllowOverride None 
And change it to
AllowOverride AuthConfig
If you want to protect the document root itself, create a file '.htaccess' in the top directory path and include the following lines to it:
#File: .htaccess
AuthName "Only Valid Access"
AuthType Basic
AuthUserFile /usr/local/etc/users
require valid-user
The AuthName directive specifies a realm name for this protection. Once a user has entered a valid username and password, any other resource within the same realm can be accessed with the same username and password.
The AuthType directive tells the server what protocol is to be used for authentication. At the moment, 'Basic' is the only method available. However, a new method, 'Digest', is about to be standardized, and once browsers start to implement it, Digest authentication will provide more security than Basic authentication.
AuthUserFile tells the server the location of the user file created by the htpasswd utility. A similar directive, AuthGroupFile, can be used to tell the server the location of a groups file.

Lastly, don't forget to restart the web server so that it rereads the changed configuration files:
# service httpd restart
From here on, when you try to access the protected site or directory via a web browser, you will be asked to authenticate first. Only those users whose names have been entered in the file pointed to by the AuthUserFile directive will be allowed access to your site.